
Vocoder

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 212.140.165.49 (talk) at 15:26, 12 April 2012 (RAWCLI vocoder). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

A vocoder (/ˈvoʊkoʊdər/, short for voice encoder) is an analysis/synthesis system, used to reproduce human speech. In the encoder, the input is passed through a multiband filter, each band is passed through an envelope follower, and the control signals from the envelope followers are communicated to the decoder. The decoder applies these (amplitude) control signals to corresponding filters in the (re)synthesizer.

It was originally developed as a speech coder for telecommunications applications in the 1930s, the idea being to code speech for transmission. Transmitting the parameters of a speech model instead of a digitized representation of the speech waveform saves bandwidth in the communication channel; the parameters of the model change relatively slowly, compared to the changes in the speech waveform that they describe. Its primary use in this fashion is for secure radio communication, where voice has to be encrypted and then transmitted. The advantage of this method of "encryption" is that no 'signal' is sent, but rather envelopes of the bandpass filters. The receiving unit needs to be set up in the same channel configuration to resynthesize a version of the original signal spectrum. The vocoder as both hardware and software has also been used extensively as an electronic musical instrument.

Whereas the vocoder analyzes speech, transforms it into electronically transmitted information, and recreates it, The Voder (from Voice Operating Demonstrator) generates synthesized speech by means of a console with fifteen touch-sensitive keys and a pedal, basically consisting of the "second half" of the vocoder, but with manual filter controls, needing a highly trained operator.[1][2]

Early 1970s vocoder, custom built for electronic music band Kraftwerk

Vocoder theory

The human voice consists of sounds generated by the opening and closing of the glottis by the vocal cords, which produces a periodic waveform with many harmonics. This basic sound is then filtered by the nose and throat (a complicated resonant piping system) to produce differences in harmonic content (formants) in a controlled way, creating the wide variety of sounds used in speech. There is another set of sounds, known as the unvoiced and plosive sounds, which are created or modified by the mouth in different fashions.

The vocoder examines speech by measuring how its spectral characteristics change over time. This results in a series of numbers representing these modified frequencies at any particular time as the user speaks. In simple terms, the signal is split into a number of frequency bands (the larger this number, the more accurate the analysis) and the level of signal present at each frequency band gives the instantaneous representation of the spectral energy content. Thus, the vocoder dramatically reduces the amount of information needed to store speech, from a complete recording to a series of numbers. To recreate speech, the vocoder simply reverses the process, processing a broadband noise source by passing it through a stage that filters the frequency content based on the originally recorded series of numbers. Information about the instantaneous frequency (as distinct from spectral characteristic) of the original voice signal is discarded; it was not important to preserve this for the vocoder's original use as an encryption aid, and it is this "dehumanizing" quality of the vocoding process that has made it useful in creating special voice effects in popular music and audio entertainment.

Since the vocoder process sends only the parameters of the vocal model over the communication link, instead of a point by point recreation of the waveform, it allows a significant reduction in the bandwidth required to transmit speech.
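The analysis stage described above can be sketched in a few lines of Python (NumPy only). This is a minimal illustration, not any standard codec: the frame length, band count, and toy input signal are all assumptions chosen for clarity, and FFT-bin grouping stands in for a real filter bank.

```python
import numpy as np

def band_levels(signal, fs, n_bands=16, frame_len=256):
    """Measure the per-band level of each frame: FFT bins are grouped
    into n_bands contiguous bands (a crude stand-in for a filter bank)
    and the RMS level of each band is recorded."""
    n_frames = len(signal) // frame_len
    levels = np.empty((n_frames, n_bands))
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
        for b, chunk in enumerate(np.array_split(spectrum, n_bands)):
            levels[i, b] = np.sqrt(np.mean(chunk ** 2))
    return levels

fs = 8000
t = np.arange(fs) / fs                                           # one second of audio
voice = np.sin(2 * np.pi * 200 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))  # toy "speech"
lv = band_levels(voice, fs)
print(lv.shape)   # (31, 16): 8000 samples reduced to 31 frames of 16 band levels
```

The shape of the result shows the data reduction directly: one second of raw samples collapses to a short series of slowly varying band levels.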

History

Channel vocoder schematic

Analog vocoders typically analyze an incoming signal by splitting the signal into a number of tuned frequency bands or ranges. A modulator and carrier signal are sent through a series of these tuned band pass filters. In the example of a typical robot voice the modulator is a microphone and the carrier is noise or a sawtooth waveform. There are usually between 8 and 20 bands.

The amplitude of the modulator for each of the individual analysis bands generates a voltage that is used to control amplifiers for each of the corresponding carrier bands. The result is that frequency components of the modulating signal are mapped onto the carrier signal as discrete amplitude changes in each of the frequency bands.

Often there is an unvoiced band or sibilance channel. This is for frequencies outside of analysis bands for typical speech but still important in speech. Examples are words that start with the letters s, f, ch or any other sibilant sound. These can be mixed with the carrier output to increase clarity. The result is recognizable speech, although somewhat "mechanical" sounding. Vocoders also often include a second system for generating unvoiced sounds, using a noise generator instead of the fundamental frequency.
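The modulator/carrier scheme above can be sketched in Python (NumPy only). This is a simplified model, not a production implementation: FFT-bin masking stands in for the analog bandpass filter bank, a moving-average rectifier stands in for the envelope follower, and the band edges, band count, and test signals are illustrative assumptions.

```python
import numpy as np

def bandpass(x, fs, lo, hi):
    """Crude bandpass filter: zero every FFT bin outside [lo, hi) Hz."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spectrum[(freqs < lo) | (freqs >= hi)] = 0
    return np.fft.irfft(spectrum, len(x))

def envelope(x, fs, smooth_ms=10):
    """Envelope follower: moving average of the rectified signal."""
    win = max(1, int(fs * smooth_ms / 1000))
    return np.convolve(np.abs(x), np.ones(win) / win, mode='same')

def channel_vocoder(modulator, carrier, fs, n_bands=12):
    """Impose the modulator's per-band envelopes onto the carrier's bands."""
    edges = np.geomspace(100, 3800, n_bands + 1)   # log-spaced band edges
    out = np.zeros(len(carrier))
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = envelope(bandpass(modulator, fs, lo, hi), fs)
        out += env * bandpass(carrier, fs, lo, hi)
    return out

fs = 8000
t = np.arange(fs) / fs                                    # one second
modulator = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)  # stand-in for speech
carrier = 2 * (110 * t % 1) - 1                           # 110 Hz sawtooth "robot" carrier
robot = channel_vocoder(modulator, carrier, fs)
```

Swapping the sawtooth for broadband noise gives the whispered, unvoiced character; real speech as the modulator gives the familiar robot-voice effect.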

SIGSALY (1943-1946) speech encipherment system
HY-2 Vocoder (designed in 1961) was the last generation of channel vocoder in the US.[3]

The first experiments with a vocoder were conducted in 1928 by Bell Labs engineer Homer Dudley, who was granted a patent for it on March 21, 1939.[4] The Voder (Voice Operating Demonstrator) was introduced to the public at the AT&T building at the 1939-1940 New York World's Fair.[2] The Voder consisted of a series of manually controlled oscillators, filters, and a noise source. The filters were controlled by a set of keys and a foot pedal to convert the hisses and tones into vowels, consonants, and inflections. It was a complex machine to operate, but a skilled operator could produce recognizable speech.[2][media 1]

Dudley's vocoder was used in the SIGSALY system, which was built by Bell Labs engineers in 1943. SIGSALY was used for encrypted high-level voice communications during World War II. Later work in this field has been conducted by James Flanagan.

Vocoder applications

  • Terminal equipment for Digital Mobile Radio (DMR) based systems.
  • Digital Trunking
  • DMR TDMA
  • Digital Voice Scrambling and Encryption
  • Digital WLL
  • Voice Storage and Playback Systems
  • Messaging Systems
  • VoIP Systems
  • Voice Pagers
  • Regenerative Digital Voice Repeaters

Modern vocoder implementations

Even with the need to record several frequency bands and the additional unvoiced sounds, the compression achieved by vocoder systems is impressive. Standard speech-recording systems capture frequencies from about 500 Hz to 3400 Hz, where most of the frequencies used in speech lie, typically using a sampling rate of 8 kHz (slightly greater than the Nyquist rate). The sampling resolution is typically at least 12 bits per sample (16 is standard), for a final data rate in the range of 96-128 kbit/s. However, a good vocoder can provide a reasonably good simulation of voice with as little as 2.4 kbit/s of data.
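The arithmetic behind these figures is straightforward; the snippet below is just a consistency check of the numbers above, not part of any codec specification.

```python
fs = 8000            # samples per second
bits = 16            # bits per sample (the common case)
pcm_rate = fs * bits             # 128000 bit/s, i.e. 128 kbit/s
vocoder_rate = 2400              # bit/s for a good low-rate vocoder
print(pcm_rate, pcm_rate / vocoder_rate)   # 128000, roughly a 53x reduction
```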

'Toll Quality' voice coders, such as ITU G.729, are used in many telephone networks. G.729 in particular has a final data rate of 8 kbit/s with superb voice quality. G.723 achieves slightly worse quality at data rates of 5.3 kbit/s and 6.4 kbit/s. Many voice systems use even lower data rates, but below 5 kbit/s voice quality begins to drop rapidly.

Several vocoder systems are used in NSA encryption systems.

(ADPCM is not a proper vocoder but rather a waveform codec. ITU has gathered G.721 along with some other ADPCM codecs into G.726.)

Vocoders are also currently used in psychophysics, linguistics, computational neuroscience and cochlear implant research.

Modern vocoders that are used in communication equipment and in voice storage devices today are based on the following algorithms:

Linear prediction-based vocoders

Since the late 1970s, most non-musical vocoders have been implemented using linear prediction, whereby the target signal's spectral envelope (formant) is estimated by an all-pole IIR filter. In linear prediction coding, the all-pole filter replaces the bandpass filter bank of its predecessor and is used at the encoder to whiten the signal (i.e., flatten the spectrum) and again at the decoder to re-apply the spectral shape of the target speech signal.

One advantage of this type of filtering is that the location of the linear predictor's spectral peaks is entirely determined by the target signal, and can be as precise as allowed by the time period to be filtered. This is in contrast with vocoders realized using fixed-width filter banks, where spectral peaks can generally only be determined to be within the scope of a given frequency band. LP filtering also has disadvantages in that signals with a large number of constituent frequencies may exceed the number of frequencies that can be represented by the linear prediction filter. This restriction is the primary reason that LP coding is almost always used in tandem with other methods in high-compression voice coders.
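The encoder-side whitening step can be illustrated with a small Python sketch using the autocorrelation method and the Levinson-Durbin recursion. The test tone, model order, and frame length are illustrative assumptions; real speech coders operate on short overlapping frames of actual speech.

```python
import numpy as np

def lpc(x, order):
    """All-pole (LPC) coefficients a[0..order], with a[0] = 1, via the
    autocorrelation method and the Levinson-Durbin recursion."""
    r = np.correlate(x, x, mode='full')[len(x) - 1:][:order + 1]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                      # reflection coefficient
        new_a = a.copy()
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        a = new_a
        err *= 1 - k * k                    # prediction error shrinks each step
    return a

fs = 8000
t = np.arange(2048) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 440 * t) + 0.01 * rng.standard_normal(len(t))  # toy signal
a = lpc(x, order=8)
residual = np.convolve(x, a)[:len(x)]   # encoder side: whiten (flatten) the signal
print(np.var(residual) / np.var(x))     # far below 1: the filter captured the spectrum
```

The decoder reverses this: exciting the all-pole filter 1/A(z) with the transmitted residual (or a modeled excitation) re-applies the spectral envelope.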

RALCWI vocoder

Robust Advanced Low Complexity Waveform Interpolation (RALCWI) technology uses proprietary signal decomposition and parameter encoding methods to provide high voice quality at high compression ratios. The voice quality of RALCWI-class vocoders, as estimated by independent listeners, is similar to that of standard vocoders running at bit rates above 4000 bit/s. The Mean Opinion Score (MOS) for this vocoder is about 3.5-3.6, determined by a paired-comparison method in listening tests of the developed vocoder against standard vocoders.[citation needed]

The RALCWI vocoder operates on a frame-by-frame basis. Each 20 ms source voice frame consists of 160 samples of linear 16-bit PCM sampled at 8 kHz. The voice encoder performs voice analysis at high time resolution (8 times per frame) and forms a set of estimated parameters for each voice segment. All of the estimated parameters are quantized into 41-, 48- or 55-bit frames using vector quantization (VQ) of different types. All of the vector quantizers were trained on a mixed multi-language voice base containing voice samples in both Eastern and Western languages.
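The stated frame parameters imply the bit rates directly; the check below is simple arithmetic from the figures above, not from any published RALCWI specification.

```python
frame_ms = 20
samples = 160                        # 8 kHz * 0.020 s per frame
frames_per_s = 1000 // frame_ms      # 50 frames per second
assert samples == 8000 * frame_ms // 1000
for bits in (41, 48, 55):
    print(bits, 'bits/frame ->', bits * frames_per_s, 'bit/s')
# 41 -> 2050, 48 -> 2400, 55 -> 2750 bit/s
```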

The Waveform-Interpolative (WI) vocoder was developed at AT&T Bell Laboratories around 1995 by W.B. Kleijn, and a low-complexity version was subsequently developed by AT&T for the DoD secure vocoder competition. Notable enhancements to the WI coder were made at the University of California, Santa Barbara. AT&T holds the core patents related to WI, and other institutes hold additional patents. Using these patents as part of a WI coder implementation requires licensing from all IPR holders.

The RALCWI vocoder is the result of a co-operation between CML Microcircuits and SPIRIT DSP, combining CML's 39-year history of developing mixed-signal semiconductors for professional and leisure communication applications with SPIRIT's experience in embedded voice products.

Voice effects in music

For musical applications, a source of musical sounds is used as the carrier, instead of extracting the fundamental frequency. For instance, one could use the sound of a synthesizer as the input to the filter bank, a technique that became popular in the 1970s.

Musical history

One of the earliest people to recognize the potential of the vocoder and Voder for electronic music may have been Werner Meyer-Eppler, a German physicist, experimental acoustician and phonetician. In 1949, he published a thesis on electronic music and speech synthesis from the viewpoint of sound synthesis,[10] and in 1951, he joined the successful proposal to establish the WDR Cologne Studio for Electronic Music.[11]

Siemens Synthesizer (c. 1959) at the Siemens Studio for Electronic Music, one of the first attempts to use a vocoder to create music

One of the first attempts to use a vocoder to create music may have been the "Siemens Synthesizer" at the Siemens Studio for Electronic Music, developed between 1956 and 1959.[12][media 2]

In 1968, Robert Moog developed one of the first solid-state musical vocoders, for the electronic music studio of the University at Buffalo.[13]

In 1969, Bruce Haack built a prototype vocoder, named "Farad" after Michael Faraday.[14] It was featured on his rock album The Electric Lucifer, released in the same year.[15][media 3]

In 1970 Wendy Carlos and Robert Moog built another musical vocoder, a 10-band device inspired by the vocoder designs of Homer Dudley. It was originally called a spectrum encoder-decoder, and later referred to simply as a vocoder. The carrier signal came from a Moog modular synthesizer, and the modulator from a microphone input. The output of the 10-band vocoder was fairly intelligible, but relied on specially articulated speech. Later improved vocoders use a high-pass filter to let some sibilance through from the microphone; this ruins the device for its original speech-coding application, but it makes the "talking synthesizer" effect much more intelligible.

Carlos and Moog's vocoder was featured in several recordings, including the soundtrack to Stanley Kubrick's A Clockwork Orange in which the vocoder sang the vocal part of Beethoven's "Ninth Symphony". Also featured in the soundtrack was a piece called "Timesteps," which featured the vocoder in two sections. "Timesteps" was originally intended as merely an introduction to vocoders for the "timid listener", but Kubrick chose to include the piece on the soundtrack, much to the surprise of Wendy Carlos.[citation needed]

Kraftwerk's Autobahn (1974) was one of the first successful pop/rock albums to feature vocoder vocals. Another of the early songs to feature a vocoder was "The Raven" on the 1976 album Tales of Mystery and Imagination by progressive rock band The Alan Parsons Project; the vocoder also was used on later albums such as I Robot. Following Alan Parsons' example, vocoders began to appear in pop music in the late 1970s, for example, on disco recordings. Jeff Lynne of Electric Light Orchestra used the vocoder in several albums such as Time (featuring the Roland VP-330 Plus MkI). ELO songs such as "Mr. Blue Sky" and "Sweet Talkin' Woman" both from Out of the Blue (1977) use the vocoder extensively. Featured on the album are the EMS Vocoder 2000W MkI, and the EMS Vocoder (-System) 2000 (W or B, MkI or II).

Giorgio Moroder made extensive use of the vocoder on the 1975 album Einzelganger and on the 1977 album From Here to Eternity. Another example is Pink Floyd's album Animals, where the band put the sound of a barking dog through the device. Vocoders are often used to create the sound of a robot talking, as in the Styx song "Mr. Roboto". It was also used for the introduction to the Main Street Electrical Parade at Disneyland.

Vocoders have appeared on pop recordings from time to time ever since, most often simply as a special effect rather than a featured aspect of the work. However, many experimental electronic artists of the New Age music genre often utilize vocoder in a more comprehensive manner in specific works, such as Jean Michel Jarre (on Zoolook, 1984) and Mike Oldfield (on QE2, 1980 and Five Miles Out, 1982). There are also some artists who have made vocoders an essential part of their music, overall or during an extended phase. Examples include the German synthpop group Kraftwerk, Stevie Wonder ("Send One Your Love", "A Seed's a Star") and jazz/fusion keyboardist Herbie Hancock during his late 1970s period.

In 1982 Neil Young used a Sennheiser Vocoder VSM201 on six of the nine tracks on Trans.[16]

Voice effects

"Robot voices" became a recurring element in popular music during the 20th century. Apart from vocoders, several other methods of producing variations on this effect include the Sonovox, talk box, Auto-Tune,[media 4] linear prediction vocoders, speech synthesis,[media 5][media 6] ring modulation and comb filtering.

Vocoders are used in television production, filmmaking and games, usually for robots or talking computers. The Cylons from Battlestar Galactica used an EMS Vocoder 2000[16] to create their voice effects. The 1980 version of the Doctor Who theme has a section generated by a Roland SVC-350 Vocoder.

Synthesizer voice

In 1972, Isao Tomita's first electronic music album, Electric Samurai: Switched on Rock, was an early attempt at applying speech synthesis techniques through a vocoder[citation needed] in electronic rock and pop music. The album featured electronic renditions of contemporary rock and pop songs, while utilizing synthesized voices in place of human voices. In 1974, he utilized synthesized voices again in his popular classical music album Snowflakes are Dancing, which became a worldwide success and helped popularize electronic music.[17]


References

  1. ^ "Wendy Carlos Vocoder Q&A". Wendy Carlos.
  2. ^ a b c "Homer Dudley's Speech Synthesisers, "The Vocoder" (1940) & "Voder"(1939)". Electronic Musical Instrument 1870 - 1990. 120 Years of Electronic Music (120years.net).
  3. ^ "HY-2 Vocoder". Crypto Machines.
  4. ^ Homer Dudley. Signal Transmission US Patent No. 2151019, March 21, 1939. (Filed Oct. 30, 1935)
  5. ^ Voice Age
  6. ^ Compandent
  7. ^ Digital Voice Systems Inc.
  8. ^ DSP Innovations Inc.
  9. ^ TWELP
  10. ^ Meyer-Eppler, Werner (1949), Elektronische Klangerzeugung: Elektronische Musik und synthetische Sprache, Bonn: Ferdinand Dümmlers
  11. ^ Sonja Diesterhöft (2003), "Meyer-Eppler und der Vocoder", Seminars Klanganalyse und -synthese (in German), Fachgebiet Kommunikationswissenschaft, Institut für Sprache und Kommunikation, Berlin Institute of Technology, archived from the original on 2008-03-05
  12. ^ "Das Siemens-Studio für elektronische Musik von Alexander Schaaf und Helmut Klein" (in German). Deutsches Museum.
  13. ^ Harald Bode (October 1984). "History of Electronic Sound Modification". J. of Audio Engineering Society. 32 (10): 730–739.
  14. ^ Bruce Haack - Farad: The Electric Voice (Media notes). Stones Throw Records LLC. 2010.
  15. ^ "Bruce Haack's Biography". Bruce Haack Publishing.
  16. ^ a b Dave Tompkins (2010). How to Wreck a Nice Beach: The Vocoder from World War II to Hip-Hop, The Machine Speaks. Melville House. ISBN 978-1-933633-88-6; 2011 paperback ISBN 978-1-61219-093-8.
  17. ^ Mark Jenkins (2007), Analog synthesizers: from the legacy of Moog to software synthesis, Elsevier, pp. 133–4, ISBN 0-240-52072-6, retrieved 2011-05-27
Multimedia references
  1. ^ One Of The First Vo(co)der Machine (Motion picture). c. 1939.
      A demonstration of the Voder (not the vocoder).
  2. ^ Siemens Electronic Music Studio in Deutsches Museum (multi part) (Video).
      Details of the Siemens Electronic Music Studio, exhibited at the Deutsches Museum.
  3. ^ Bruce Haack (1970). Electric to Me Turn - from "The Electric Lucifer" (Phonograph). Columbia Records.
      A sample of an early vocoder.
  4. ^ T-Pain (2005). I'm Sprung (CD Single/Download). Jive Records.
      A sample of the Auto-Tune effect (a.k.a. the T-Pain effect).
  5. ^ Earlier Computer Speech Synthesis (Audio). AT&T Bell Labs. c. 1961.
      A sample of early computer-based speech and song synthesis by John Larry Kelly, Jr. and Louis Gerstman at Bell Labs, using an IBM 704 computer. The demo song "Daisy Bell", with musical accompaniment by Max Mathews, impressed Arthur C. Clarke, who later used it in the climactic scene of the screenplay for his novel 2001: A Space Odyssey.
  6. ^ TI Speak & Spell (Video). Texas Instruments. c. 1980.
      A sample of speech synthesis.