Surround sound
Surround sound is a technique for enriching the sound reproduction quality of an audio source with additional audio channels from speakers that surround the listener (surround channels), providing sound from a full 360° in the horizontal plane (2D), as opposed to "screen channels" (centre, [front] left, and [front] right), which originate only from the listener's forward arc.
Surround sound is characterized by a listener location or sweet spot where the audio effects work best, and presents a fixed or forward perspective of the sound field to the listener at this location. The technique enhances the perception of sound spatialization by exploiting sound localization: a listener's ability to identify the location or origin of a detected sound in direction and distance. Typically this is achieved by using multiple discrete audio channels routed to an array of loudspeakers.[1]
There are various surround sound based formats and techniques, varying in reproduction and recording methods along with the number and positioning of additional channels.
Fields of application
Though cinema and soundtracks represent the major uses of surround techniques, their scope of application is broader, as surround sound permits the creation of an audio environment for all sorts of purposes. Multichannel audio techniques may be used to reproduce content as varied as music, speech, and natural or synthetic sounds for cinema, television, broadcasting, or computers. In terms of music content, for example, a live performance may use multichannel techniques in the context of an open-air concert, a musical theatre production, or a broadcast;[2] for a film, specific techniques are adapted to the movie theater or to the home (e.g. home cinema systems).[3][4] Narrative space is also content that can be enhanced through multichannel techniques. This applies mainly to cinema narratives, for example the speech of the characters of a film,[5][6][7] but may also be applied to plays for theatre, to a conference, or to voice-based commentary at an archaeological site or monument. For example, an exhibition may be enhanced with topical ambient sound of water, birds, train or machine noise. Topical natural sounds may also be used in educational applications.[8] Other fields of application include video game consoles, personal computers and other platforms.[9][10][11][12] In such applications, the content would typically be synthetic sound produced by the computer device in interaction with its user. Significant work has also been done using surround sound for enhanced situation awareness in military and public safety applications.[13]
Types of media and technologies
Commercial surround sound media include videocassettes, DVDs, and HDTV broadcasts encoded as compressed Dolby Digital and DTS, and lossless audio formats such as DTS-HD Master Audio and Dolby TrueHD on Blu-ray Disc and HD DVD, which are identical to the studio master. Other commercial formats include the competing DVD-Audio (DVD-A) and Super Audio CD (SACD) formats, and MP3 Surround. Cinema 5.1 surround formats include Dolby Digital and DTS. Sony Dynamic Digital Sound (SDDS) is an 8-channel cinema configuration which features five independent audio channels across the front, two independent surround channels, and a low-frequency effects channel. The traditional 7.1 surround speaker configuration introduces two additional rear speakers to the conventional 5.1 arrangement, for a total of four surround channels and three front channels, to create a more complete 360° sound field.
Most surround sound recordings are created by film production companies or video game producers; however, some consumer camcorders have such capability either built in or available separately. Surround sound technologies can also be used in music to enable new methods of artistic expression. After the failure of quadraphonic audio in the 1970s, multichannel music has slowly been reintroduced since 1999 with the help of the SACD and DVD-Audio formats. Some AV receivers, stereophonic systems, and computer sound cards contain integral digital signal processors and/or digital audio processors to simulate surround sound from a stereophonic source (see fake stereo).
In 1967, the rock group Pink Floyd performed the first-ever surround sound concert at "Games for May", a lavish affair at London’s Queen Elizabeth Hall where the band debuted its custom-made quadraphonic speaker system.[14] The control device they had made, the Azimuth Co-ordinator, is now displayed at London's Victoria and Albert Museum, as part of their Theatre Collections gallery.[15]
History
The first documented use of surround sound was in 1940, for the Disney studio's animated film Fantasia. Walt Disney was inspired by Nikolai Rimsky-Korsakov's operatic piece Flight of the Bumblebee to have a bumblebee featured in Fantasia and to make it sound as if it were flying through all parts of the theatre. The initial multichannel audio application was called "Fantasound", comprising three audio channels and speakers. The sound was diffused throughout the cinema, controlled by an engineer using some 54 loudspeakers. The surround effect was achieved using the sum and difference of the phase of the sound. However, this experimental use of surround sound was excluded from the film's later showings. In 1952, surround sound successfully reappeared with the film This Is Cinerama, using discrete seven-channel sound, and the race to develop other surround sound methods took off.[16][17]
In the 1950s, the German composer Karlheinz Stockhausen experimented with and produced ground-breaking electronic compositions such as Gesang der Jünglinge and Kontakte, the latter using fully discrete and rotating quadraphonic sounds generated with industrial electronic equipment in Herbert Eimert's studio at the Westdeutscher Rundfunk (WDR). Edgard Varèse's Poème électronique, created for the Iannis Xenakis-designed Philips Pavilion at the 1958 Brussels World's Fair, also used spatial audio, with 425 loudspeakers moving sound throughout the pavilion.
In 1957, working with artist Jordan Belson, Henry Jacobs produced Vortex: Experiments in Sound and Light, a series of concerts featuring new music, including some of Jacobs' own and that of Karlheinz Stockhausen among many others, which took place in the Morrison Planetarium in Golden Gate Park, San Francisco. Sound designers commonly regard this as the origin of the (now standard) concept of "surround sound". The program was popular, and Jacobs and Belson were invited to reproduce it at the 1958 World Expo in Brussels.[18] Many other composers also created ground-breaking surround sound works in the same period.
In 1978, a concept devised by Max Bell for Dolby Laboratories called "split surround" was tested with the movie Superman. This led to the 70 mm stereo surround release of Apocalypse Now, which became the first formal cinema release with three channels in the front and two in the rear. There were typically five speakers behind the screens of 70 mm-capable cinemas, but only the left, centre, and right were used at full frequency range, while centre-left and centre-right were used only for bass frequencies (as is currently common). The Apocalypse Now encoder/decoder was designed by Michael Karagosian, also for Dolby Laboratories. The surround mix was produced by an Oscar-winning crew led by Walter Murch for American Zoetrope. The format was also deployed in 1982 with the stereo surround release of Blade Runner.
The 5.1 version of surround sound originated in 1987 at the famous French cabaret, the Moulin Rouge. A French engineer, Dominique Bertrand, used a mixing board specially designed in cooperation with Solid State Logic, based on the 5000 series and including six channels: A left, B right, C centre, D left rear, E right rear, and F bass. The same engineer had already achieved a 3.1 system in 1974, for the International Summit of Francophone States in Dakar, Senegal.
Creating surround sound
Surround sound is created in several ways. The first and simplest method is to use a surround sound recording technique—capturing two distinct stereo images, one for the front and one for the back, or using a dedicated setup such as an augmented Decca tree[19]—and/or to mix in surround sound for playback on an audio system using speakers encircling the listener to play audio from different directions. A second approach is to process the audio with psychoacoustic sound localization methods to simulate a two-dimensional (2-D) sound field with headphones. A third approach, based on Huygens' principle, attempts to reconstruct the recorded sound field wave fronts within the listening space, a form of "audio hologram". One form, wave field synthesis (WFS), produces a sound field with an even error field over the entire listening area. Commercial WFS systems, currently marketed by the companies sonic emotion and Iosono, require many loudspeakers and significant computing power.
The Ambisonics form, also based on Huygens' principle, gives an exact sound reconstruction at the central point and becomes less accurate away from the centre point. There are many free and commercial software programs available for Ambisonics, which dominates this part of the consumer market, especially among musicians using electronic and computer music. Moreover, Ambisonics products are the standard in surround sound hardware sold by Meridian Audio. In its simplest form, Ambisonics consumes few resources; however, this is not true for recent developments, such as Near Field Compensated Higher Order Ambisonics.[20] Some years ago it was shown that, in the limit, WFS and Ambisonics converge.[21]
Finally, surround sound can also be achieved at the mastering level from stereophonic sources, as with Penteo, which uses digital signal processing analysis of a stereo recording to parse out individual sounds into component panorama positions and then places them accordingly into a five-channel field. There are other ways to create surround sound from stereo, for instance with routines based on the QS and SQ encodings for quadraphonic sound, where instruments were divided over four speakers in the studio. Creating surround with such software routines is normally referred to as "upmixing",[22] which was particularly successful on the Sansui QSD-series decoders, which had a mode that mapped the L ↔ R stereo onto an ∩ arc.
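As an illustration of the general idea behind such upmixing (not Penteo's proprietary analysis), the following minimal Python sketch derives a centre and a single surround feed from a stereo pair using the classic sum-and-difference relationship behind QS/SQ-era and Dolby Surround style matrices; the function name and gain values are illustrative assumptions.

```python
import numpy as np

def passive_matrix_upmix(left, right, surround_gain=0.7071):
    """Derive a simple 4-channel upmix from a stereo pair.

    Front L/R pass through unchanged; a centre channel is taken from the
    sum (L + R) and a single surround feed from the difference (L - R),
    the classic passive-matrix relationship.  Gains of 1/sqrt(2)
    (about -3 dB) keep the derived channels from dominating the mix.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    center = 0.7071 * (left + right)
    surround = surround_gain * (left - right)  # fed to both rear speakers
    return left, right, center, surround

# Example: a signal panned hard left ends up mostly in L and the surround feed.
l = np.array([1.0, 0.5, 0.0])
r = np.array([0.0, 0.5, 1.0])
print(passive_matrix_upmix(l, r))
```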
Mapping channels to speakers
In most cases, surround sound systems rely on the mapping of each source channel to its own loudspeaker. Matrix systems recover the number and content of the source channels and apply them to their respective loudspeakers. With discrete surround sound, the transmission medium allows for (at least) the same number of channels of source and destination; however, one-to-one, channel-to-speaker, mapping is not the only way of transmitting surround sound signals.
The transmitted signal might encode the information (defining the original sound field) to a greater or lesser extent; the surround sound information is then rendered for replay by a decoder that generates the number and configuration of loudspeaker feeds for the number of speakers available for replay – one renders a sound field as produced by a set of speakers, analogously to rendering in computer graphics. This "replay device independent" encoding is analogous to encoding and decoding an Adobe PostScript file, where the file describes the page and is rendered according to the output device's resolution. The Ambisonics and WFS systems use audio rendering; Meridian Lossless Packing contains elements of this capability.
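As a concrete illustration of channel-to-speaker mapping as a matrix operation, the short Python sketch below maps a 5.1 frame onto two loudspeaker feeds; the −3 dB (1/√2) coefficients follow the commonly used ITU-style stereo downmix, but the matrix values and function name are illustrative, not any particular vendor's renderer.

```python
import numpy as np

# Columns: L, R, C, LFE, LS, RS source channels; rows: Lo, Ro output feeds.
# The 0.7071 (~ -3 dB) coefficients mirror the common ITU-style stereo
# downmix; the LFE column is left at zero, as is typical for downmixes.
DOWNMIX = np.array([
    [1.0, 0.0, 0.7071, 0.0, 0.7071, 0.0],    # Lo
    [0.0, 1.0, 0.7071, 0.0, 0.0,    0.7071], # Ro
])

def render(source_frames):
    """Map frames of 6 source channels (shape: samples x 6) to 2 speaker feeds."""
    return source_frames @ DOWNMIX.T

frame = np.array([[0.2, 0.1, 0.5, 0.0, 0.05, 0.05]])  # one 5.1 sample frame
print(render(frame))
```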
Standard configurations
There are many alternative speaker setups for a surround sound experience, with a 3-2 configuration (3 front speakers, 2 back speakers, and a low-frequency effects channel), more commonly referred to as 5.1 surround, being the standard for most surround sound applications, including cinema, television and consumer applications.[23] This is a compromise between ideal image creation in a room and practicality and compatibility with two-channel stereo.[24] Because most surround sound mixes are produced for 5.1 surround (6 channels), larger setups require matrices or processors to feed the additional speakers.[24]
The standard surround setup consists of three front speakers, LCR (left, center, and right), two surround speakers, LS and RS (left and right surround respectively), and a subwoofer for the low-frequency effects (LFE) channel, which is low-pass filtered at 120 Hz. The angles between the speakers have been standardized in ITU (International Telecommunication Union) Recommendation 775 and by the AES (Audio Engineering Society) as follows: 60 degrees between the L and R channels (allowing for two-channel stereo compatibility), with the center speaker directly in front of the listener. The surround channels are placed 100–120 degrees from the center channel, and the subwoofer's positioning is not critical because frequencies below 120 Hz are only weakly directional.[25] The ITU standard also allows for additional surround speakers, which must be distributed evenly between 60 and 150 degrees.[23][25]
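For illustration, the short Python sketch below converts these nominal ITU-R BS.775 azimuths into listener-relative coordinates; the 110° surround angle and the 2 m radius are example choices within the ranges given above, and the helper name is an assumption.

```python
import math

# Nominal ITU-R BS.775 loudspeaker azimuths in degrees
# (0 = straight ahead, positive = to the listener's right);
# the surround pair may sit anywhere in the 100-120 degree range.
ITU_5_1_AZIMUTHS = {"C": 0, "L": -30, "R": 30, "LS": -110, "RS": 110}

def speaker_position(azimuth_deg, radius_m=2.0):
    """Return (x, y) coordinates of a speaker on a circle around the listener."""
    a = math.radians(azimuth_deg)
    return (radius_m * math.sin(a), radius_m * math.cos(a))

for name, az in ITU_5_1_AZIMUTHS.items():
    x, y = speaker_position(az)
    print(f"{name}: x={x:+.2f} m, y={y:+.2f} m")
```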
Surround mixes with more or fewer channels are acceptable if they are compatible with 5.1 surround, as described in ITU-R BS.775-1. The 3-1 channel setup (consisting of one monophonic surround channel) is such a case, where both LS and RS are fed by the monophonic signal at an attenuated level of −3 dB.[24] 7.1 channel surround is another setup, most commonly used in large cinemas, that is compatible with 5.1 surround, though it is not specified in the ITU standards. 7.1 channel surround adds two channels, center-left (CL) and center-right (CR), to the 5.1 surround setup, with the speakers situated 15 degrees off centre from the listener.[23] This convention is used to cover the increased angle between the front loudspeakers produced by a larger screen.
The function of the center channel is to anchor the signal so that centrally panned images do not shift when a listener moves or sits away from the sweet spot.[26] The center channel also prevents the timbral modifications, typical of two-channel stereo, caused by phase differences at the listener's two ears.[23] The centre channel is especially used in film and television, with dialogue primarily feeding the center channel.[24] The center channel can either serve a monophonic role (as with dialogue) or be used in combination with the left and right channels for true three-channel stereo. Motion pictures tend to use the center channel for monophonic purposes, with stereo reserved purely for the left and right channels. Surround microphone techniques have, however, been developed that fully use the potential of three-channel stereo.
In 5.1 surround, phantom images between the front speakers are quite accurate, while images towards the back and especially to the sides are unstable.[23][24] The localisation of a virtual source based on level differences between two loudspeakers to the side of a listener shows great inconsistency across the standardised 5.1 setup, and is also greatly affected by movement away from the reference position. 5.1 surround is therefore limited in its ability to convey 3D sound, making the surround channels more appropriate for ambience or effects.[23]
Surround microphone techniques
Most two-channel stereophonic microphone techniques are compatible with a three-channel setup (LCR), as many of these techniques already contain a center microphone or microphone pair. Microphone techniques for LCR should, however, try to obtain greater channel separation to prevent conflicting phantom images between L/C and L/R, for example.[24][26][27] Specialised techniques have therefore been developed for three-channel stereo. Surround microphone techniques largely depend on the setup used, and are therefore biased towards the 5.1 surround setup, as this is the standard.[23]
Surround recording techniques can be differentiated into those that use a single array of microphones placed in close proximity and those treating the front and rear channels with separate arrays.[23][25] Close arrays present more accurate phantom images, whereas separate treatment of the rear channels is usually used for ambience.[25] For accurate depiction of an acoustic environment, such as a hall, side reflections are essential, so appropriate microphone techniques should be used if room impression is important. Although the reproduction of side images is very unstable in the 5.1 surround setup, room impressions can still be accurately presented.[24]
Microphone techniques used for coverage of the three front channels include double-stereo techniques, INA-3 (Ideal Cardioid Arrangement), the Decca Tree setup and the OCT (Optimum Cardioid Triangle).[24][27] Surround techniques are largely based on three-channel techniques with additional microphones used for the surround channels. A distinguishing factor for the pickup of the front channels in surround is that less reverberation should be picked up, as the surround microphones will be responsible for the pickup of reverberation.[23] Cardioid, hypercardioid, or supercardioid polar patterns will therefore often replace omnidirectional polar patterns for surround recordings. To compensate for the lost low end of directional (pressure-gradient) microphones, additional omnidirectional (pressure) microphones, exhibiting an extended low-end response, can be added; their output is usually low-pass filtered.[24][27] A simple surround microphone configuration involves the use of a front array in combination with two backward-facing omnidirectional room microphones placed about 10–15 meters away from the front array. If echoes are noticeable, the front array can be delayed appropriately, as sketched below. Alternatively, backward-facing cardioid microphones can be placed closer to the front array for a similar reverberation pickup.[25]
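As a back-of-the-envelope sketch of that delay compensation, assuming sound travels at roughly 343 m/s, the following Python snippet estimates the alignment delay for room microphones placed a given distance behind the front array (the function name is illustrative):

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def alignment_delay_ms(spacing_m):
    """Delay to apply to the front array so its signal arrives in step
    with room microphones placed spacing_m further from the source."""
    return spacing_m / SPEED_OF_SOUND * 1000.0

# Room microphones 12 m behind the front array -> roughly 35 ms of delay.
print(f"{alignment_delay_ms(12.0):.1f} ms")
```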
The INA-5 (Ideal Cardioid Arrangement) is a surround microphone array that uses five cardioid microphones placed at angles resembling the standardised surround loudspeaker configuration defined by ITU Rec. 775.[25] The dimensions between the front three microphones, as well as the polar patterns of the microphones, can be changed for different pickup angles and ambient response.[23] This technique therefore allows for great flexibility.
A well-established microphone array is the Fukada Tree, a modified variant of the Decca Tree stereo technique. The array consists of five spaced cardioid microphones: three front microphones resembling a Decca Tree and two surround microphones. Two additional omnidirectional outriggers can be added to enlarge the perceived size of the orchestra and/or to better integrate the front and surround channels.[23][24] The L, R, LS and RS microphones should be placed in a square formation, with L/R and LS/RS angled at 45 degrees and 135 degrees from the center microphone respectively. Spacing between these microphones should be about 1.8 meters. This square formation is responsible for the room impression. The center microphone is placed a meter in front of the L and R microphones, producing a strong center image. The surround microphones are usually placed at the critical distance (where the direct and reverberant fields are equal), with the full array usually situated several meters above and behind the conductor.[23][24]
The NHK (Japanese broadcasting company) developed an alternative technique also involving five cardioid microphones. Here a baffle is used for separation between the front left and right channels, which are 30 cm apart.[23] Outrigger omnidirectional microphones, low-pass filtered at 250 Hz, are spaced 3 meters apart in line with the L and R cardioids. These compensate for the bass roll-off of the cardioid microphones and also add expansiveness.[26] A microphone pair spaced 3 meters apart, situated 2–3 meters behind the front array, is used for the surround channels.[23] The centre microphone is again placed slightly forward, with L/R and LS/RS again angled at 45 and 135 degrees respectively.
The OCT-Surround (Optimum Cardioid Triangle-Surround) microphone array is an augmented version of the stereo OCT technique using the same front array with added surround microphones. The front array is designed for minimum crosstalk, with the front left and right microphones having supercardioid polar patterns and angled at 90 degrees relative to the center microphone.[23][24] It is important that high-quality small-diaphragm microphones are used for the L and R channels to reduce off-axis coloration.[25] Equalization can also be used to flatten the response of the supercardioid microphones for signals arriving up to about 30 degrees from the front of the array.[23] The center microphone is placed slightly forward. The surround microphones are backward-facing cardioids placed 40 cm behind the L and R microphones. The L, R, LS and RS microphones pick up early reflections from both the sides and the back of an acoustic venue, therefore giving a significant room impression.[24] Spacing between the L and R microphones can be varied to obtain the required stereo width.[24]
Specialized microphone arrays have been developed for recording purely the ambience of a space. These arrays are used in combination with suitable front arrays, or can be added to the above-mentioned surround techniques.[25] The Hamasaki square (also proposed by NHK) is a well-established microphone array used for the pickup of hall ambience. Four figure-eight microphones are arranged in a square, ideally placed far from the source and high up in the hall. Spacing between the microphones should be between 1 and 3 meters.[24] The microphones' nulls (points of zero pickup) are set to face the main sound source, with the positive lobes facing outward, very effectively minimizing the direct sound pickup as well as echoes from the back of the hall.[25] The back two microphones are mixed to the surround channels, and the front two are mixed, in combination with the front array, into L and R.
Another ambience technique is the IRT (Institut für Rundfunktechnik) cross. Here, four cardioid microphones, at 90 degrees relative to one another, are placed in a square formation, separated by 21–25 cm.[25][27] The front two microphones should be positioned 45 degrees off axis from the sound source. The technique therefore resembles back-to-back near-coincident stereo pairs. The microphones' outputs are fed to the L, R, LS and RS channels. The disadvantage of this approach is that the direct sound pickup is quite significant.
Many recordings do not require the pickup of side reflections. For live pop music concerts a more appropriate array for the pickup of ambience is the cardioid trapezium.[24] All four cardioid microphones face backwards and are angled at 60 degrees from one another, forming roughly a semicircle. This is effective for the pickup of audience and ambience.
All the above-mentioned microphone arrays take up considerable space, making them quite impractical for field recordings. In this respect, the double MS (mid-side) technique is quite advantageous. This array uses back-to-back cardioid microphones, one facing forward and the other backwards, combined with either one or two figure-eight microphones. The different channels are obtained by sum and difference of the figure-eight and cardioid signals.[24][25] When using only one figure-eight microphone, the double MS technique is extremely compact and therefore also perfectly compatible with monophonic playback. This technique also allows for post-production changes of the pickup angle.
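A minimal sketch of that sum-and-difference matrixing for the single-figure-eight double MS variant is shown below (Python; the function name and the 0.5 scaling are illustrative assumptions, and practical decoders also vary the mid/side ratio to change the pickup angle):

```python
def double_ms_decode(mid_front, mid_rear, side, g=0.5):
    """Matrix a double-MS capture into L, R, LS, RS feeds.

    mid_front : forward-facing cardioid signal
    mid_rear  : backward-facing cardioid signal
    side      : figure-eight signal, positive lobe pointing left
                (shared by front and rear in the single-figure-eight variant)
    g         : scaling of the sum/difference, often 0.5 or 1/sqrt(2)
    Inputs may be scalar samples or NumPy arrays of equal length.
    """
    left           = g * (mid_front + side)
    right          = g * (mid_front - side)
    left_surround  = g * (mid_rear + side)
    right_surround = g * (mid_rear - side)
    return left, right, left_surround, right_surround

# Example with single samples: a source on the left raises L and LS.
print(double_ms_decode(mid_front=0.8, mid_rear=0.3, side=0.5))
```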
Bass management
Surround replay systems may make use of bass management, the fundamental principle of which is that bass content in the incoming signal, irrespective of channel, should be directed only to loudspeakers capable of handling it, whether the latter are the main system loudspeakers or one or more special low-frequency speakers called subwoofers.
There is a notation difference before and after the bass management system. Before the bass management system there is a Low Frequency Effects (LFE) channel. After the bass management system there is a subwoofer signal. A common misunderstanding is the belief that the LFE channel is the "subwoofer channel". The bass management system may direct bass to one or more subwoofers (if present) from any channel, not just from the LFE channel. Also, if there is no subwoofer speaker present then the bass management system can direct the LFE channel to one or more of the main speakers.
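A highly simplified sketch of such a bass management path is shown below (Python with SciPy; the fourth-order 120 Hz crossover, the function names and the single-subwoofer assumption are illustrative, and real systems add level alignment and different filter slopes):

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 48000      # sample rate in Hz
XOVER = 120     # crossover frequency in Hz, matching the LFE low-pass above

def bass_manage(channels, lfe):
    """Very simplified bass management for a system with one subwoofer.

    channels: dict of full-range channel arrays (e.g. L, R, C, LS, RS)
    lfe:      the LFE channel array
    Returns (high-passed main speaker feeds, combined subwoofer feed).
    """
    b_lo, a_lo = butter(4, XOVER, btype='lowpass', fs=FS)
    b_hi, a_hi = butter(4, XOVER, btype='highpass', fs=FS)

    sub = lfilter(b_lo, a_lo, lfe)                # the LFE always feeds the sub
    mains = {}
    for name, sig in channels.items():
        sub = sub + lfilter(b_lo, a_lo, sig)      # redirect each channel's bass
        mains[name] = lfilter(b_hi, a_hi, sig)    # keep only the highs up front
    return mains, sub

# Example: a 60 Hz rumble on the left channel ends up in the subwoofer feed.
t = np.arange(FS) / FS
mains, sub = bass_manage({"L": np.sin(2 * np.pi * 60 * t), "R": np.zeros(FS)},
                         lfe=np.zeros(FS))
```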
Low Frequency Effects (LFE) channel
Because the low-frequency effects channel requires only a fraction of the bandwidth of the other audio channels, it is referred to as the ".1" channel; for example "5.1" or "7.1".[citation needed]
The LFE channel is a source of some confusion in surround sound. It was originally developed to carry extremely low "sub-bass" cinematic sound effects (e.g., the loud rumble of thunder or explosions, with commercial subwoofers sometimes reaching down to 30 Hz) on their own channel. This allowed theaters to control the volume of these effects to suit the particular cinema's acoustic environment and sound reproduction system. Independent control of the sub-bass effects also reduced the problem of intermodulation distortion in analog movie sound reproduction. A subwoofer capable of playing back frequencies as low as 5 Hz was developed by a small speaker manufacturer in Florida; it used a propeller design and required a large cabinet to move the subsonic air mass.[28]
In the original movie theater implementation, the LFE was a separate channel fed to one or more subwoofers. Home replay systems, however, may not have a separate subwoofer, so modern home surround decoders and systems often include a bass management system that allows bass on any channel (main or LFE) to be fed only to the loudspeakers that can handle low-frequency signals. The salient point here is that the LFE channel is not the "subwoofer channel"; there may be no subwoofer and, if there is, it may be handling a good deal more than effects.[29]
Some record labels such as Telarc and Chesky have argued that LFE channels are not needed in a modern digital multichannel entertainment system.[citation needed] They argue that all available channels have a full-frequency range and, as such, there is no need for an LFE in surround music production, because all the frequencies are available in all the main channels. These labels sometimes use the LFE channel to carry a height channel, underlining its redundancy for its original purpose. The label BIS generally uses a 5.0 channel mix.
Surround sound specifications
The descriptions of surround sound specifications below distinguish between the number of discrete channels encoded in the original signal and the number of channels reproduced for playback. The number of channels reproduced for playback can be changed by using matrix decoding. A distinction is also made between the number of channels reproduced for playback and the number of speakers used to reproduce (each channel may refer to a group of speakers). The graphics to the right of each specification description represent the number of channels, not the number of speakers.
Notation
This notation, e.g. "5.1", reflects the number of full-range channels, with a ".1" reflecting the limited-range LFE channel.
E.g. 2 basic stereo speakers with no LFE channel = 2.0
5 full-range channels + 1 LFE channel = 5.1
It can also be expressed as the number of full-range channels in front of the listener, separated by a slash from the number of full-range channels beside or behind the listener, separated by a decimal point from the number of limited-range LFE channels.
E.g. 3 front channels + 2 side channels + an LFE channel = 3/2.1
This notation can then be expanded to include the notation of Matrix Decoders. Dolby Digital EX, for example, has a sixth full-range channel incorporated into the two rear channels with a matrix. This would be expressed:
3 front channels + 2 rear channels + 3 channels reproduced in the rear in total + 1 LFE channel = 3/2:3.1
Note: The term stereo, although popularised in reference to two-channel audio, can also properly be used to refer to surround sound, as it strictly means "solid" (i.e., three-dimensional) sound. However, this is no longer common usage, and "stereo sound" is almost exclusively used to describe two-channel, left and right, sound.
Channel identification
In accordance with ANSI/CEA-863-A[30]
Zero-based order within multi-channel mp3/wav/flac datastream[31][32][33][34] | Order within DTS/AAC[35][36] | Channel name | Color-coding on commercial receiver and cabling |
---|---|---|---|
0 | 1 | Front left | White |
1 | 2 | Front right | Red |
2 | 0 | Center | Green |
3 | 5 | Low frequency | Purple |
4 | 3 | Surround left | Blue |
5 | 4 | Surround right | Grey |
6 | 6 | Surround back left | Brown |
7 | 7 | Surround back right | Khaki |
Typical speaker placement (plan view): Front left, Center and Front right across the front; Surround left and Surround right at the sides; Surround back left and Surround back right behind the listener; plus the low-frequency subwoofer.
Sonic Whole Overhead Sound
In 2002, Dolby premiered a master of We Were Soldiers featuring a Sonic Whole Overhead Sound soundtrack. This mix included a new ceiling-mounted height channel.
Ambisonics
Ambisonics is a series of recording and replay techniques using multichannel mixing technology that can be used live or in the studio and that recreates the sound field as it existed in the recording space, in contrast to traditional surround systems, which can only create an illusion of the sound field if the listener is located in a very narrow sweet spot between speakers. Any number of speakers in any physical arrangement can be used to recreate the sound field. With six or more speakers arranged around a listener, a three-dimensional ("periphonic", or full-sphere) sound field can be presented. Ambisonics was invented by Michael Gerzon.
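A minimal sketch of a first-order horizontal Ambisonic decode is shown below (Python; the simple projection-style decode and its gains are only one of several conventions, which differ with the channel weighting in use, and the square speaker layout is just an example):

```python
import math

def decode_first_order(w, x, y, speaker_azimuths_deg):
    """Decode horizontal first-order B-format (W, X, Y) to N speaker feeds
    using a simple projection decode; exact gains depend on the channel
    weighting convention (traditional B-format, SN3D, N3D, ...) in use."""
    n = len(speaker_azimuths_deg)
    feeds = []
    for az in speaker_azimuths_deg:
        a = math.radians(az)
        feeds.append((w + 2.0 * (x * math.cos(a) + y * math.sin(a))) / n)
    return feeds

# Example: a source encoded towards the front-left, decoded to a square of
# four speakers at 45, 135, 225 and 315 degrees.
print(decode_first_order(w=1.0, x=0.7, y=0.3,
                         speaker_azimuths_deg=[45, 135, 225, 315]))
```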
Panor-Ambiophonic (PanAmbio) 4.0/4.1
PanAmbio combines a stereo dipole with crosstalk cancellation in front and a second set behind the listener (four speakers in total) for 360° 2D surround reproduction. Four-channel recordings, especially those containing binaural cues, create speaker-binaural surround sound. 5.1-channel recordings, including movie DVDs, are compatible by mixing the C-channel content into the front speaker pair. 6.1 can be played by mixing SC into the back pair.
Standard speaker channels
This table shows the various speaker configurations that are commonly used for end-user equipment. The order and identifiers are those specified for the channel mask in the standard uncompressed WAV file format (which contains a raw multichannel PCM stream) and are used according to the same specification for most PC connectible digital sound hardware and PC operating systems capable of handling multiple channels.[37][38] While it is certainly possible to build any speaker configuration, there isn't a lot of commercially available movie or music content for alternative speaker configurations. Such cases, however, can be worked around by remixing the source content channels to the speaker channels using a matrix table specifying how much of each content channel is played through each speaker channel.
Channel name | Identifier | Index | Flag | 1.0 Mono[Note 1] | 2.0 Stereo[Note 2] | 3.0 Stereo | 3.0 Surround | 4.0 Quad | 4.0 Surround | 5.0 | 5.0 Side[Note 3] | 6.0 | 6.0 Side[Note 3] | 7.0 | 7.0 Side[Note 4] | 7.0 Surround[Note 3] | 9.0 Surround | 11.0 Surround |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Front Left | SPEAKER_FRONT_LEFT | 0 | 0x00000001 | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Front Right | SPEAKER_FRONT_RIGHT | 1 | 0x00000002 | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Front Center | SPEAKER_FRONT_CENTER | 2 | 0x00000004 | Yes | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Back Left | SPEAKER_BACK_LEFT | 4 | 0x00000010 | No | No | No | No | Yes | No | Yes | No | Yes | No | Yes | No | Yes | Yes | Yes |
Back Right | SPEAKER_BACK_RIGHT | 5 | 0x00000020 | No | No | No | No | Yes | No | Yes | No | Yes | No | Yes | No | Yes | Yes | Yes |
Front Left of Center | SPEAKER_FRONT_LEFT_OF_CENTER | 6 | 0x00000040 | No | No | No | No | No | No | No | No | No | No | Yes | Yes | No | No | Yes |
Front Right of Center | SPEAKER_FRONT_RIGHT_OF_CENTER | 7 | 0x00000080 | No | No | No | No | No | No | No | No | No | No | Yes | Yes | No | No | Yes |
Back Center | SPEAKER_BACK_CENTER | 8 | 0x00000100 | No | No | No | Yes | No | Yes | No | No | Yes | Yes | No | No | No | No | No |
Side Left | SPEAKER_SIDE_LEFT | 9 | 0x00000200 | No | No | No | No | No | No | No | Yes | No | Yes | No | Yes | Yes | Yes | Yes |
Side Right | SPEAKER_SIDE_RIGHT | 10 | 0x00000400 | No | No | No | No | No | No | No | Yes | No | Yes | No | Yes | Yes | Yes | Yes |
Front Left Height | SPEAKER_LEFT_HEIGHT | 12 | 0x00001000 | No | No | No | No | No | No | No | No | No | No | No | No | No | Yes | Yes |
Front Right Height | SPEAKER_RIGHT_HEIGHT | 14 | 0x00004000 | No | No | No | No | No | No | No | No | No | No | No | No | No | Yes | Yes |
Any of the channel configurations above may include a low-frequency effects (LFE) channel (the channel played through the subwoofer), making the configuration ".1" instead of ".0". Most modern multichannel mixes contain an LFE channel.
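For instance, the channel mask for an ordinary 5.1 stream is built by OR-ing together the flag bits of the channels present, as in the small Python sketch below; the LFE bit value 0x00000008 comes from the Microsoft channel-mask definition and is not shown in the table above.

```python
# Flag bits as used in the WAVEFORMATEXTENSIBLE channel mask (see table above).
SPEAKER_FRONT_LEFT    = 0x00000001
SPEAKER_FRONT_RIGHT   = 0x00000002
SPEAKER_FRONT_CENTER  = 0x00000004
SPEAKER_LOW_FREQUENCY = 0x00000008   # the LFE bit, omitted from the table
SPEAKER_BACK_LEFT     = 0x00000010
SPEAKER_BACK_RIGHT    = 0x00000020

# Combine the bits of the six channels present in a 5.1 stream.
mask_5_1 = (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT | SPEAKER_FRONT_CENTER
            | SPEAKER_LOW_FREQUENCY | SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT)
print(hex(mask_5_1))   # 0x3f, the mask commonly written for 5.1
```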
10.2 surround sound
10.2 is the surround sound format developed by THX creator Tomlinson Holman of TMH Labs and University of Southern California (schools of Cinema/Television and Engineering). Developed along with Chris Kyriakakis of the USC Viterbi School of Engineering, 10.2 refers to the format's promotional slogan: "Twice as good as 5.1". Advocates of 10.2 argue that it is the audio equivalent of IMAX[weasel words].
22.2 surround sound
22.2 is the surround sound component of Ultra High Definition Television, and has been developed by NHK Science & Technical Research Laboratories. As its name suggests, it uses 24 speakers. These are arranged in three layers: A middle layer of ten speakers, an upper layer of nine speakers, and a lower layer of three speakers and two sub-woofers. The system was demonstrated at Expo 2005, Aichi, Japan, the NAB Shows 2006 and 2009, Las Vegas, and the IBC trade shows 2006 and 2008, Amsterdam, Netherlands.
See also
- 3D audio effect
- Duophonic
- Dolby Surround
- Four-channel Compact Disc Digital Audio
- Holophonics
- MPEG Surround
- Precedence effect
- Soundfield microphone
- Virtual surround
Notes
- ^ For historical reasons, when playing (1.0) mono sound, technical implementations often use the first (left) channel instead of the center speaker channel. In many other cases, when playing back multichannel content on a device with a mono speaker configuration, all channels are downmixed into one channel. The design of standard mono and stereo plugs used for common audio devices ensures this as well.
- ^ Stereo (2.0) is still the most common format for music, as most computers, television sets and portable audio players only feature two speakers, and the Red Book Audio CD standard used for retail distribution of music only allows for two channels. A 2.1 speaker set generally does not have a separate physical channel for low-frequency effects; instead, the speaker set downmixes the low-frequency components of the two stereo channels into one channel for the subwoofer.
- ^ a b c THX 5.1 Surround Sound Speaker set-up. This is the correct speaker placement for 5.0/6.0/7.0 channel sound reproduction for Dolby and Digital Theater Systems.
- ^ "Sony Print Master Guidelines" (PDF)This plus an LFE is the correct speaker placement for 8-track Sony Dynamic Digital Sound.
{{cite web}}
: CS1 maint: postscript (link)
References
- ^ Channels Defined by Audiogurus
- ^ Mick M Sawaguchi, and Akira Fukada (1999), Multichannel sound mixing practice for broadcasting. IBC Conference, 1999 Article
- ^ Eliasson, Jens; Leijon, Ulrika; Persson, Emil (2001). "Multichannel cinema sound": 8. CiteSeerX: 10.1.1.150.854.
- ^ Graham Healy, and Alan F. Smeaton (2009). Spatially augmented audio delivery: applications of spatial sound awareness in sensor-equipped indoor environments. In: ISA 2009: First International Workshop on Indoor Spatial Awareness, 18 May 2009, Taipei, Taiwan. ISBN 978-1-4244-4153-2. Abstract
- ^ Christos Manolas, and Sandra Pauletto (2009). "Enlarging the Diegetic Space: Uses of the Multi-channel Soundtrack in Cinematic Narrative". The soundtrack, 2(1), August 2009, pp. 39–55, doi:10.1386/st.2.1.39_1, Print ISSN: 1751-4193 , Electronic ISSN: 1751-4207, Abstract
- ^ Josephine Anstey, Dave Pape, Daniel J. Sandin (2000). Building a VR Narrative. Proc. SPIE, Vol. 3957, 370, doi:10.1117/12.384463. Abstract
- ^ Mark Kerins (2006). "Narration in the Cinema of Digital Sound". University of Texas Press, The Velvet Light Trap, 58, Fall 2006, pp. 41–54. doi:10.1353/vlt.2006.0030. Abstract
- ^ Marc S. Dantzker (2004). Acoustics in the Cetaceans Environment: A Multimedia Educational Package. Article
- ^ Dan Gärdenfors (2003). "Designing sound-based computer games". Digital Creativity, 14(2), June 2003, pp. 111–114. doi:10.1076/digc.14.2.111.27863. Abstract
- ^ Timothy Roden, Ian Parberry (2005). Designing a narrative-based audio only 3D game engine. ACM International Conference Proceeding Series; Vol. 265, Proceedings of the 2005 ACM SIGCHI International Conference on Advances in computer entertainment technology, Valencia, Spain, pp. 274–277, ISBN 1-59593-110-4. Abstract
- ^ Stephan Schütze (2003). "The creation of an audio environment as part of a computer game world: the design for Jurassic Park – Operation Genesis on the XBOX as a broad concept for surround installation creation". Cambridge University Press, Organised Sound, 8 : 171–180. doi:10.1017/S1355771803000074. Abstract
- ^ Mike Jones (2000). "Composing Space: Cinema and Computer Gaming. The Macro-Mise En Scene and Spatial Composition". Article
- ^ Durand Begault et al (2005). "Audio-Visual Communication Monitoring System for Enhanced Situational Awareness" [1]
- ^ "Pink Floyd Astounds With 'Sound in the Round'". WIRED. May 12, 1967.
- ^ "pink floyd". Retrieved 2009-08-14.
- ^ Holman, Tomlinson (2007). Surround Sound: Up and Running. Focal Press. pp. 3–4. ISBN 978-0-240-80829-1. Retrieved 2010-04-03.
- ^ Emil Torick (1998). "Highlights in the history of multichannel sound". Journal of the Audio Engineering Society, 46:1/2, pp. 27–31, February 1998 Abstract
- ^ http://en.wikipedia.org/wiki/Henry_Jacobs
- ^ Ron Steicher (2003): The DECCA Tree—it's not just for stereo any more
- ^ Spatial Sound Encoding Including Near Field Effect: Introducing Distance Coding Filters and a Viable, New Ambisonic Format
- ^ Further Investigations of High Order Ambisonics and Wavefield Synthesis for Holophonic Sound Imaging
- ^ "DTSAC3".
- ^ a b c d e f g h i j k l m n o p Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford: Focal Press.
- ^ a b c d e f g h i j k l m n o p q Wöhr, Martin; Dickreiter, Michael; Dittel, Volker; Hoeg, Wolfgang, eds. (2008). Handbuch der Tonstudiotechnik, Band 1 (7th ed.). Munich: K.G. Saur.
- ^ a b c d e f g h i j k Holman, Tomlinson (2008). Surround Sound: Up and Running (2nd ed.). Oxford: Focal Press.
- ^ a b c Bartlett, Bruce; Bartlett, Jenny (1999). On-Location Recording Techniques. Focal Press.
- ^ a b c d Eargle, John (2005). The Microphone Book (2nd ed.). Oxford: Focal Press.
- ^ http://www.eminent-tech.com/RWbrochure.htm
- ^ Multichannel Music Mixing by Dolby Laboratories, Inc.
- ^ Consumer Electronics Association standards: Setup and Connection
- ^ "Updated: Player 6.3.1 with mp3 Surround support now available!".
- ^ Creating 7.1 Audio
- ^ "Multiple channel audio data and WAVE files". Microsoft.
- ^ Josh Coalson. "FLAC - format".
- ^ Avisynth.org, GetChannel
- ^ Hydrogenaudio, 5.1 Channel Mappings
- ^ "KSAUDIO_CHANNEL_CONFIG structure". Microsoft.
- ^ Header file for OpenSL, containing various identifier definitions