Audio mixing (recorded music)

From Wikipedia, the free encyclopedia

Digital mixing console Sony DMX-R100, used in project studios

In sound recording and reproduction, mixing is the process of summing a multitrack recording down to a final mono, stereo, or surround sound program. Mixing methods include, but are not limited to, setting levels, setting equalization, using stereo panning, and adding effects. Minor adjustments in the relationships among the various instruments within a song can have dramatic effects on how the song strikes the listener.[1]

Audio mixing is largely dependent on both the arrangement and the recordings.[2] The process is generally carried out by a mixing engineer, though sometimes the record producer, or even the artist, mixes the recorded material. After mixing, a mastering engineer prepares the final product for release on CD, radio, or other distribution.

Before the emergence of digital audio workstations (DAWs), the process of mixing was carried out on a mixing console. Today, more and more engineers and independent artists use a personal computer for the process. Mixing consoles still play a large part in the recording process; they are often used in conjunction with a DAW, with the DAW serving only as a multitrack recorder and editing or sequencing tool while the actual mixing is performed on the console.

History

Early recording machines

In the late nineteenth century, Edison and Berliner developed the first recording machines. The recording and reproduction process itself was completely mechanical, with little or no electrical apparatus. The system used a small horn terminated in a stretched, flexible diaphragm attached to a stylus, which cut a groove of varying depth into the malleable tin foil of Edison's "phonograph" cylinder, or a groove of varying lateral deviation in the wax of Berliner's gramophone disc.[3]

Electronic recording became more widely used during the 1920s. It was based on the principles of electromagnetic transduction. Because a microphone could now be connected remotely to a recording machine, microphones could be positioned in more suitable places, connected by wire to a transducer at the other end that drove the stylus to cut the disc. Even more useful was the fact that the outputs of several microphones could be mixed before being fed to the disc cutter, allowing greater flexibility in the balance.[4]

Before the introduction of multitrack recording, all the sounds and effects that were to be part of a record were mixed at one time during a live performance. If the recorded blend (or mix, as it is called) was not satisfactory, or if one musician made a mistake, the selection had to be performed again until the desired balance and performance were obtained. With the introduction of multitrack recording, however, the production of a modern recording changed radically into a process that generally involves three stages: recording, overdubbing, and mixdown.[5]

Mixing as it is known today emerged with the introduction of commercial multitrack tape machines, most notably the 8-track recorders introduced during the 1960s. The ability to record sounds onto a multitude of channels meant that treating these sounds could be postponed to a later stage: the mixing stage.[citation needed]

In the 1980s, home recording and mixing began to take market share from recording studios. The 4-track Portastudio was introduced in 1979. Using one, Bruce Springsteen released the album Nebraska in 1982. The Eurythmics topped the charts in 1983 with the song "Sweet Dreams (Are Made of This)", recorded by band member Dave Stewart on a makeshift 8-track recorder.[6] In the mid-to-late 1990s, computers replaced tape-based recording for most home studios, with the Power Macintosh proving popular.[7] At the same time, digital audio workstations (DAWs), first used in the mid-1980s, began to replace tape in many professional recording studios.[citation needed]

Equipment

Mixing consoles

A mixer (also mixing console, mixing desk, mixing board, or software mixer) is the operational heart of the mixing process.[8] Mixers offer a multitude of inputs, each fed by a track from a multitrack recorder. Mixers typically have two main outputs (in the case of two-channel stereo mixing) or eight (in the case of surround).

Mixers offer three main functionalities:[8][9]

  • Mixing – summing signals together, which is normally done by a dedicated summing amplifier or, in digital mixers, by a summing algorithm.
  • Routing – allows the routing of source signals to internal buses or external processing units and effects.
  • Processing – many mixers also offer on-board processors, like equalizers and compressors.
A simple mixing console
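The summing stage described above can be sketched in a few lines of Python. This is a minimal illustration, not taken from any particular mixer; the track data and fader settings are invented for the example.

```python
# Minimal sketch of mixer summing: each input track is scaled by its
# fader gain, and the scaled tracks are added sample by sample.

def db_to_linear(db):
    """Convert a fader setting in decibels to a linear gain factor."""
    return 10 ** (db / 20)

def mix(tracks, fader_db):
    """Sum several equal-length mono tracks, each scaled by its fader."""
    gains = [db_to_linear(db) for db in fader_db]
    return [
        sum(g * t[i] for g, t in zip(gains, tracks))
        for i in range(len(tracks[0]))
    ]

# Two illustrative tracks: one at unity gain (0 dB), one cut by 6 dB.
vocal = [0.5, 0.25, -0.5]
guitar = [0.2, -0.2, 0.4]
mixed = mix([vocal, guitar], fader_db=[0.0, -6.0])
```

A hardware summing amplifier does the same job in the analog domain: currents from each channel are added at a virtual-earth node, with the fader setting determining each channel's contribution.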

Mixing consoles used for dubbing are often large and intimidating because of the exceptional number of controls. Many of these controls, however, are duplicated from channel to channel, so much of the console can be learned by studying one small part of it. The controls on a mixing console typically fall into one of two categories: processing and configuration. Processing controls are used to manipulate the sound; they vary in complexity, from simple internal level controls to sophisticated outboard reverberation units. Configuration controls deal with the signal routing from the input to the output of the console through the various processes.[10]

Digital audio workstations (DAWs) of the 2000s have many mixing features, potentially offering more processing than a major console. The distinction between a large console and a DAW equipped with a control surface is that a digital console typically has dedicated digital signal processors for each channel; it is therefore designed not to overload under the burden of signal processing, which could crash the system or cause signals to drop out. A DAW dynamically assigns resources such as signal-processing power, and may run out of them if many processes are in simultaneous use. This can be solved fairly easily by adding DSP hardware to the DAW, although the cost of such an endeavor may approach that of a major console.[10]

Outboard gear and plugins

Outboard gear (analogue) and software plugins (digital) can be inserted into the signal path to extend the processing possibilities. They fall into two main categories:[8][9]

  • Processors – these devices are normally connected in series to the signal path, so the input signal is replaced with the processed signal (e.g., equalizers).
  • Effects – broadly, any unit that has an effect upon the signal, though the term mostly describes units connected in parallel to the signal path; they therefore add to the existing sound without replacing it. Examples include reverb and delay.

Multiple level controls in signal path

A single signal can pass through a large number of level controls, such as an individual channel fader, a subgroup master fader, the master fader, and a monitor volume control. According to audio engineer Tomlinson Holman, this multiplicity of controls creates problems: every console has its own dynamic range, and it is important to use it correctly to avoid excessive noise or distortion. Finding the correct settings for this variety of controls can nevertheless be accomplished relatively quickly. Holman points to the scale of each control as a clue. With 0 dB as the nominal setting, many controls have "gain in hand" above 0 dB, meaning a control can be turned up from the nominal setting while the signal remains clean. Other controls, such as submasters and master level controls, are used for slight trims to the section-by-section balance or for the main fade-ins and fade-outs of the overall mix.[10]: 174 
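The arithmetic behind such gain staging is straightforward: gain stages in series multiply as linear factors, which is the same as adding their decibel values. A minimal sketch, with settings invented for illustration:

```python
# Gains in series add in decibels (equivalently, multiply as linear
# factors). The three stage settings below are illustrative only.

channel_fader_db = -3.0   # individual channel fader, trimmed down
subgroup_db = 0.0         # subgroup master at its nominal 0 dB setting
master_db = -1.0          # master fader pulled slightly below nominal

# Total gain through the chain, in dB and as a linear factor.
total_db = channel_fader_db + subgroup_db + master_db
linear_gain = 10 ** (total_db / 20)
```

This is why a signal that clips one stage cannot be rescued by pulling a later fader down: each stage must individually stay within its own dynamic range.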

Processes that affect levels

  • Faders – used to attenuate or boost the level of signals.
  • Pan pots – Panning is a fundamental part of the configuration of a recording console. Pan pots are devices that place a sound among the channels: L, C, R, LS, and RS.[10]: 174  They pan signals to the left or right and, in surround, to the back or front.
  • Compressors – A device which automatically varies the volume range of tracks being mixed, so that one track is not obscured by another when a low volume level on the primary track coincides with a high volume level on a secondary track. Compressors are equipped with a number of controls to vary the volume range over which the action of compression occurs, the amount of compression, and how quickly or slowly the compressor acts.[10]: 175 
  • Expanders – An expander does the opposite of a compressor: it increases the volume range of a source, either across a wide dynamic range or restricted to a narrower region by its controls. Restricting expansion to low-level sounds helps to minimize noise. This function, often called downward expansion, noise gating, or keying, reduces the level below a threshold set by a dedicated control. Noise gates have numerous audible problems. For example, in a dialog recording with air-conditioning noise in the background, a noise gate may remove the air-conditioner sound between lines of dialog, creating an exaggerated difference that is much more noticeable than if the audio had been left unprocessed.[10]: 176 
  • Limiters – A limiter acts on signals above a certain threshold. Above that threshold, the level is controlled so that for each dB of increase on the input, the gain is reduced by the same amount. Therefore, the output level above the threshold will stay exactly the same, regardless of any increases in the input level. Limiters can be used to catch occasional events that might not otherwise be controlled, to bring them into a range in which the recording medium can handle the signal linearly.[10]: 176 
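The static input/output behaviour shared by compressors and limiters, as described above, can be sketched as a simple gain curve. This omits attack and release timing entirely; the threshold and ratio values are illustrative.

```python
# Static gain curve of a compressor/limiter: below the threshold the
# signal passes unchanged; above it, output level rises by only
# 1/ratio dB for each dB of input. A limiter is the limiting case of
# a very high ratio. Time-domain behaviour (attack/release) omitted.

def output_level_db(input_db, threshold_db, ratio):
    """Map an input level (dB) to an output level (dB)."""
    if input_db <= threshold_db:
        return input_db  # below threshold: level is unchanged
    return threshold_db + (input_db - threshold_db) / ratio

# 4:1 compressor with a -10 dB threshold: a -2 dB input (8 dB over
# the threshold) comes out only 2 dB over it, i.e. at -8 dB.
compressed = output_level_db(-2.0, threshold_db=-10.0, ratio=4.0)

# Limiter (ratio -> infinity): output is pinned at the threshold.
limited = output_level_db(-2.0, threshold_db=-10.0, ratio=float("inf"))
```

The expander described above uses the same curve with a ratio below 1, so the level difference across the threshold is increased rather than reduced.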

The items discussed thus far all affect the level of the audio signal. The most commonly used process is level control, which is used even on the simplest of mixers.[10]: 177 

Processes that affect frequency response

Processes that primarily affect the frequency response of the signal are generally seen as second in importance to level control. These processes clean up the audio signal, help signals blend with one another, adjust for the loudness effect, and in general create a more pleasant, or deliberately degraded, sound. There are two principal frequency-response processes: equalization and filtering.[10]: 177 

  • Equalizers – The simplest description of EQ is the process of altering the frequency response in a manner similar to the tone controls on a stereo system. Professional EQs dissect the audio spectrum into three or four parts, which may be called the low-bass, mid-bass, mid-treble, and high-frequency controls.[10]: 178 
  • Filters – Filters essentially eliminate certain frequencies from the output, stripping off part of the audio spectrum. There are various types. A high-pass filter (low-cut) removes excessive room noise at low frequencies. A low-pass filter (high-cut) can help isolate a low-frequency instrument playing in a studio along with others. A band-pass filter is a combination of high- and low-pass filters, also known as a telephone filter, because sound lacking in high and low frequencies resembles the quality of sound transmitted and received by telephone.[11]
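A first-order high-pass filter of the kind used to remove low-frequency room rumble can be sketched in a few lines. This is a textbook one-pole design, not any particular console's circuit; the cutoff frequency and input are chosen purely for illustration.

```python
import math

# Sketch of a first-order (6 dB/octave) high-pass filter. Each output
# sample is a leaky difference of successive inputs, so steady
# low-frequency content is attenuated while fast changes pass through.

def high_pass(samples, cutoff_hz, sample_rate_hz):
    rc = 1.0 / (2 * math.pi * cutoff_hz)   # analog RC time constant
    dt = 1.0 / sample_rate_hz
    alpha = rc / (rc + dt)                 # smoothing coefficient < 1
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# A constant offset is 0 Hz content, so the filter should drive it
# towards zero over time.
dc = [1.0] * 2000
filtered = high_pass(dc, cutoff_hz=80.0, sample_rate_hz=48000.0)
```

Swapping the difference term for a weighted running average of the input gives the complementary low-pass (high-cut) filter.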

Processes that affect time

  • Reverbs – Reverbs are used to simulate boundary reflections created in a real room, adding a sense of space and depth to otherwise 'dry' recordings. Another use is to help distinguish among auditory objects; all sound having one reverberant character will be categorized together by human hearing in a process called auditory streaming. This is an important feature in layering sound in depth from in front of the speaker to behind it.[10]: 181 

Before the advent of electronic reverb and echo processing, physical means were used to generate the effects. An echo chamber, a large reverberant room, could be used, equipped with a speaker and at least two spaced microphones. Signals were sent to the speaker and the reverberation generated in the room was picked up by the two microphones, constituting a "stereo return".[11]
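The electronic delay effects that replaced such physical chambers are built on a feedback delay line. The sketch below is a generic textbook structure, not any specific product; the delay length and feedback gain are illustrative.

```python
# Sketch of a feedback delay line, the basic building block of
# electronic echo effects: each output sample adds a decaying copy
# of the output from delay_samples earlier.

def echo(samples, delay_samples, feedback=0.5):
    out = []
    for i, x in enumerate(samples):
        delayed = out[i - delay_samples] if i >= delay_samples else 0.0
        out.append(x + feedback * delayed)
    return out

# A single impulse produces a train of echoes, each repeat reduced
# by the feedback factor.
impulse = [1.0] + [0.0] * 9
echoed = echo(impulse, delay_samples=3, feedback=0.5)
```

A reverb algorithm layers many such delay lines (plus all-pass filters) at incommensurate delay times so the repeats blur into a dense, room-like decay rather than discrete echoes.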

Downmixing

Downmixing is the process of converting a program with a multiple-channel configuration into a program with fewer channels. Common examples include downmixing from 5.1 surround sound to stereo, and from stereo to mono. In the former case, the left and right surround channels are blended into the left and right front channels, the center channel is blended equally into the left and right channels, and the LFE channel is either mixed into the front signals or discarded. Because these are common scenarios, it is standard practice to audition such downmixes during production to ensure stereo and mono compatibility.
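The blend described above can be written as two short equations. The -3 dB coefficient used here for the center and surround contributions is a widely used convention rather than the only choice, and the channel values are invented for illustration.

```python
import math

# Sketch of a common 5.1-to-stereo downmix: the center and surround
# channels are folded into the front pair at -3 dB, and the LFE
# channel is discarded. The -3 dB factor is conventional, not unique.

ATT_3DB = 1 / math.sqrt(2)  # -3 dB as a linear factor (about 0.707)

def downmix_51_to_stereo(l, r, c, ls, rs, lfe=0.0):
    left = l + ATT_3DB * c + ATT_3DB * ls
    right = r + ATT_3DB * c + ATT_3DB * rs
    return left, right  # lfe intentionally unused in this variant

left, right = downmix_51_to_stereo(l=0.5, r=0.5, c=0.2, ls=0.1, rs=0.1)
```

Splitting the center equally between left and right at -3 dB keeps its perceived level roughly constant, since the two correlated copies sum back to full power.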

An alternative channel configuration can also be explicitly authored during production, with multiple channel configurations provided for distribution. For example, a stereo mix can be put on DVD-Audio discs or Super Audio CDs along with the surround mix.[12] Alternatively, the program can be automatically downmixed by the end consumer's audio system; for example, a DVD player or sound card may downmix a surround sound program to two-channel stereo for playback through two speakers.[citation needed]

Mixing in surround sound

Any console with multiple buses (typically eight or more) can be used to create a 5.1 surround sound mix, but the work may be frustrating if the device is not designed to facilitate signal routing, panning, and processing in a surround sound environment. Whether working in an analog hardware, digital hardware, or DAW "in-the-box" mixing environment, the ability to pan mono or stereo sources, place effects in the 5.1 soundscape, and monitor multiple output formats without difficulty can make the difference between a successful and a compromised mix.[13] Mixing in surround is very similar to mixing in stereo, except that there are more speakers, placed to "surround" the listener. In addition to the horizontal panoramic options available in stereo, mixing in surround lets the mix engineer pan sources within a much wider and more enveloping field. In a surround mix, sounds can appear to originate from many more directions, or almost any direction, depending on the number of speakers, their placement, and how the audio is processed.

There are two common ways to approach mixing in surround:

  • Expanded Stereo – With this approach, the mix will still sound very much like an ordinary stereo mix. Most of the sources such as the instruments of a band, the vocals, and so on, will still be panned between the left and right speakers, but lower levels might also be sent to the rear speakers in order to create a wider stereo image, while lead sources such as the main vocal might be sent to the center speaker. Additionally, reverb and delay effects will often be sent to the rear speakers to create a more realistic sense of being in a real acoustic space. In the case of mixing a live recording that was performed in front of an audience, signals recorded by microphones aimed at, or placed among the audience will also often be sent to the rear speakers to make the listener feel as if he or she is actually a part of the audience.
  • Complete Surround/All speakers are treated equally – Instead of following the traditional ways of mixing in stereo, this much more liberal approach lets the mix engineer do anything he or she wants. Instruments can appear to originate from anywhere, or even spin around the listener. When done appropriately and with taste, interesting sonic experiences can be achieved, as was the case with James Guthrie's 5.1 mix of Pink Floyd's The Dark Side of the Moon, made with input from the band.[14] It is a very different mix from the 1970s quadraphonic mix.

Naturally, these two approaches can be combined in any way the mix engineer sees fit. A third approach to mixing in surround was later developed by surround mix engineer Unne Liljeblad.

  • MSS – Multi Stereo Surround[15] – This approach treats the speakers in a surround sound system as a multitude of stereo pairs. For example, a stereo recording of a piano, created using two microphones in an ORTF configuration, might have its left channel sent to the left rear speaker and its right channel sent to the center speaker. The piano might also be sent to a reverb having its left and right outputs sent to the left front speaker and right rear speaker, respectively. Additional elements of the song, such as an acoustic guitar recorded in stereo, might have its left and right channels sent to a different stereo pair such as the left front speaker and the right rear speaker with its reverb returning to yet another stereo pair, the left rear speaker and the center speaker. Thus, multiple clean stereo recordings surround the listener without the smearing comb-filtering effects that often occur when the same or similar sources are sent to multiple speakers.

References

  1. ^ Strong, Jeff (2009). Home Recording For Musicians For Dummies (3rd ed.). Indianapolis, Indiana: Wiley Publishing. p. 249.
  2. ^ Hepworth-Sawyer, Russ (2009). From Demo to Delivery: The Production Process. Oxford, United Kingdom: Focal Press. p. 109.
  3. ^ Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 168. ISBN 978-0-240-52163-3.
  4. ^ Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 169. ISBN 978-0-240-52163-3.
  5. ^ Huber, David Miles (2001). Modern Recording Techniques. Focal Press. p. 321. ISBN 0240804562.
  6. ^ "Eurythmics: Biography". Artist Directory. Rolling Stone. 2010. Retrieved March 20, 2010.
  7. ^ "Studio Recording Software: Personal And Project Audio Adventures". studiorecordingsoftware101.com. 2008. Archived from the original on February 8, 2011. Retrieved March 20, 2010.
  8. ^ a b c White, Paul (2003). Creative Recording (2nd ed.). Sanctuary Publishing. p. 335. ISBN 1-86074-456-7.
  9. ^ a b Izhaki, Roey (2008). Mixing Audio. Focal Press. p. 566. ISBN 978-0-240-52068-1.
  10. ^ a b c d e f g h i j k Holman, Tomlinson (2010). Sound for Film and Television (3rd ed.). Oxford, United Kingdom: Elsevier Inc. ISBN 978-0-240-81330-1.
  11. ^ a b Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 390. ISBN 978-0-240-52163-3.
  12. ^ Bartlett, Bruce; Bartlett, Jenny (2009). Practical Recording Techniques (5th ed.). Oxford, United Kingdom: Focal Press. p. 484. ISBN 978-0-240-81144-4.
  13. ^ Huber, David Miles; Runstein, Robert (2010). Modern Recording Techniques (7th ed.). Oxford, United Kingdom: Focal Press. p. 559. ISBN 978-0-240-81069-0.
  14. ^ "Archived copy". Archived from the original on 2012-04-02. Retrieved 2011-11-12.
  15. ^ "Surround Sound Mixing". www.mix-engineer.com. Retrieved 2010-01-12.