Audio mixing (recorded music)

From Wikipedia, the free encyclopedia
A Sony DMX-R100 digital mixing console, used in project studios

In sound recording and reproduction, audio mixing (or “mixdown”) is the process, commencing after all tracks are recorded and edited, of combining individual parts into one piece of music. The mixing process can include, but is not limited to, setting levels, setting equalization, stereo panning and the addition of effects. The way a song is mixed has as much impact on its sound as each of the individual recorded parts. Minor adjustments in the relationships among the various instruments can dramatically change how the song affects the listener.[1]

Audio mixing is part of creating an album or single, and depends heavily on both the arrangement and the recordings.[2] The mixing stage often follows a multitrack recording. The process is generally carried out by a mixing engineer, though sometimes the music producer, or even the artist, mixes the recorded material. After mixing, a mastering engineer prepares the final product for reproduction on CD, for radio, or otherwise.

Prior to the emergence of digital audio workstations (DAWs), mixing was carried out on a mixing console. Today, more and more engineers and independent artists use a personal computer for the process. Mixing consoles still play a large part in recording: they are often used in conjunction with a DAW, although the DAW may serve only as a multitrack recorder and for editing or sequencing, with the actual mixing performed on the console.

The role of audio mixing

An audio production facility at An-Najah National University

The role of a music producer is not necessarily a technical one, with the physical aspects of recording being assumed by the audio engineer, and so producers often leave the similarly technical mixing process to a specialist audio mixer. Even producers with a technical background may prefer that a mixer comes in to take care of the final stage of the production process. Noted producer and mixer Joe Chiccarelli has said that it is often better for a project that an outside person comes in because:

"when you're spending months on a project you get so mired in the detail that you can't bring all the enthusiasm to the final [mixing] stage that you'd like. [You] need somebody else to take over those responsibilities so that you can sit back and regain your objectivity."[3]

However, as Chiccarelli explains, sometimes limited budgets dictate that a producer takes care of the mixing as well.[3]

History

Early recording machines

Edison and Berliner developed the first recording machines in the last years of the nineteenth century with little or no electrical apparatus. The recording and reproduction process itself was completely mechanical: a small horn terminated in a stretched, flexible diaphragm attached to a stylus, which cut a groove of varying depth into the malleable tin foil of Edison’s “phonograph” cylinder, or of varying lateral deviation in the wax of Berliner’s gramophone disc.[4]

Electronic recording became more widely used during the 1920s. It was based on the principles of electromagnetic transduction. The possibility for a microphone to be connected remotely to a recording machine meant that microphones could be positioned in more suitable places, connected by wires to a complementary transducer at the other end of the wire, which drove the stylus to cut the disc. Even more useful was the fact that the outputs of the microphones could be mixed together before being fed to the disc cutter, allowing greater flexibility in the balance.[5]

Before the introduction of multitrack recording, all the sounds and effects that were to be part of a recording were mixed together at one time during a live performance. If the recorded blend (or mix, as it is called) wasn't satisfactory, or if one musician made a mistake, the selection had to be performed over until the desired balance and performance was obtained. However, with the introduction of multitrack recording, the production phase of a modern recording has radically changed into one that generally involves three stages: recording, overdubbing, and mixdown.[6]

Mixing as we know it today emerged with the introduction of commercial multitrack tape machines, most notably the 8-track recorders introduced during the 1960s. The ability to record sounds onto a multitude of channels meant that the treatment of these sounds could be postponed to a later stage – the mixing stage.

In the 1980s, home recording and mixing began to take market share from recording studios. The 4-track Portastudio was introduced in 1979. Using one, Bruce Springsteen released the album Nebraska in 1982. The Eurythmics topped the charts in 1983 with the song "Sweet Dreams (Are Made of This)", recorded by bandmember Dave Stewart on a makeshift 8-track recorder.[7] In the mid-to-late 1990s, computers replaced tape-based recording for most home studios, with the Power Macintosh proving popular.[8] At the same time, digital audio workstations (DAW), first used in the mid-1980s, began to replace tape in many professional recording studios.

Equipment

Mixing Consoles

Main article: mixing console

A mixer – also called a mixing console, mixing desk, mixing board or software mixer – is the operational heart of the mixing process.[9] Mixers offer a multitude of inputs, each fed by a track from a multitrack recorder. Mixers typically have two main outputs (in the case of two-channel stereo mixing) or eight (in the case of surround).

Mixers offer three main functionalities:[9][10]

  • Mixing – summing signals together, normally done by a dedicated summing amplifier or, in digital mixers, by a simple algorithm.
  • Routing – allows the routing of source signals to internal buses or external processing units and effects.
  • Processing – many mixers also offer on-board processors, like equalizers and compressors.
Simple mixing console
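The summing described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical sample buffers, not the algorithm of any particular console or DAW:

```python
# Minimal sketch of digital mixing as summing: each input track is
# scaled by a linear gain and added into a common output buffer.

def mix(tracks, gains):
    """Sum several equal-length mono tracks into one buffer,
    applying a linear gain per track."""
    n = len(tracks[0])
    out = [0.0] * n
    for track, gain in zip(tracks, gains):
        for i in range(n):
            out[i] += gain * track[i]
    return out

# Illustrative sample data: two short mono "tracks"
drums = [0.5, -0.5, 0.25, 0.0]
bass  = [0.2,  0.2, -0.2, 0.1]
mixed = mix([drums, bass], gains=[1.0, 0.5])  # bass attenuated by half
```

In a real mixer the per-track gains would come from the channel faders, and the sum would be protected against clipping.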

Mixing consoles used for dubbing are often large and intimidating, with an exceptional number of controls. Fortunately, many of these controls are duplicates of one another, so by studying just one area of a console one learns nearly all of it. A mixing console can be broken down into two aspects: processing and configuration. Sound processes are the devices used to manipulate the sound, from simple internal level controls to sophisticated outboard reverberation units, whereas configuration concerns the signal routing from the input to the output of the console through the various processes.[11]

Digital audio workstations (DAWs) today have many mixing features and potentially more processes available than a major console. The distinction between a DAW equipped with a control surface and a large console is usually that a digital console has dedicated digital signal processors for each channel; it is thus designed not to “overload” under the burden of signal processing, which could crash it or lose signals. DAWs assign resources such as digital signal processing power dynamically, and so can run out if many signal processes are in simultaneous use. This can be solved fairly easily by plugging more hardware into the DAW, but the cost of doing so may approach that of a major console.[11]

Outboard gear and plugins

Outboard gear (analog) and software plugins (digital) can be inserted into the signal path in order to extend processing possibilities. Outboard gear and plugins fall into two main categories:[9][10]

  • Processors – these devices are normally connected in series to the signal path, so the input signal is replaced with the processed signal (e.g., equalizers).
  • Effects – while an effect can be considered as any unit that affects the signal, the term is mostly used to describe units that are connected in parallel to the signal path and therefore they add to the existing sounds, but do not replace them. Examples would include reverb and delay.
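The series/parallel distinction above can be sketched in code. This is a minimal illustration, with placeholder processing functions standing in for real equalizers or reverbs:

```python
# Sketch of the two ways outboard gear and plugins are connected.

def insert_processor(signal, process):
    """Series (insert): the processed signal replaces the dry one,
    as with an equalizer."""
    return [process(x) for x in signal]

def send_effect(signal, effect, wet=0.3):
    """Parallel (send/return): the effect output is added to the dry
    signal rather than replacing it, as with reverb or delay."""
    wet_signal = [effect(x) for x in signal]
    return [d + wet * w for d, w in zip(signal, wet_signal)]
```

The `wet` parameter plays the role of the effect-return level on a console.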

Multiple Level Controls in Signal Path

A single signal can pass through a large number of level controls – such as an individual channel fader, subgroup master fader, master fader and monitor volume control. According to Holman, this multiplicity of controls creates problems: each console has its own dynamic range, which must be used correctly in order to avoid excessive noise or distortion. Setting the various controls correctly is, however, relatively easy, and Holman points to the scale of each control as a clue. With 0 dB being the nominal setting, many controls offer “gain in hand” above 0 dB, meaning one can turn a control up from the nominal setting to make something sound out clearly. Other controls, such as submasters and master level controls, are used for slight trims to the section-by-section balance or for the main fade-ins and fade-outs of the overall mix.[11]
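Because cascaded level controls are calibrated in decibels, their settings simply add along the signal path. A small sketch of this gain-staging arithmetic; the specific fader settings are illustrative:

```python
import math

def db_to_linear(db):
    """Convert a level-control setting in dB to a linear gain factor."""
    return 10 ** (db / 20.0)

def chain_gain_db(stages_db):
    """Total gain of cascaded level controls is the sum of their
    individual dB settings."""
    return sum(stages_db)

# channel fader at -3 dB, subgroup master at 0 dB (nominal),
# master fader trimmed by -1 dB
total_db = chain_gain_db([-3.0, 0.0, -1.0])   # -4 dB overall
linear_gain = db_to_linear(total_db)
```

Keeping each stage near its 0 dB nominal setting, rather than boosting heavily at one stage and cutting at another, preserves the dynamic range Holman describes.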

Processes that Affect Audio Levels

  • Faders – used to attenuate or boost the level of signals.
  • Pan pots – A fundamental part of configuration in a recording console is panning. Pan pots are devices that place a sound among the channels: left and right in stereo, and among L, C, R, LS and RS – side to side and front to back – in surround.[11]
  • Compressors – Every track has a volume range. When tracks are combined in mixing, a problem of unintentional masking of one signal by another arises. For example, consider a dialog recording and a music recording, each with its own volume range. Although most of the time the music lies underneath the dialog, at certain points the peaks of the music may coincide with the minimum level of the dialog, and the dialog will be obscured.[11]

To solve this problem one could decrease the gain of the music during its louder passages and increase it during its softer ones, maintaining a more even level behind the dialog – but doing this by hand is time-consuming. The process can easily be automated with a device called a compressor, which is equipped with controls to vary the volume range over which compression occurs, the amount of compression, and how fast or slow the compressor acts.[11]

  • Expanders – An expander does the opposite of a compressor: it increases the volume range of a source, either across a wide dynamic range or restricted to a narrower region by its controls. Restricting expansion to low-level sounds helps to minimize noise; this function, often referred to as downward expansion, noise gating or keying, turns the level down below a threshold set by a control. Noise gates have audible problems of their own. If, for instance, a dialog recording contains air-conditioning noise, the gate’s threshold can be set to distinguish between the dialog and the air conditioner, because the noise is lower in level than the dialog. The problem is that the air-conditioning noise is then audible behind the dialog but absent between lines; this exaggerated difference can be more noticeable than simply leaving the recording unprocessed.[11]
  • Limiters – A limiter acts on signals above a certain threshold: for each dB of increase on the input above the threshold, the gain is reduced by the same amount, so the output level stays at the threshold despite any further increase at the input. Limiters can be used to catch occasional events that might not otherwise be controlled in level, bringing them into a range in which the recording medium can handle the signal linearly.[11]

The items discussed thus far affect the level of the audio signal. Level control is the most commonly used process, found even on the simplest of mixers.[11]
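The static gain behaviour of the compressor, downward expander (gate) and limiter described above can be sketched as simple functions of input level in dB. The thresholds and ratios here are illustrative, not taken from any particular unit:

```python
def compressor_gain(level_db, threshold_db=-20.0, ratio=4.0):
    """Above threshold, the output rises only 1 dB for every `ratio`
    dB of input; returns the resulting gain change in dB (negative
    means gain reduction)."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: no gain change
    over = level_db - threshold_db
    return (over / ratio) - over

def gate_gain(level_db, threshold_db=-50.0, floor_db=-80.0):
    """Downward expansion / noise gating: below the threshold, the
    level is turned down toward the floor."""
    return 0.0 if level_db >= threshold_db else floor_db

def limiter_gain(level_db, ceiling_db=-1.0):
    """Above the ceiling, gain is reduced dB-for-dB so the output
    never exceeds it."""
    return min(0.0, ceiling_db - level_db)
```

Real units add attack and release time constants so these gain changes are applied smoothly rather than instantaneously.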

Processes that Affect the Frequency Response

Processes that primarily affect the frequency response of the signal are generally seen as second in importance to level control. These processes clean up the audio signal, improve how signals blend with one another, adjust for the loudness effect, and generally create a more pleasant – or deliberately worse – sound. There are two principal frequency-response processes: equalization and filtering.[11]

  • Equalizers – The simplest way of describing EQ is as the process of altering the frequency response in a manner similar to the tone controls on a stereo system. Professional equalizers offer finer control over the audio spectrum, dividing it into three or four bands, which may be called the low-bass, mid-bass, mid-treble and high-frequency controls.[11]
  • Filters – Filters are used to essentially eliminate certain frequencies from the output, stripping off part of the audio spectrum. There are various types of filters:

High-pass filter (low-cut): used to remove excessive room noise at low frequencies. Low-pass filter (high-cut): used to help isolate a low-frequency instrument playing in a studio along with others. Band-pass filter: a combination of high-pass and low-pass filters, often referred to as a telephone filter because restricting the audible frequency range in this way sounds like one of the primary things a telephone does to sound.[12]
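These filter types can be illustrated with simple first-order filters. In this sketch, which assumes a 48 kHz sample rate and illustrative cutoff frequencies, the band-pass “telephone” filter is built by cascading a high-pass and a low-pass:

```python
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    """Feedback coefficient of a one-pole smoother for a given cutoff."""
    return math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def low_pass(signal, cutoff_hz, sample_rate=48000):
    """High-cut: keeps low frequencies, smooths away highs."""
    a = one_pole_coeff(cutoff_hz, sample_rate)
    out, y = [], 0.0
    for x in signal:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def high_pass(signal, cutoff_hz, sample_rate=48000):
    """Low-cut: the signal minus its low-frequency content."""
    lp = low_pass(signal, cutoff_hz, sample_rate)
    return [x - l for x, l in zip(signal, lp)]

def band_pass(signal, low_cut=300.0, high_cut=3400.0, sample_rate=48000):
    """Telephone-style filter: high-pass then low-pass in series."""
    return low_pass(high_pass(signal, low_cut, sample_rate),
                    high_cut, sample_rate)
```

Professional filters use steeper slopes (higher-order designs) than this first-order sketch, but the routing is the same.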

Processes that Affect the Time Domain

  • Reverbs – used to simulate boundary reflections created in a real room, adding a sense of space and depth to otherwise 'dry' recordings. Another use is to help distinguish among auditory objects; all sound having one reverberant character will be categorized together by human hearing in a process called auditory streaming. This is an important feature in layering sound in depth from in front of the screen to behind it.[11]

For example, before the advent of electronic reverb and echo processing, more basic, physical means were used to generate these effects. An echo chamber is a large reverberant room equipped with a speaker and at least two spaced microphones: signals were sent to the speaker, and the reverberation generated in the room was picked up by the two microphones, which constituted a “stereo return”.[12]
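The echo-chamber principle – sound sent out, delayed by the room and returned – survives in digital form as the feedback comb filter, a basic building block of artificial reverberators. A minimal sketch, with illustrative delay and feedback values:

```python
def comb_reverb(signal, delay_samples=1200, feedback=0.5):
    """Feedback comb filter: each echo returns after `delay_samples`
    samples, scaled by `feedback`, producing a decaying echo train."""
    out = []
    buf = [0.0] * delay_samples  # circular delay line
    idx = 0
    for x in signal:
        echo = buf[idx]
        y = x + echo
        buf[idx] = y * feedback   # feed the output back into the line
        idx = (idx + 1) % delay_samples
        out.append(y)
    return out
```

Practical reverberators combine several comb filters with different delays (plus allpass stages) so the echoes fuse into a smooth tail rather than a discrete repeat.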

Downmixing

Downmixing is the making of a stereo mix from a 5.1 surround mix, and is done in the user’s home-theatre receiver. In the downmixing circuit, the left and right surround channels are blended with the left and right front channels, and the centre channel is blended equally with the left and right channels; the LFE channel is either mixed with the front signals or not used. Downmixes made this way seldom create a well-balanced stereo mix, so the 5.1 mix should be checked for stereo compatibility. Surround monitoring systems should have a downmix button so one can hear how surround mixes will sound when downmixed to stereo by consumer receivers. It is best to make a separate stereo mix and record it on tracks 7 and 8 of an eight-track mixdown recorder; this stereo mix can be put on DVD-Audio discs or Super Audio CDs along with the surround mix.[13]

Consumer electronics may also downmix automatically. For example, a DVD player or sound card may downmix a surround sound signal (four or more channels) to stereophonic sound (two channels) for playback through two speakers.
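A common downmix formula blends the centre and surround channels into the front pair at −3 dB, a linear factor of about 0.707. A sketch of this calculation on single sample values, with the LFE discarded by default as described above:

```python
def downmix_to_stereo(L, R, C, LFE, Ls, Rs, include_lfe=False):
    """Fold a 5.1 sample set down to stereo: centre and surrounds are
    blended into the front pair at -3 dB; LFE is usually discarded."""
    k = 0.7071  # -3 dB as a linear gain factor
    lo = L + k * C + k * Ls
    ro = R + k * C + k * Rs
    if include_lfe:
        lo += k * LFE
        ro += k * LFE
    return lo, ro
```

The −3 dB coefficient keeps the centre channel at roughly equal acoustic power when it is reproduced by two speakers instead of one; receivers may also scale the result down to avoid clipping.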

Downmixing doesn't just apply to audio signals. In radio communication, downmixing brings an IF signal down to baseband via demodulation with a complex carrier frequency.[citation needed]

Mixing in surround

Any console with multiple buses (typically eight or more) can be used to create a surround-sound mix, but an important practical question is how easily signals can be routed, panned and processed in a surround environment to create a 5.1 mix.

Whether you’re working in an analog hardware, digital hardware or DAW “in-the-box” mixing environment, the ability to pan mono or stereo sources into a surround soundscape, place effects in the 5.1 scape and monitor multiple output formats without difficulty can make the difference between a difficult, compromised mix and one that lifts your spirits.[14]

Mixing in surround is very similar to mixing in stereo except that there are more speakers, placed to "surround" the listener. In addition to the horizontal panoramic options available in stereo, mixing in surround lets the mix engineer pan sources within a much wider and more enveloping environment. In a surround mix, sounds can appear to originate from many more or almost any direction depending on the number of speakers used, their placement and how audio is processed.

There are two common ways to approach mixing in surround:

  • Expanded Stereo – With this approach, the mix will still sound very much like an ordinary stereo mix. Most of the sources such as the instruments of a band, the vocals, and so on, will still be panned between the left and right speakers, but lower levels might also be sent to the rear speakers in order to create a wider stereo image, while lead sources such as the main vocal might be sent to the center speaker. Additionally, reverb and delay effects will often be sent to the rear speakers to create a more realistic sense of being in a real acoustic space. In the case of mixing a live recording that was performed in front of an audience, signals recorded by microphones aimed at, or placed among the audience will also often be sent to the rear speakers to make the listener feel as if he or she is actually a part of the audience.
  • Complete Surround/All speakers are treated equally – Instead of following the traditional ways of mixing in stereo, this much more liberal approach lets the mix engineer do anything he or she wants. Instruments can appear to originate from anywhere, or even spin around the listener. When done appropriately and with taste, interesting sonic experiences can be achieved, as was the case with James Guthrie's 5.1 mix of Pink Floyd's The Dark Side of the Moon, albeit with input from the band.[15] This is a much different mix from the 1970s quadraphonic mix.

Naturally, these two approaches can be combined any way the mix engineer sees fit. More recently, a third approach to mixing in surround was developed by surround mix engineer Unne Liljeblad.

  • MSS – Multi Stereo Surround[16] – This approach treats the speakers in a surround sound system as a multitude of stereo pairs. For example, a stereo recording of a piano, created using two microphones in an ORTF configuration, might have its left channel sent to the left rear speaker and its right channel sent to the center speaker. The piano might also be sent to a reverb whose left and right outputs are sent to the left front speaker and right rear speaker, respectively. Additional elements of the song, such as an acoustic guitar recorded in stereo, might have their left and right channels sent to a different stereo pair, such as the left front speaker and the right rear speaker, with the reverb returning to yet another stereo pair, the left rear speaker and the center speaker. Thus, multiple clean stereo recordings surround the listener without the smearing comb-filtering effects that often occur when the same or similar sources are sent to multiple speakers.
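Placing a source between the two speakers of any such pair – in stereo or in surround – is typically done with a constant-power pan law, so the perceived loudness stays steady as the source moves across the pair. A minimal sketch:

```python
import math

def constant_power_pan(sample, position):
    """Split a mono sample between the two speakers of a pair.
    position 0.0 = fully in the first speaker, 1.0 = fully in the
    second; the two gains always sum to unit power."""
    theta = position * math.pi / 2.0
    return sample * math.cos(theta), sample * math.sin(theta)
```

A surround panner generalizes this idea, distributing the signal across several speaker pairs at once according to the pan pot's front/back and left/right positions.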

References

  1. ^ Strong, Jeff (2009). Home Recording For Musicians For Dummies (3rd ed.). Indianapolis, Indiana: Wiley Publishing, Inc. p. 249. 
  2. ^ Hepworth-Sawyer, Russ (2009). From Demo to Delivery: The Production Process. Oxford, United Kingdom: Focal Press. p. 109. 
  3. ^ a b "Interview with Joe Chiccarelli". HitQuarters. 14 June 2010. Retrieved Sep 3, 2010. 
  4. ^ Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 168. ISBN 978-0-240-52163-3. 
  5. ^ Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 169. ISBN 978-0-240-52163-3. 
  6. ^ Huber, David Miles (2001). Modern Recording Techniques. Focal Press. p. 321. ISBN 0240804562. 
  7. ^ "Eurythmics: Biography". Artist Directory. Rolling Stone. 2010. Retrieved March 20, 2010. 
  8. ^ "Studio Recording Software: Personal And Project Audio Adventures". studiorecordingsoftware101.com. 2008. Retrieved March 20, 2010. 
  9. ^ a b c White, Paul (2003). Creative Recording (2nd ed.). Sanctuary Publishing. p. 335. ISBN 1-86074-456-7. 
  10. ^ a b Izhaki, Roey (2008). Mixing Audio. Focal Press. p. 566. ISBN 978-0-240-52068-1. 
  11. ^ a b c d e f g h i j k l Holman, Tomlinson (2010). Sound for Film and Television (3rd ed.). Oxford, United Kingdom: Elsevier Inc. p. 172. ISBN 978-0-240-81330-1. 
  12. ^ a b Rumsey, Francis; McCormick, Tim (2009). Sound and Recording (6th ed.). Oxford, United Kingdom: Elsevier Inc. p. 390. ISBN 978-0-240-52163-3. 
  13. ^ Bartlett, Bruce; Bartlett, Jenny (2009). Practical Recording Techniques (5th ed.). Oxford, United Kingdom: Focal Press. p. 484. ISBN 978-0-240-81144-4. 
  14. ^ Huber, David Miles; Runstein, Robert (2010). Modern Recording Techniques (7th ed.). Oxford, United Kingdom: Focal Press. p. 559. ISBN 978-0-240-81069-0. 
  15. ^ http://www.digitalbits.com/reviewsdvdasacd/pinkfloyddarksidesacd.html
  16. ^ "Surround Sound Mixing". www.mix-engineer.com. Retrieved 2010-01-12.