Sound editor (filmmaking)
A sound editor is a creative professional responsible for selecting and assembling sound recordings in preparation for the final sound mixing or mastering of a television program, motion picture, video game, or any production involving recorded or synthetic sound. Sound editing developed out of the need to fix the incomplete, undramatic, or technically inferior sound recordings of early talkies, and over the decades has become a respected filmmaking craft, with sound editors implementing the aesthetic goals of motion picture sound design.
There are three principal divisions of sound that are combined to create a final mix: dialogue, effects, and music. In larger markets such as New York and Los Angeles, sound editors often specialize in only one of these areas, so a show will have separate dialogue, effects, and music editors. In smaller markets, sound editors are expected to handle all three, and often cross over into mixing as well. Editing effects is often likened to building a sonic world from scratch, while dialogue editing is likened to taking the existing sonic world and repairing it. Dialogue editing is more accurately thought of as "production sound editing": the editor takes the original sound recorded on the set and, using a variety of techniques, makes the dialogue more intelligible and smoother, so the listener does not hear the transitions from shot to shot (the background sound underneath the words often changes dramatically from take to take). Among the challenges effects editors face is creatively blending various elements into believable sounds for everything the audience sees on screen.
The essential piece of equipment in modern sound editing is the digital audio workstation, or DAW. A DAW allows sounds, stored as computer files on a host computer, to be placed in timed synchronization with a motion picture, mixed, manipulated, and documented. The standard DAW in the American film industry, as of 2012, is Avid's Pro Tools, with the majority of systems running on Macs. Another system in present use is Nuendo, a cross-platform DAW from Steinberg (owned by Yamaha), which runs on Macs under Mac OS X as well as on Windows XP. Other systems historically used for sound editing were:
- WaveFrame, manufactured by WaveFrame of Emeryville, CA
- Several DAWs manufactured by Fairlight
- AMS-Neve Audiofile
- AudioVision manufactured by Avid
The WaveFrame, Fairlight, and Audiofile systems were of the "integrated" variety of DAW, requiring the purchase of expensive proprietary hardware and specialized computers rather than standard PCs or Macs. Of the two surviving systems, Pro Tools still requires some proprietary hardware (either a low-cost portable device such as the "Mbox" or the more expensive multichannel A/D and D/A converters for high-end professional applications), while Nuendo (a successor to Cubase) is of the "host-based" variety.
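The core DAW function described above, placing a sound in timed synchronization with picture, reduces to converting a picture position into a sample offset. A minimal sketch, assuming 24 fps film and a 48 kHz sample rate (the function name and timecode are illustrative, not taken from any particular DAW):

```python
# Convert a non-drop 24 fps picture timecode (hours:minutes:seconds:frames)
# to an audio sample offset. At 48 kHz and 24 fps, each picture frame
# corresponds to exactly 2000 audio samples.

FPS = 24
SAMPLE_RATE = 48_000
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS  # 2000

def timecode_to_samples(hh, mm, ss, ff):
    """Return the sample offset at which a sound cue should start."""
    total_frames = ((hh * 60 + mm) * 60 + ss) * FPS + ff
    return total_frames * SAMPLES_PER_FRAME

# A cue spotted at 00:01:30:12 lands this many samples into the session:
offset = timecode_to_samples(0, 1, 30, 12)  # -> 4_344_000
```

Real systems must also handle other frame rates (25 fps, 29.97 drop-frame) and pull-up/pull-down conversions, which is where proprietary synchronization hardware historically came in.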
Sound effects library
Sound effects editors typically use an organized catalog of sound recordings from which sound effects can be easily accessed and used in film soundtracks. There are several commercially distributed sound effects libraries available, the two most well-known publishers being Sound Ideas and The Hollywood Edge. There are also online search engines, such as Sounddogs, which allow users to purchase individual sound effects from a large online database.
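The catalog described above is, in essence, a keyword index over recordings. A toy model of such a library follows; the class name, filenames, and keywords are all illustrative:

```python
# A minimal keyword-indexed sound effects catalog: recordings are tagged
# with keywords at ingest time so cues can be found quickly later.

from collections import defaultdict

class SfxLibrary:
    def __init__(self):
        self._by_keyword = defaultdict(list)

    def add(self, filename, keywords):
        """Index one recording under each of its keywords."""
        for kw in keywords:
            self._by_keyword[kw.lower()].append(filename)

    def search(self, keyword):
        """Return all recordings tagged with the keyword (case-insensitive)."""
        return self._by_keyword.get(keyword.lower(), [])

lib = SfxLibrary()
lib.add("door_slam_01.wav", ["door", "slam", "interior"])
lib.add("door_creak_03.wav", ["door", "creak"])
lib.add("wilhelm.wav", ["scream", "human"])

hits = lib.search("door")  # -> ["door_slam_01.wav", "door_creak_03.wav"]
```

Commercial libraries and search engines such as Sounddogs work on the same principle at much larger scale, adding full-text descriptions and audition playback.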
Many sound effects editors make their own customized sound recordings which are accumulated into highly prized personal sound effects libraries. Often, sound effects used in films will be saved and reused in subsequent films. One particular case in point is a recording known as the "Wilhelm Scream" which has become known for its repeated use in many famous films such as The Charge at Feather River (1953), The Empire Strikes Back (1980), Raiders of the Lost Ark (1981), and Reservoir Dogs (1992). Sound designer Ben Burtt is credited with naming and popularizing the "Wilhelm Scream".
The first sound process to substantially displace silent films in the moviegoing market was the Vitaphone process. Under the Vitaphone process, a microphone recorded the sound performed on set directly to a phonograph master, which made Vitaphone recordings impossible to cut or resynchronize, as later processes would allow. This limited the Vitaphone process to capturing musical acts or one-take action scenes, like Vaudeville routines or other re-creations of stage performances; essentially, scenes that required no editing at all. However, Warner Brothers, even as early as The Jazz Singer, began experimenting with the mixing of multiple phonograph recordings and intercutting between the "master" sync take and coverage of other angles. The original mixing console used to make the master recording of The Jazz Singer, still viewable in the Warner Bros. Studio Museum, has no more than four or five knobs, but each is still visibly labeled with the basic "groups" that a modern sound designer would recognize: "music", "crowd", and so on.
Warner Bros. developed increasingly sophisticated technology to sequence greater numbers of phonograph sound effects to picture using the Vitaphone system, but these were rendered obsolete with the widespread adoption of sound-on-film processes in the early 1930s.
In a sound-on-film process, a microphone captures sound and converts it into a signal that can be photographed on film. Since the recording is imposed linearly on the medium, and the medium is easily cut and glued, sounds recorded can be easily re-sequenced and separated onto separate tracks, allowing more control in mixing. Options expanded further when optical sound recording processes were replaced with magnetic recording in the 1950s. Magnetic recording offered a better signal-to-noise ratio, allowing more tracks to be played simultaneously without increasing noise on the full mix.
The greater number of options available to the editors led to more complex and creative sound tracks, and it was in this period that a set of standard practices became established which continued until the digital era, and many of the notional concepts are still at the core of sound design, computerized or not:
- Sounds are assembled together onto tracks. Many tracks are mixed together (or "dubbed together") to create a final film.
- A track will generally contain only one "type" or group of sound: a track that contains dialogue contains only dialogue, and a track that contains music contains only music. Many tracks may be needed to carry all the sound for one group.
- Tracks may be mixed a group at a time, in a process called predubbing. All of the tracks containing dialogue may be mixed at one time, and all of the tracks containing foley may be mixed at another time. In the process of predubbing, many tracks can be mixed into one.
- Predubs are mixed together to create a final dub. On the occasion of the final dub, final decisions about the balance between different groups of sounds are made.
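The workflow above, grouping tracks, predubbing each group, then combining the predubs into a final dub, amounts to summing audio a group at a time. A minimal sketch using plain lists of samples (the group names and balance weights are illustrative):

```python
# Tracks are equal-length lists of samples. A predub sums one group's
# tracks into a single track; the final dub balances and sums the predubs.

def mix(tracks):
    """Sum any number of equal-length tracks sample by sample."""
    return [sum(samples) for samples in zip(*tracks)]

dialogue_tracks = [[0.1, 0.2], [0.0, 0.1]]
effects_tracks  = [[0.3, 0.0], [0.1, 0.1]]

# Predub: each group is mixed down on its own.
dialogue_predub = mix(dialogue_tracks)   # ~[0.1, 0.3]
effects_predub  = mix(effects_tracks)    # ~[0.4, 0.1]

# Final dub: predubs are balanced against each other and combined.
# The weights stand in for the final-mix decisions made on the stage.
final = [0.8 * d + 1.0 * e
         for d, e in zip(dialogue_predub, effects_predub)]
```

The point of predubbing is visible in the structure: the balance inside a group is settled once, so the final dub only has to weigh a handful of predubs against each other rather than hundreds of raw tracks.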
The Re-Recording Mixer (Dubbing Mixer in the UK) is the specialist who mixes all the audio tracks supplied by the sound editors (including 'live sounds' such as Foley) in a special re-recording or dubbing suite. As well as balancing levels, the mixer applies equalization, compression, filtering, and other processing from a large console. Usually two or three mixers sit alongside one another, each controlling one section of the audio: dialogue, music, or effects.
In the era of optical sound tracks, it was difficult to mix more than eight tracks at once without accumulating excessive noise. At the height of magnetic recording, 200 tracks or more could be mixed together, aided by Dolby noise reduction. In the digital era there is no limit. For example, a single predub can exceed a hundred tracks, and the final dub can be the sum of a thousand tracks.
The mechanical system of sound editing remained unchanged until the early 1990s, when digital audio workstations acquired features sufficient for use in film production: chiefly, the ability to synchronize with picture and to play back many tracks at once with CD-quality fidelity. The quality of 16-bit audio at a 48 kHz sampling rate allowed hundreds of tracks to be mixed together with negligible noise.
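The "hundreds of tracks with negligible noise" claim can be checked with back-of-envelope arithmetic. A common rule of thumb gives 16-bit audio roughly 6.02 dB of dynamic range per bit, and summing N tracks of uncorrelated noise raises the noise floor by about 10·log10(N) dB (a simplified model, ignoring dither and headroom practice):

```python
import math

# Approximate dynamic range of linear PCM: ~6.02 dB per bit.
BITS = 16
dynamic_range_db = 6.02 * BITS  # ~96.3 dB for 16-bit audio

def noise_penalty_db(num_tracks):
    """dB by which the noise floor rises when summing N uncorrelated noise sources."""
    return 10 * math.log10(num_tracks)

# Mixing 256 tracks costs about 24 dB of the available range...
penalty = noise_penalty_db(256)          # ~24.1 dB
# ...leaving roughly 72 dB, still a very quiet mix by analog standards.
remaining = dynamic_range_db - penalty
```

This is why the digital era removed the practical track limits of the optical and magnetic eras: the noise penalty grows only logarithmically with track count.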
The physical manifestation of the work became computerized: sound recordings, and the decisions the editors made in assembling them, were now digitized, and could be versioned, done, undone, and archived instantly and compactly. In the magnetic recording era, sound editors owned trucks to ship their tracks to a mixing stage, and transfers to magnetic film were measured in hundreds of thousands of feet. Once the materials arrived at the stage, a dozen recordists and mix technicians needed half an hour to load the three or four dozen tracks a predub might require. In the digital era, 250 hours of stereo sound, edited and ready to mix, can be transported on a single 160 GB hard drive, and those 250 hours can be copied in four hours or less, whereas the old real-time transfer system would, predictably, take 250 hours.
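The storage figure above is easy to verify, assuming uncompressed 16-bit, 48 kHz stereo (the format the article cites for the digital era):

```python
# Back-of-envelope check: how much disk does 250 hours of 16-bit,
# 48 kHz stereo occupy?

SECONDS = 250 * 3600                 # 250 hours in seconds
BYTES_PER_SECOND = 48_000 * 2 * 2    # sample rate * 2 bytes/sample * 2 channels

total_bytes = SECONDS * BYTES_PER_SECOND   # 172,800,000,000 bytes
total_gib = total_bytes / 2**30            # ~161 GiB
```

That works out to roughly 173 GB (about 161 GiB), consistent with the order of magnitude of the 160 GB drive mentioned above.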
Because of these innovations, sound editors, as of 2005, face the same issues as other computerized, "knowledge-based" professionals, including the loss of work due to outsourcing to cheaper labor markets, and the loss of royalties due to ineffective enforcement of intellectual property rights.
Animation sound editing
In the field of animation, traditionally the sound editors have been given the more prestigious title of "film editor" in screen credits. As animated films are more often than not planned to the frame, the traditional functions of a film editor are often unnecessary. Treg Brown is known to cartoon fans as the sound effects genius of Warner Bros. Animation. Other greats of the field have included Jimmy MacDonald of the Walt Disney Studios, Greg Watson and Don Douglas at Hanna-Barbera, and Joe Siracusa of UPA and various TV cartoon studios.