MPEG-2

From Wikipedia, the free encyclopedia

Revision as of 11:22, 31 January 2007

MPEG-2 is a standard for "the generic coding of moving pictures and associated audio information"[1]. It is widely used around the world to specify the format of the digital television signals that are broadcast by terrestrial (over-the-air), cable, and direct broadcast satellite TV systems. It also specifies the format of movies and other programs that are distributed on DVD and similar disks. The standard allows text and other data, e.g. a program guide for TV viewers, to be added to the video and audio data streams. TV stations, TV receivers, DVD players, and other equipment are all designed to this standard. MPEG-2 was the second of several standards developed by the Moving Picture Experts Group (MPEG) and is an international standard (ISO/IEC 13818).

While MPEG-2 is the core of most digital television and DVD formats, it does not completely specify them. Regional institutions adapt it to their needs by restricting and augmenting aspects of the standard. See "Profiles and Levels."

MPEG-2 includes a Systems part (part 1) that defines two distinct (but related) container formats. One is Transport Stream, which is designed to carry digital video and audio over somewhat-unreliable media. MPEG-2 Transport Stream is commonly used in broadcast applications, such as ATSC and DVB. MPEG-2 Systems also defines Program Stream, a container format that is designed for reasonably reliable media such as disks. MPEG-2 Program Stream is used in the DVD and SVCD standards.
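As an illustrative sketch (not taken from the standard's text), the fixed 188-byte Transport Stream packet layout can be unpacked with a few bit operations; the field names below follow the Systems part, and the example packet is a null packet used for bitrate stuffing:

```python
def parse_ts_header(packet: bytes) -> dict:
    """Parse the 4-byte header of a 188-byte MPEG-2 Transport Stream packet."""
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a TS packet (must be 188 bytes starting with sync byte 0x47)")
    return {
        "transport_error": bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],   # 13-bit packet identifier
        "scrambling": (packet[3] >> 6) & 0x03,
        "adaptation_field": (packet[3] >> 4) & 0x03,
        "continuity_counter": packet[3] & 0x0F,
    }

# A null packet (PID 0x1FFF), inserted by multiplexers to pad the stream rate
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(parse_ts_header(null_packet)["pid"])   # 8191 (0x1FFF)
```

The small fixed packet size is what makes Transport Stream suitable for unreliable channels: a corrupted packet loses at most 184 bytes of payload, and the continuity counter lets the receiver detect the gap.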

The Video part (part 2) of MPEG-2 is similar to MPEG-1, but also provides support for interlaced video (the format used by analog broadcast TV systems). MPEG-2 video is not optimized for low bit-rates (less than 1 Mbit/s), but outperforms MPEG-1 at 3 Mbit/s and above. All standards-conforming MPEG-2 Video decoders are fully capable of playing back MPEG-1 Video streams.

With some enhancements, MPEG-2 Video and Systems are also used in most HDTV transmission systems.

The MPEG-2 Audio part (defined in Part 3 of the standard) enhances MPEG-1's audio by allowing the coding of audio programs with more than two channels. Part 3 of the standard allows this to be done in a backwards compatible way, allowing MPEG-1 audio decoders to decode the two main stereo components of the presentation.

Part 7 of the MPEG-2 standard specifies a rather different, non-backwards-compatible audio format. Part 7 is referred to as MPEG-2 AAC. While AAC is more efficient than the previous MPEG audio standards, it is much more complex to implement, and somewhat more powerful hardware is needed for encoding and decoding.

Video coding (simplified)

An HDTV camera generates a raw video stream of more than one billion bits per second. This stream must be compressed if digital TV is to fit in the bandwidth of available TV channels and if movies are to fit on DVDs. Fortunately, video compression is practical because the data in pictures is often redundant in space and time. For example, the sky can be blue across the top of a picture and that blue sky can persist for frame after frame. Also, because of the way the eye works, it is possible to delete some data from video pictures with almost no noticeable degradation in image quality.
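The "more than one billion bits per second" figure follows from simple arithmetic; as a sketch, for one common HDTV format (assuming 8 bits per sample and no chroma subsampling):

```python
width, height, fps = 1920, 1080, 30    # one common HDTV raster and frame rate
bits_per_pixel = 3 * 8                 # one luminance and two chrominance samples, 8 bits each
raw_bitrate = width * height * fps * bits_per_pixel
print(raw_bitrate)                     # 1492992000 bits/s, roughly 1.5 Gbit/s
```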

TV cameras used in broadcasting usually generate 50 pictures a second (in Europe and elsewhere) or 59.94 pictures a second (in North America and elsewhere). Digital television requires that these pictures be digitized so that they can be processed by computer hardware. Each picture element (a pixel) is then represented by one luminance number and two chrominance numbers. These describe the brightness and the color of the pixel (see YCbCr). Thus, each digitized picture is initially represented by three rectangular arrays of numbers.

A common (and old) trick to reduce the amount of data that must be processed per second is to separate the picture into two fields: the "top field," which is the odd numbered rows, and the "bottom field," which is the even numbered rows. The two fields are displayed alternately. This is called interlaced video. Two successive fields are called a frame. The typical frame rate is then 25 or 29.97 frames a second. If the video is not interlaced, then it is called progressive video and each picture is a frame. MPEG-2 supports both options.
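The field split described above is just row slicing; a minimal sketch using NumPy (the toy frame and its size are arbitrary):

```python
import numpy as np

frame = np.arange(6 * 4).reshape(6, 4)   # a toy 6-line, 4-pixel-wide "frame"

top_field = frame[0::2]       # lines 1, 3, 5 (odd-numbered in 1-based counting)
bottom_field = frame[1::2]    # lines 2, 4, 6

# Interleaving the two fields again reproduces the original frame
rebuilt = np.empty_like(frame)
rebuilt[0::2], rebuilt[1::2] = top_field, bottom_field
print(np.array_equal(rebuilt, frame))    # True
```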

Another trick to reduce the data rate is to thin out the two chrominance matrices. In effect, the remaining chrominance values represent the nearby values that are deleted. Thinning works because the eye is more responsive to brightness than to color. The 4:2:2 chrominance format indicates that half the chrominance values have been deleted. The 4:2:0 chrominance format indicates that three quarters of the chrominance values have been deleted. If no chrominance values have been deleted, the chrominance format is 4:4:4. MPEG-2 allows all three options.
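The data savings from thinning the chrominance matrices can be checked directly (the frame size below is just an example):

```python
def samples_per_frame(width, height, chroma_format):
    """Total samples (luma plane plus two thinned chroma planes) per frame."""
    luma = width * height
    chroma_fraction = {"4:4:4": 1.0, "4:2:2": 0.5, "4:2:0": 0.25}[chroma_format]
    return int(luma + 2 * luma * chroma_fraction)

full = samples_per_frame(720, 576, "4:4:4")
print(samples_per_frame(720, 576, "4:2:2") / full)   # 2/3 of the data: half the chroma deleted
print(samples_per_frame(720, 576, "4:2:0") / full)   # 1/2 of the data: three quarters deleted
```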

MPEG-2 specifies that the raw frames be compressed into three kinds of frames: I(ntra-coded)-frames, P(redictive-coded)-frames, and B(idirectionally predictive-coded)-frames.

An I-frame is a compressed version of a single uncompressed (raw) frame. It takes advantage of spatial redundancy and of the inability of the eye to detect certain changes in the image. Unlike P-frames and B-frames, I-frames do not depend on data in the preceding or the following frames.

Briefly, the raw frame is divided into 8 pixel by 8 pixel blocks. The data in each block is transformed by a discrete cosine transform. The result is an 8 by 8 matrix of coefficients. The transform converts spatial variations into frequency variations, but it does not change the information in the block; the original block can be recreated exactly by applying the inverse cosine transform. The advantage of doing this is that the image can now be simplified by quantizing the coefficients. Many of the coefficients, usually the higher-frequency components, will then be zero. The penalty of this step is the loss of some subtle distinctions in brightness and color: if one applies the inverse transform to the matrix after it is quantized, one gets an image that looks very similar to the original but is not quite as nuanced.

Next, the quantized coefficient matrix is itself compressed. Typically, one corner of the quantized matrix is filled with zeros. By starting in the opposite corner of the matrix, zigzagging through the matrix to combine the coefficients into a string, substituting run-length codes for consecutive zeros in that string, and then applying Huffman coding to the result, one reduces the matrix to a smaller array of numbers. It is this array that is broadcast or put on DVDs. In the receiver or the player, the whole process is reversed, enabling the receiver to reconstruct, to a close approximation, the original frame.
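The I-frame pipeline just described (transform, quantize, zigzag scan) can be sketched as follows; the flat quantizer step and the toy block are assumptions for illustration, since real encoders use per-frequency quantization matrices and Huffman tables:

```python
import numpy as np

N = 8
# Orthonormal 8-point DCT-II matrix; dct2/idct2 below are exact inverses
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0] /= np.sqrt(2)

def dct2(block):   return C @ block @ C.T
def idct2(coeff):  return C.T @ coeff @ C

block = np.full((8, 8), 128.0)   # a flat gray block...
block[:, 4:] = 136.0             # ...with a vertical brightness step
coeffs = dct2(block - 128.0)     # level-shift, then transform

Q = 16.0                         # one flat quantizer step (an assumption for this sketch)
quantized = np.round(coeffs / Q)
print(int(np.count_nonzero(quantized)))   # only a handful of the 64 coefficients survive

# Zigzag scan: walk the anti-diagonals, alternating direction, so the mostly-zero
# high-frequency corner ends up as one long run of zeros at the end of the string
order = sorted(((i, j) for i in range(8) for j in range(8)),
               key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else -p[0]))
scan = [int(quantized[i, j]) for i, j in order]

# Decoder side: dequantize and inverse-transform; the result is close to the original
recon = idct2(quantized * Q) + 128.0
print(float(np.abs(recon - block).max()) < 6.0)   # small quantization error only
```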

Typically, every 15th frame or so is made into an I-frame. P-frames and B-frames might follow an I-frame like this, IBBPBBPBBPBB(I), to form a GOP; however, the standard is flexible about this.

P-frames provide more compression than I-frames because they take advantage of the data in the previous I-frame or P-frame. I-frames and P-frames are called reference frames. To generate a P-frame, the previous reference frame is reconstructed, just as it would be in a TV receiver or DVD player. The frame being compressed is divided into 16 pixel by 16 pixel "macroblocks." Then, for each of those macroblocks, the reconstructed reference frame is searched to find the 16 by 16 macroblock that best matches it. The offset between the two macroblocks is encoded as a "motion vector." Frequently, the offset is zero, but if something in the picture is moving, the offset might be something like 23 pixels to the right and 4 pixels up. The match between the two macroblocks will often not be perfect. To correct for this, the encoder computes the strings of coefficient values as described above for both macroblocks and then subtracts one from the other. This "residual" is appended to the motion vector, and the result is sent to the receiver or stored on the DVD for each macroblock being compressed. Sometimes no suitable match is found; the macroblock is then treated like an I-frame macroblock.
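A sketch of an exhaustive block-matching search like the one described above. The frame contents, search range, and sum-of-absolute-differences cost are illustrative choices, not mandated by the standard, which leaves motion estimation entirely up to the encoder:

```python
import numpy as np

def best_motion_vector(ref, target, bx, by, search=4, bs=16):
    """Find the offset into `ref` whose bs-by-bs block best matches
    (smallest sum of absolute differences) the target macroblock at (bx, by)."""
    block = target[by:by + bs, bx:bx + bs].astype(int)
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + bs, x:x + bs].astype(int) - block).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best, int(best_sad)

# A 64x64 reference frame with a bright square; the next frame shifts it 3 px right, 1 px down
ref = np.zeros((64, 64), dtype=np.uint8)
ref[20:36, 20:36] = 200
target = np.roll(np.roll(ref, 3, axis=1), 1, axis=0)

# The content moved (+3, +1), so the best match in the reference is at offset (-3, -1)
mv, sad = best_motion_vector(ref, target, bx=16, by=16)
print(mv, sad)   # (-3, -1) 0 — a perfect match, so the residual would be all zeros
```

Because the match here is exact, the residual is zero and only the motion vector needs to be coded; in real footage the residual is small but nonzero and is transformed and quantized just like I-frame data.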

The processing of B-frames is similar to that of P-frames except that B-frames use the picture in the following reference frame as well as the picture in the preceding reference frame. As a result, B-frames usually provide more compression than P-frames. B-frames are never reference frames.

While the above generally describes MPEG-2 video compression, there are many details not discussed here, including details involving fields, chrominance formats, responses to scene changes, special codes that label the parts of the bitstream, and other pieces of information.

Audio encoding

MPEG-2 also introduces new audio encoding methods. These are:

  • low bitrate encoding with halved sampling rate (MPEG-1 Layer 1/2/3 LSF)
  • multichannel encoding with up to 5.1 channels
  • MPEG-2 AAC

Profiles and Levels

MPEG-2 Profiles

  Abbr.  Name             Frames   YCbCr  Streams  Comment
  SP     Simple Profile   P, I     4:2:0  1        no interlacing
  MP     Main Profile     P, I, B  4:2:0  1
  422P   4:2:2 Profile    P, I, B  4:2:2  1
  SNR    SNR Profile      P, I, B  4:2:0  1-2      SNR: Signal to Noise Ratio
  SP     Spatial Profile  P, I, B  4:2:0  1-3      low, normal and high quality decoding
  HP     High Profile     P, I, B  4:2:2  1-3

MPEG-2 Levels

  Abbr.  Name        Pixels/line  Lines  Frame rate (Hz)  Bitrate (Mbit/s)
  LL     Low Level   352          288    30               4
  ML     Main Level  720          576    30               15
  H-14   High 1440   1440         1152   30               60
  HL     High Level  1920         1152   30               80

  Profile @ Level  Resolution (px)           Max. frame rate (Hz)  Sampling  Bitrate (Mbit/s)  Example application
  SP@LL            176 × 144                 15                    4:2:0     0.096             wireless handsets
  SP@ML            352 × 288 / 320 × 240     15 / 24               4:2:0     0.384             PDAs
  MP@LL            352 × 288                 30                    4:2:0     4                 set-top boxes (STB)
  MP@ML            720 × 480 / 720 × 576     30 / 25               4:2:0     15 (DVD: 9.8)     DVD, SD-DVB
  MP@H-14          1440 × 1080 / 1280 × 720  30                    4:2:0     60 (HDV: 25)      HDV
  MP@HL            1920 × 1080 / 1280 × 720  30 / 60               4:2:0     80                ATSC 1080i, 720p60, HD-DVB (HDTV)
  422P@LL                                                          4:2:2
  422P@ML          720 × 480 / 720 × 576     30 / 25               4:2:2     50                Sony IMX (I-frame only), broadcast "contribution" video (I & P only)
  422P@H-14        1440 × 1080 / 1280 × 720  30 / 60               4:2:2     80                potential future MPEG-2-based HD products from Sony and Panasonic
  422P@HL          1920 × 1080 / 1280 × 720  30 / 60               4:2:2     300               potential future MPEG-2-based HD products from Panasonic

DVD

The DVD standard uses MPEG-2 video, but imposes some restrictions:

  • Allowed Resolutions
    • 720 × 480, 704 × 480, 352 × 480, 352 × 240 pixel (NTSC)
    • 720 × 576, 704 × 576, 352 × 576, 352 × 288 pixel (PAL)
  • Allowed Aspect ratio (image) (Display AR)
    • 4:3
    • 16:9
    • (1.85:1 and 2.35:1, among others, are often listed as valid DVD aspect ratios, but are actually just a 16:9 image with the top and bottom of the frame masked in black)
  • Allowed Frame rates
    • 29.97 frame/s (NTSC)
    • 25 frame/s (PAL)
Note: By using a pattern of REPEAT_FIRST_FIELD flags on the headers of encoded pictures, pictures can be displayed for either two or three fields and almost any picture display rate (minimum ⅔ of the frame rate) can be achieved. This is most often used to display 23.976 (approximately film rate) video on NTSC.
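The note above amounts to simple field arithmetic; a sketch of how the alternating three-field/two-field pattern (3:2 pulldown) maps film rate onto the NTSC field rate:

```python
from fractions import Fraction

film_rate = Fraction(24000, 1001)    # 23.976... frames/s, approximately film rate
fields = [3, 2, 3, 2]                # REPEAT_FIRST_FIELD alternately set and clear

# On average each film frame occupies 2.5 display fields
fields_per_frame = Fraction(sum(fields), len(fields))
field_rate = film_rate * fields_per_frame
print(field_rate)                    # 60000/1001, i.e. NTSC's 59.94 fields/s

# The minimum display rate: a frame held for 3 fields runs at 2/3 of the nominal frame rate
print(Fraction(2, 3))
```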
  • Audio+video bitrate
    • Video peak 9.8 Mb/s
    • Total peak 10.08 Mb/s
    • Minimum 300 kb/s
  • YUV 4:2:0
  • Additional subtitles possible
  • Closed captioning (NTSC only)
  • Audio
    • Linear Pulse Code Modulation (LPCM): 48 kHz or 96 kHz; 16- or 24-bit; up to six channels (not all combinations possible due to bitrate constraints)
    • MPEG Layer 2 (MP2): 48 kHz, up to 5.1 channels (required in PAL players only)
    • Dolby Digital (DD, also known as AC-3): 48 kHz, 32–448 kbit/s, up to 5.1 channels
    • Digital Theater Systems (DTS): 754 kb/s or 1510 kb/s (not required for DVD player compliance)
    • NTSC DVDs must contain at least one LPCM or Dolby Digital audio track.
    • PAL DVDs must contain at least one MPEG Layer 2, LPCM, or Dolby Digital audio track.
    • Players are not required to playback audio with more than two channels, but must be able to downmix multichannel audio to two channels.
  • GOP structure
    • Sequence header must be present at the beginning of every GOP
    • Maximum frames per GOP: 18 (NTSC) / 15 (PAL), i.e. about 0.6 seconds in both cases
    • Closed GOP required for multiple-angle DVDs
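The 0.6-second figure in the GOP restriction above is easy to check (using the exact NTSC rate of 30000/1001 frames/s):

```python
ntsc_frames, ntsc_fps = 18, 30000 / 1001   # 29.97... frames/s
pal_frames, pal_fps = 15, 25.0

print(round(ntsc_frames / ntsc_fps, 3))    # 0.601 seconds
print(round(pal_frames / pal_fps, 3))      # 0.6 seconds
```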

DVB

Application-specific restrictions on MPEG-2 video in the DVB standard:

Allowed resolutions for SDTV:

  • 720, 640, 544, 480 or 352 × 480 pixel, 24/1.001, 24, 30/1.001 or 30 frame/s
  • 352 × 240 pixel, 24/1.001, 24, 30/1.001 or 30 frame/s
  • 720, 704, 544, 480 or 352 × 576 pixel, 25 frame/s
  • 352 × 288 pixel, 25 frame/s

For HDTV:

  • 720 × 576 × 50 frames/s progressive (576p50)
  • 1280 × 720 × 25 or 50 frames/s progressive (720p50)
  • 1440 or 1920 × 1080 × 25 frames/s progressive (1080p25 - film mode)
  • 1440 or 1920 × 1080 × 25 frames/s interlaced (1080i25)
  • 1920 × 1080 × 50 frames/s progressive (1080p50), a possible future H.264/AVC format

ATSC

Allowed resolutions:

  • 1920 × 1080 pixel, 30 frame/s (1080i)
  • 1280 × 720 pixel, 60 frame/s (720p)
  • 704 or 640 × 480 pixel, 30 frame/s (480i, 480p)

Note: 1080i is encoded with 1920 × 1088 pixel frames, but the last 8 lines are discarded prior to display.
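The 1088-line figure comes from macroblock alignment: coded dimensions must be a whole number of 16-pixel macroblocks, so 1080 display lines are rounded up to 1088. A small sketch (the helper name is ours):

```python
def coded_size(display_size, macroblock=16):
    """Round a display dimension up to a whole number of 16-pixel macroblocks."""
    return -(-display_size // macroblock) * macroblock   # ceiling division

print(coded_size(1080))   # 1088: eight extra coded lines, discarded at display time
print(coded_size(1920))   # 1920: already a multiple of 16, no padding needed
```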

ISO/IEC 13818

Part 1
Systems - describes synchronization and multiplexing of video and audio.
Part 2
Video - compression codec for interlaced and non-interlaced video signals.
Part 3
Audio - compression codec for perceptual coding of audio signals. A multichannel-enabled extension of MPEG-1 audio.
Part 4
Describes procedures for testing compliance.
Part 5
Describes systems for software simulation.
Part 6
Describes extensions for DSM-CC (Digital Storage Media Command and Control).
Part 7
Advanced Audio Coding (AAC)
Part 9
Extension for real-time interfaces.
Part 10
Conformance extensions for DSM-CC.

(Part 8: 10-bit video extension. Its primary application was studio video. Part 8 has been withdrawn due to lack of industry interest.)

Patent holders

Approximately 640 patents worldwide make up the "essential" intellectual property surrounding MPEG-2. These are held by over 20 corporations and one university. Where software patentability is upheld, the use of MPEG-2 requires the payment of licensing fees to the patent holders via MPEG LA, the private organization that manages and administers the patent pool. The development of the standard itself took less time than the patent negotiations.[2]



According to the MPEG-LA licensing agreement, any use of MPEG-2 technology is subject to royalties:

  • Encoders have a $0.50 charge for each product.
  • Decoders have a $0.50 charge for each product. [3] (information correct as of January 2006)

Royalties also apply to any packaged medium (DVDs, data streams), with additional fees according to the length of broadcast.

For GPL-licensed software that is not sold, such as VLC (which uses the libdvdcss library), the end-user bears the royalty.

See also

References
