Video quality

Video quality is a characteristic of a video passed through a video transmission or processing system; it is a formal or informal measure of perceived video degradation, typically relative to the original video. Video processing systems may introduce some amount of distortion or artifacts into the video signal, which negatively impacts the user's perception of the system. For many stakeholders, such as content providers, service providers and network operators, assuring video quality is an important task.

Video quality evaluation is performed to describe the quality of a set of video sequences under study. Video quality can be evaluated objectively (by mathematical models) or subjectively (by asking users for their rating). Also, the quality of a system can be determined offline (i.e., in a laboratory setting for developing new codecs or services), or in-service (to monitor and ensure a certain level of quality).

From analog to digital video

Since the world's first video sequence was recorded and transmitted, many video processing systems have been designed. Such systems encode video streams and transmit them over various kinds of networks or channels. In the age of analog video systems, it was possible to evaluate the quality aspects of a video processing system by calculating the system's frequency response using test signals (for example, a collection of color bars and circles).

Digital video systems have almost fully replaced analog ones, and quality evaluation methods have changed with them. The performance of a digital video processing and transmission system can vary significantly and depends, among other factors, on the characteristics of the input video signal (e.g. amount of motion or spatial detail), the settings used for encoding and transmission, and the channel fidelity or network performance.

Objective video quality

Objective video evaluation techniques are mathematical models that approximate the results of subjective quality assessment, but are based on criteria and metrics that can be measured objectively and automatically evaluated by a computer program. For example, an IPTV provider may choose to monitor its service quality by means of objective metrics, rather than asking users for their opinion or waiting for customer complaints about bad video quality.

Classification of objective video quality metrics

Objective metrics can be classified by the amount of information available about the original signal, or by whether a reference signal is used at all (an illustrative sketch follows this list):[1]

  • Full Reference Methods (FR): FR metrics compute the quality difference by comparing the original video signal against the received video signal. Typically, every pixel from the source is compared against the corresponding pixel in the received video, with no knowledge of the encoding or transmission process in between. More elaborate algorithms may combine this pixel-based estimation with other approaches such as those described below. FR metrics are usually the most accurate, at the expense of higher computational effort.
  • Reduced Reference Methods (RR): RR metrics extract some features of both videos and compare them to give a quality score. They are used when not all of the original video is available, or when it would be practically impossible to transmit it, e.g. over a channel with limited bandwidth. This makes them more efficient than FR metrics.
  • No-Reference Methods (NR): NR metrics try to assess the quality of a distorted video without any reference to the original signal. Due to the absence of an original signal, they may be less accurate than FR or RR approaches, but are more efficient to compute.
    • Pixel-Based Methods (NR-P): Pixel-based metrics use a decoded representation of the signal and analyze the quality based on the pixel information. They typically evaluate specific degradation types only, such as blurring or other coding artifacts.
    • Parametric/Bitstream Methods (NR-B): These metrics make use of features extracted from the transmission container and/or video bitstream, e.g. MPEG-TS packet headers, motion vectors and quantization parameters. They do not have access to the original signal and require no decoding of the video, which makes them more efficient. In contrast to NR-P metrics, they have no access to the final decoded signal.
    • Hybrid Methods (Hybrid NR-P-B): Hybrid metrics combine parameters extracted from the bitstream with a decoded video signal. They are therefore a mix between NR-P and NR-B models.
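
The three classes above differ mainly in what inputs they require. The following Python sketch contrasts hypothetical function signatures for FR, RR and NR metrics; the function names, feature vectors and internal computations are placeholders for illustration, not part of any standard.

    import numpy as np

    def fr_metric(reference: np.ndarray, distorted: np.ndarray) -> float:
        """Full reference: needs every pixel of both videos (hypothetical example)."""
        # e.g. a pixel-wise distance such as the mean squared error
        diff = reference.astype(np.float64) - distorted.astype(np.float64)
        return float(np.mean(diff ** 2))

    def rr_metric(reference_features: np.ndarray, distorted_features: np.ndarray) -> float:
        """Reduced reference: compares low-rate features extracted on both sides (hypothetical example)."""
        return float(np.linalg.norm(reference_features - distorted_features))

    def nr_metric(distorted: np.ndarray) -> float:
        """No reference: scores the received video alone (hypothetical example)."""
        # e.g. a crude sharpness/blur indicator based on horizontal pixel differences
        return float(np.var(np.diff(distorted.astype(np.float64), axis=-1)))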

Examples

The most traditional ways of evaluating the quality of a digital video processing system (e.g. a video codec such as DivX or Xvid) are FR-based. Among the oldest FR metrics are the signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR), which are calculated between the original video signal and the signal passed through the system (e.g., an encoder or a transmission channel). PSNR is the most widely used objective video quality metric. However, PSNR values do not perfectly correlate with perceived visual quality due to the non-linear behavior of the human visual system.
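
As a minimal sketch of such an FR computation, the following Python/NumPy snippet computes PSNR for a pair of equally sized 8-bit frames; it assumes the frames are already spatially and temporally aligned.

    import numpy as np

    def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
        """Peak signal-to-noise ratio between two equally sized 8-bit frames, in dB."""
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10(max_value ** 2 / mse)

A sequence-level PSNR is then commonly reported as the average of the per-frame values.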

More recently, a number of more accurate metrics have been developed. These metrics are inherently more complex than PSNR and require more computational effort to calculate the video quality. Examples include UQI, VQM, PEVQ, SSIM, VQuad-HD and CZD. Based on benchmarks by the Video Quality Experts Group (VQEG) in the course of the Multimedia Test Phase (2007–2008) and the HDTV Test Phase I (2009–2011), some metrics were standardized as:

  • ITU-T Rec. J.246 (RR), 2008
  • ITU-T Rec. J.247 (FR), 2008
  • ITU-T Rec. J.341 (FR), 2011
  • ITU-T Rec. J.342 (RR), 2011
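
Some of these metrics are available in open-source libraries. As an illustration (not tied to any of the recommendations above), the following sketch computes SSIM between two grayscale frames with scikit-image; it assumes scikit-image is installed and both frames are equally sized 8-bit arrays.

    import numpy as np
    from skimage.metrics import structural_similarity

    def frame_ssim(reference: np.ndarray, distorted: np.ndarray) -> float:
        """Structural similarity index between two equally sized 8-bit grayscale frames."""
        return structural_similarity(reference, distorted, data_range=255)

SSIM values lie between -1 and 1, with 1 indicating identical frames; sequence-level scores are typically obtained by averaging per-frame values.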

The above metrics still require access to the original video signal before transmission, or at least to part of it. In practice, an original signal may not always be available for comparison, for example when measuring quality at the user's side. For a more efficient estimation of video quality in such cases, parametric/bitstream-based metrics have also been standardized as ITU-T Rec. P.1201 and P.1202.

Performance evaluation

The performance of an objective video quality metric is evaluated by computing the correlation between the objective scores and the results of subjective tests, which are expressed as mean opinion scores (MOS). The most frequently used performance measures are the (Pearson) linear correlation coefficient, Spearman's rank correlation coefficient, kurtosis, the kappa coefficient and the outlier ratio.
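
As a small sketch of this evaluation step, the following snippet computes the Pearson and Spearman correlations between objective scores and MOS values using SciPy; the numbers are made up purely for illustration.

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    # Hypothetical per-sequence objective scores and subjective MOS values (illustrative only)
    objective = np.array([32.1, 35.4, 28.7, 40.2, 30.5])
    mos = np.array([3.2, 3.9, 2.8, 4.5, 3.0])

    pearson_r, _ = pearsonr(objective, mos)      # linear correlation
    spearman_rho, _ = spearmanr(objective, mos)  # rank-order correlation

    print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_rho:.3f}")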

Other approaches

When estimating the quality of a video codec, all of the mentioned objective methods may require repeated post-encoding tests to determine the encoding parameters that satisfy a required level of visual quality, which makes them time-consuming, complex and impractical for real commercial applications. There is ongoing research into novel objective evaluation methods that can predict the perceived quality level of the encoded video before the actual encoding is performed.[1]

Subjective video quality

The main goal of many objective video quality metrics is to automatically estimate the average user's (viewer's) opinion on the quality of a video processed by a system. Procedures for subjective video quality measurement are described in ITU-R Recommendation BT.500 and ITU-T Recommendation P.910. The main idea is the same as for the Mean Opinion Score (MOS) for audio: video sequences are shown to a group of viewers, and their opinions are recorded and averaged to evaluate the quality of each video sequence. However, the testing procedure may vary depending on the kind of system being tested.
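
As a minimal sketch of the averaging step (the rating scale and values below are illustrative and not taken from BT.500 or P.910), a MOS with an approximate 95% confidence interval could be computed as follows.

    import numpy as np

    # Hypothetical ratings from a viewing panel for one sequence, on a 1-5 scale (illustrative only)
    ratings = np.array([4, 5, 3, 4, 4, 5, 3, 4])

    mos = ratings.mean()
    # approximate 95% confidence interval: 1.96 * standard error of the mean
    ci95 = 1.96 * ratings.std(ddof=1) / np.sqrt(len(ratings))

    print(f"MOS = {mos:.2f} +/- {ci95:.2f}")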

References

  1. Shahid, Muhammad (2014-02-16). "No-reference image and video quality assessment: a classification and review of recent approaches". EURASIP Journal on Image and Video Processing.
