Video quality is a characteristic of a video passed through a video transmission or processing system that describes perceived video degradation (typically compared to the original video). Video processing systems may introduce some amount of distortion or artifacts in the video signal that negatively impact the user's perception of the system. For many stakeholders in video production and distribution, ensuring video quality is an important task.
Video quality evaluation is performed to describe the quality of a set of video sequences under study. Video quality can be evaluated objectively (by mathematical models) or subjectively (by asking users for their rating). Also, the quality of a system can be determined offline (i.e., in a laboratory setting for developing new codecs or services) or in-service (to monitor and ensure a certain level of quality).
Since the world's first video sequence was recorded and transmitted, many video processing systems have been designed. Such systems encode video streams and transmit them over various kinds of networks or channels. In the age of analog video systems, it was possible to evaluate the quality aspects of a video processing system by calculating the system's frequency response using test signals (for example, a collection of color bars and circles).
Digital video systems have almost fully replaced analog ones, and quality evaluation methods have changed. The performance of a digital video processing and transmission system can vary significantly and depends on many factors, including the characteristics of the input video signal (e.g., amount of motion or spatial details), the settings used for encoding and transmission, and the channel fidelity or network performance.
Objective video quality models are mathematical models that approximate results from subjective quality assessment, in which human observers are asked to rate the quality of a video.[1] In this context, the term model may refer to a simple statistical model in which several independent variables (e.g., the packet loss rate on a network and the video coding parameters) are fit against results obtained in a subjective quality evaluation test using regression techniques. A model may also be a more complicated algorithm implemented in software or hardware.
The terms model and metric are often used interchangeably in the field to mean a descriptive statistic that provides an indicator of quality. The term “objective” refers to the fact that, in general, quality models are based on criteria that can be measured objectively, that is, free from human interpretation. They can be automatically evaluated by a computer program. Unlike a panel of human observers, an objective model should always deterministically output the same quality score for a given set of input parameters.
Objective quality models are sometimes also referred to as instrumental (quality) models,[2][3] in order to emphasize their application as measurement instruments. Some authors suggest that the term “objective” is misleading, as it “implies that instrumental measurements bear objectivity, which they only do in cases where they can be generalized.”[4]
Objective models can be classified by the amount of information available about the original signal, the received signal, or whether there is a signal present at all:[5]

- Full-reference (FR) methods, which have access to the complete original signal
- Reduced-reference (RR) methods, which have access only to partial information extracted from the original signal
- No-reference (NR) methods, which have no access to the original signal
Some models that are used for video quality assessment (such as PSNR or SSIM) are simply image quality models, whose output is calculated for every frame of a video sequence. An overview of recent no-reference image quality models has also been given in a journal paper by Shahid et al.[5]
The quality measure of every frame in a video (as determined by an image quality model) can then be recorded and pooled over time to assess the quality of an entire video sequence. While this method is easy to implement, it does not factor in certain kinds of degradations that develop over time, such as the moving artifacts caused by packet loss and its concealment. A video quality model that considers the temporal aspects of quality degradations, like VQM or the MOVIE Index, may be able to produce more accurate predictions of human-perceived quality.
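The per-frame scoring and temporal pooling described above can be sketched in plain Python. Frames here are flat lists of 8-bit pixel values (toy data), and mean pooling is only one of several possible pooling strategies:

```python
import math

def frame_psnr(ref_frame, deg_frame, max_val=255):
    """PSNR (in dB) between two frames given as flat lists of pixel values."""
    mse = sum((r - d) ** 2 for r, d in zip(ref_frame, deg_frame)) / len(ref_frame)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)

def pooled_quality(ref_video, deg_video):
    """Mean-pool per-frame PSNR over a sequence (ignores temporal degradations)."""
    scores = [frame_psnr(r, d) for r, d in zip(ref_video, deg_video)]
    return sum(scores) / len(scores)

# Toy 2x2 'videos' of three frames each (invented data, increasingly degraded).
ref = [[100, 100, 100, 100]] * 3
deg = [[101, 100, 100, 100], [110, 90, 100, 100], [120, 80, 100, 100]]
```

Note that this pooled score cannot distinguish a constant mild degradation from a short severe glitch with the same average, which is exactly the limitation temporal models address.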
The estimation of visual artifacts is a well-known technique for estimating overall video quality. The majority of these artifacts are compression artifacts caused by lossy compression. The attributes typically estimated by pixel-based metrics fall into spatial and temporal categories.
This section lists examples of video quality metrics.
Class | Metric | Type | Description
---|---|---|---
Full-Reference | PSNR (Peak Signal-to-Noise Ratio) | Image | It is calculated between every frame of the original and the degraded video signal. PSNR is the most widely used objective image quality metric. However, PSNR values do not correlate well with perceived picture quality due to the complex, highly non-linear behaviour of the human visual system.
| SSIM[8] (Structural SIMilarity) | Image | SSIM is a perception-based model that considers image degradation as perceived change in structural information, while also incorporating important perceptual phenomena, including both luminance masking and contrast masking terms.
| MOVIE Index[9] (MOtion-based Video Integrity Evaluation) | Video | The MOVIE Index is a neuroscience-based model for predicting the perceptual quality of a (possibly compressed or otherwise distorted) motion picture or video against a pristine reference video.
| VMAF[10] (Video Multimethod Assessment Fusion) | Video | VMAF uses different features to predict video quality, which are fused using an SVM-based regression to provide a single output score. These scores are then temporally pooled over the entire video sequence using the arithmetic mean to provide an overall mean opinion score (MOS).
| VQM[11] | Video | This model has been standardized in ITU-T Rec. J.144 in 2001.
Reduced-Reference | SRR[12] (SSIM Reduced-Reference) | Video | The SRR value is calculated as the ratio of the received (target) video signal's SSIM to the reference video pattern's SSIM.
| ST-RRED[13] | Video | Computes wavelet coefficients of differences between adjacent frames in a video sequence, modeled by a Gaussian scale mixture, to evaluate reduced-reference entropic differences (temporal RRED). In conjunction with spatial RRED indices, obtained by applying the RRED index to every frame of the video, this yields the spatio-temporal RRED.
| ITU-T Rec. P.1204.4 | Video | This reduced-reference model compares features extracted from a reference video with those of a distorted (compressed) video.[14]
No-Reference | NIQE[15] (Naturalness Image Quality Evaluator) | Image | This IQA model is founded on perceptually relevant spatial domain natural scene statistic (NSS) features extracted from local image patches that effectively capture the essential low-order statistics of natural images.
| BRISQUE[16] (Blind/Referenceless Image Spatial Quality Evaluator) | Image | The method extracts the pointwise statistics of local normalized luminance signals and measures image naturalness (or lack thereof) based on measured deviations from a natural image model. It also models the distribution of pairwise statistics of adjacent normalized luminance signals, which provides distortion orientation information.
| Video-BLIINDS[17] | Video | Computes statistical models of the DCT coefficients of frame differences and a motion characterization, then predicts a score from those features using SVM.
| ITU-T Rec. P.1203.1 | Video | This metric is part of the P.1203 family of standards and can use metadata only (codec, resolution, bitrate, framerate), frame information (frame types and sizes), or the entire bitstream to analyze the quality of a compressed video. It is primarily intended for use in the context of HTTP adaptive streaming.
| ITU-T Rec. P.1204.3 | Video | This model uses the video bitstream to analyze compression/coding quality based on features like quantization parameters and motion vectors.[14]
| ITU-T Rec. P.1204.5 | Video | This is a hybrid model that uses the decoded pixels and information about the video codec to determine final video quality.[14]
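To give a flavor of the structural-similarity idea behind SSIM, the sketch below computes a single-window (global) variant over flat pixel lists. The standard SSIM instead averages this statistic over local windows, so this is a simplification for exposition only; the constants follow the common choice of 0.01 and 0.03 times the dynamic range:

```python
import statistics

def global_ssim(x, y, max_val=255):
    """Single-window (global) SSIM over two flat pixel lists.

    Standard SSIM averages this statistic over local windows; computing it
    once over the whole image is a simplification for illustration.
    """
    c1 = (0.01 * max_val) ** 2  # stabilizing constants
    c2 = (0.03 * max_val) ** 2
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.pvariance(x), statistics.pvariance(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

img = [0, 50, 100, 150, 200, 250]  # toy 1x6 'image'
```

Identical inputs score 1.0, and structurally dissimilar inputs (e.g., an inverted gradient) score much lower, which is the behaviour the table entry describes.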
Since objective video quality models are expected to predict results given by human observers, they are developed with the aid of subjective test results. During the development of an objective model, its parameters should be trained so as to achieve the best correlation between the objectively predicted values and the subjective scores, often available as mean opinion scores (MOS).
The most widely used subjective test materials are in the public domain and include still pictures, motion pictures, streaming video, high-definition, 3-D (stereoscopic), and special-purpose picture quality-related datasets.[18] These so-called databases are created by various research laboratories around the world. Some of them have become de facto standards, including several public-domain subjective picture quality databases created and maintained by the Laboratory for Image and Video Engineering (LIVE) as well as the Tampere Image Database 2008. A collection of databases can be found in the QUALINET Databases repository. The Consumer Digital Video Library (CDVL) hosts freely available video test sequences for model development.
Some databases also provide pre-computed metric scores to allow others to benchmark new metrics against existing ones. Examples can be seen in the table below.
Benchmark | Number of videos | Number of metrics | Type of metrics |
---|---|---|---|
LIVE-VQC | 585 | 11 | No-reference |
KoNViD-1k | 1,200 | 11 | No-reference |
YouTube-UGC | 1,500 | 9 | No-reference |
MSU No-Reference VQA | 2,500 | 15 | No-reference |
MSU Full-Reference VQA | 2,500 | 44 | Full-reference |
LIVE-FB Large-Scale Social Video Quality | 39,000 | 6 | No-reference |
LIVE-ETRI | 437 | 5 | No-reference |
LIVE Livestream | 315 | 3 | No-reference |
In theory, a model can be trained on a set of data in such a way that it produces perfectly matching scores on that dataset. However, such a model will be over-trained and will therefore not perform well on new datasets. It is therefore advised to validate models against new data and use the resulting performance as a real indicator of the model's prediction accuracy.
To measure the performance of a model, some frequently used metrics are the linear correlation coefficient, Spearman's rank correlation coefficient, and the root mean square error (RMSE). Other metrics are the kappa coefficient and the outliers ratio. ITU-T Rec. P.1401 gives an overview of statistical procedures to evaluate and compare objective models.
Objective video quality models can be used in various application areas. In video codec development, the performance of a codec is often evaluated in terms of PSNR or SSIM. For service providers, objective models can be used for monitoring a system. For example, an IPTV provider may choose to monitor their service quality by means of objective models, rather than asking users for their opinion, or waiting for customer complaints about bad video quality. A few of these standards have found commercial applications, including PEVQ and VQuad-HD. SSIM is also part of a commercially available video quality toolset (SSIMWAVE). VMAF is used by Netflix to tune their encoding and streaming algorithms, and to quality-control all streamed content.[19][20] It is also being used by other technology companies like Bitmovin[21] and has been integrated into software such as FFmpeg.
An objective model should only be used in the context that it was developed for. For example, a model that was developed using a particular video codec is not guaranteed to be accurate for another video codec. Similarly, a model trained on tests performed on a large TV screen should not be used for evaluating the quality of a video watched on a mobile phone.
When estimating the quality of a video codec, all of the aforementioned objective methods may require repeated post-encoding tests to determine the encoding parameters that satisfy a required level of visual quality, making them time-consuming, complex, and impractical for real commercial applications. There is ongoing research into novel objective evaluation methods that predict the perceived quality of encoded video before the actual encoding is performed.[22]
The main goal of many objective video quality metrics is to automatically estimate the average user's (viewer's) opinion of the quality of a video processed by a system. Procedures for subjective video quality measurements are described in ITU-R recommendation BT.500 and ITU-T recommendation P.910. In such tests, video sequences are shown to a group of viewers, whose opinions are recorded and averaged into a mean opinion score to evaluate the quality of each video sequence. However, the testing procedure may vary depending on the kind of system under test.
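Averaging individual ratings into a mean opinion score is a one-liner. The sketch below uses invented ratings and adds an approximate 95% confidence interval via the normal approximation (z = 1.96), a common complement to the MOS in subjective-test reporting:

```python
import statistics

# Hypothetical ratings from eight viewers on the 5-point absolute category rating scale.
ratings = [4, 5, 3, 4, 4, 5, 2, 4]

mos = statistics.fmean(ratings)            # mean opinion score
stdev = statistics.stdev(ratings)          # sample standard deviation
ci95 = 1.96 * stdev / len(ratings) ** 0.5  # 95% CI half-width (normal approximation)
```

With real data, the subjective-test recommendations also prescribe screening procedures for rejecting inconsistent viewers before the MOS is computed.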
Tool | Description | Availability | License | Included metrics
---|---|---|---|---|
FFmpeg | Free and open-source multimedia tool that incorporates some video quality metrics | Free | Open source | PSNR, SSIM, VMAF
MSU VQMT | A software suite for objective video quality assessment (full-reference and no-reference) | Free for basic metrics; paid for HDR metrics | Proprietary | PSNR, SSIM, MS-SSIM, 3SSIM, VMAF, NIQE, VQM, Delta, MSAD, MSE; MSU-developed metrics: Blurring Metric, Blocking Metric, Brightness Flicking Metric, Drop Frame Metric, Noise Estimation Metric
EPFL VQMT | Various metrics implemented in OpenCV (C++) based on existing MATLAB implementations | Free | Open source | PSNR, PSNR-HVS, PSNR-HVS-M, SSIM, MS-SSIM, VIFp |
OpenVQ | A toolkit implementing various metrics including the authors' OPVQ | Free | Open source | PSNR, SSIM, OPVQ |
Elecard | A commercial video quality estimation program | Demo version available | Proprietary | PSNR, APSNR, MSAD, MSE, SSIM, Delta, VQM, NQI, VMAF, VIF
AviSynth | A video processing tool that can be used as a plugin or via scripting | Free | Open source | SSIM
VQ Probe | Software for calculating video quality metrics | Free | Proprietary | PSNR, SSIM, VMAF
vmaf.dev | An online video quality calculation tool implementing VMAF | Free | Open source | VMAF