Visual information fidelity


Visual information fidelity (VIF) is a full-reference image quality assessment index based on natural scene statistics and the notion of image information extracted by the human visual system.[1] It was developed by Hamid R. Sheikh and Alan Bovik at the Laboratory for Image and Video Engineering (LIVE) at the University of Texas at Austin in 2006. It is deployed in the core of the Netflix VMAF video quality monitoring system, which controls the picture quality of all encoded videos streamed by Netflix.

Model overview


Images and videos of three-dimensional visual environments come from a common class: the class of natural scenes. Natural scenes form a tiny subspace in the space of all possible signals, and researchers have developed sophisticated models to characterize their statistics. Most real-world distortion processes disturb these statistics and make image or video signals unnatural. The VIF index employs natural scene statistics (NSS) models in conjunction with a distortion (channel) model to quantify the information shared between the test and reference images. Further, the VIF index is based on the hypothesis that this shared information is an aspect of fidelity that relates well to visual quality. In contrast to prior approaches based on human visual system (HVS) error sensitivity and measures of structure,[2] this statistical approach, used in an information-theoretic setting, yields a full-reference (FR) quality assessment (QA) method that does not rely on any HVS or viewing-geometry parameters, nor any constants requiring optimization, and yet is competitive with state-of-the-art QA methods.[3]

Specifically, the reference image is modeled as the output of a stochastic 'natural' source that passes through the HVS channel and is processed later by the brain. The information content of the reference image is quantified as the mutual information between the input and output of the HVS channel. This is the information the brain could ideally extract from the output of the HVS. The same measure is then quantified in the presence of an image distortion channel that distorts the output of the natural source before it passes through the HVS channel, thereby measuring the information the brain could ideally extract from the test image. This is shown pictorially in Figure 1. The two information measures are then combined into a visual information fidelity measure that relates visual quality to relative image information.

[Figure 1: Source, distortion channel, and HVS channel. The reference image passes through the HVS channel alone, while the test image passes through the distortion channel followed by the HVS channel; the information the brain can extract in each case is then compared.]

System model


Source model


A Gaussian scale mixture (GSM) is used to statistically model the wavelet coefficients of a steerable pyramid decomposition of an image.[4] The model is described below for a given subband of the multi-scale, multi-orientation decomposition and extends to the other subbands similarly. Let the wavelet coefficients in a given subband be $\mathcal{C} = \{\bar{C}_i : i \in \mathcal{I}\}$, where $\mathcal{I}$ denotes the set of spatial indices across the subband and each $\bar{C}_i$ is an $M$-dimensional vector. The subband is partitioned into non-overlapping blocks of $M$ coefficients each, where each block corresponds to a $\bar{C}_i$. According to the GSM model,

$$\mathcal{C} = \mathcal{S} \cdot \mathcal{U} = \{S_i \bar{U}_i : i \in \mathcal{I}\},$$

where $S_i$ is a positive scalar and $\bar{U}_i$ is a zero-mean Gaussian vector with covariance $\mathbf{C}_U$. Further, the non-overlapping blocks are assumed to be mutually independent, and the random field $\mathcal{S}$ is assumed to be independent of $\mathcal{U}$.
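
As a concrete illustration, the following minimal Python sketch draws coefficient blocks from this source model. The log-normal prior on the scalar field and the function name are assumptions made for the example; the GSM model itself only requires each $S_i$ to be positive.

```python
import numpy as np

def sample_gsm_subband(n_blocks, C_U, rng=None):
    """Draw n_blocks GSM coefficient blocks C_i = S_i * U_i for one subband.

    C_U : (M, M) covariance of the zero-mean Gaussian vectors U_i.
    Returns (C, S), with C of shape (n_blocks, M) and the scalar field S.
    """
    rng = np.random.default_rng(rng)
    M = C_U.shape[0]
    # Positive scalar field S; the log-normal choice here is illustrative only.
    S = rng.lognormal(mean=0.0, sigma=0.5, size=n_blocks)
    # Independent zero-mean Gaussian vectors U_i with covariance C_U.
    U = rng.multivariate_normal(np.zeros(M), C_U, size=n_blocks)
    return S[:, None] * U, S
```

For instance, `C, S = sample_gsm_subband(1000, np.eye(9), rng=0)` draws one thousand blocks of nine coefficients each with identity covariance.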

Distortion model


The distortion process is modeled as a combination of signal attenuation and additive noise in the wavelet domain. Mathematically, if $\mathcal{D} = \{\bar{D}_i : i \in \mathcal{I}\}$ denotes the random field from a given subband of the distorted image, $\mathcal{G} = \{g_i : i \in \mathcal{I}\}$ is a deterministic scalar field, and $\mathcal{V} = \{\bar{V}_i : i \in \mathcal{I}\}$, where $\bar{V}_i$ is a zero-mean Gaussian vector with covariance $\mathbf{C}_V = \sigma_v^2 \mathbf{I}$, then

$$\mathcal{D} = \mathcal{G}\mathcal{C} + \mathcal{V}.$$

Further, $\mathcal{V}$ is modeled to be independent of $\mathcal{S}$ and $\mathcal{U}$.
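
A matching sketch of this distortion channel, applied to the blocks drawn above; the function name and argument layout are again illustrative, not part of any reference implementation.

```python
def distort_subband(C, g, sigma_v, rng=None):
    """Apply the wavelet-domain channel D = g * C + V to coefficient blocks.

    C       : (n_blocks, M) reference coefficients
    g       : deterministic gain field, a scalar or an (n_blocks,) array
    sigma_v : standard deviation of the additive noise V (covariance sigma_v^2 I)
    """
    rng = np.random.default_rng(rng)
    V = rng.normal(0.0, sigma_v, size=C.shape)       # independent of C
    return np.atleast_1d(g)[:, None] * C + V
```

Gains below one attenuate the signal energy, while sigma_v sets the strength of the additive noise.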

HVS model


The duality of HVS models and NSS implies that several aspects of the HVS have already been accounted for in the source model. Here, the HVS is additionally modeled on the hypothesis that uncertainty in the perception of visual signals limits the amount of information that can be extracted from the reference and distorted images. This uncertainty is modeled as visual noise in the HVS: the HVS noise in a given subband of the wavelet decomposition is modeled as additive white Gaussian noise. Let $\mathcal{N} = \{\bar{N}_i : i \in \mathcal{I}\}$ and $\mathcal{N}' = \{\bar{N}'_i : i \in \mathcal{I}\}$ be random fields, where $\bar{N}_i$ and $\bar{N}'_i$ are zero-mean Gaussian vectors with covariances $\mathbf{C}_N$ and $\mathbf{C}_{N'}$. Further, let $\mathcal{E}$ and $\mathcal{F}$ denote the visual signals at the output of the HVS for the reference and test images, respectively. Mathematically,

$$\mathcal{E} = \mathcal{C} + \mathcal{N}, \qquad \mathcal{F} = \mathcal{D} + \mathcal{N}'.$$

Note that $\mathcal{N}$ and $\mathcal{N}'$ are random fields independent of $\mathcal{S}$, $\mathcal{U}$, and $\mathcal{V}$.
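
Completing the simulation sketch, the HVS stage adds independent white Gaussian noise to both paths. The helper below is hypothetical and assumes a common noise variance $\sigma_n^2$ for both fields, matching the VIF formulas in the next section.

```python
def hvs_output(C, D, sigma_n, rng=None):
    """HVS channel: E = C + N and F = D + N', where N and N' are independent
    zero-mean white Gaussian fields with covariance sigma_n^2 * I."""
    rng = np.random.default_rng(rng)
    E = C + rng.normal(0.0, sigma_n, size=C.shape)
    F = D + rng.normal(0.0, sigma_n, size=D.shape)
    return E, F
```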

VIF index


Let $\bar{C}^N = (\bar{C}_1, \bar{C}_2, \ldots, \bar{C}_N)$ denote the vector of all blocks from a given subband, and let $S^N$, $\bar{D}^N$, $\bar{E}^N$, and $\bar{F}^N$ be defined similarly. Let $s^N$ denote the maximum likelihood estimate of $S^N$ given $\bar{C}^N$ and $\mathbf{C}_U$. The amount of information extracted from the reference image is obtained as

$$I(\bar{C}^N; \bar{E}^N \mid S^N = s^N) = \frac{1}{2} \sum_{i=1}^{N} \log_2 \left( \frac{\left| s_i^2 \mathbf{C}_U + \sigma_n^2 \mathbf{I} \right|}{\left| \sigma_n^2 \mathbf{I} \right|} \right),$$

while the amount of information extracted from the test image is given as

$$I(\bar{C}^N; \bar{F}^N \mid S^N = s^N) = \frac{1}{2} \sum_{i=1}^{N} \log_2 \left( \frac{\left| g_i^2 s_i^2 \mathbf{C}_U + (\sigma_v^2 + \sigma_n^2) \mathbf{I} \right|}{\left| (\sigma_v^2 + \sigma_n^2) \mathbf{I} \right|} \right).$$

Denoting the $N$ blocks in subband $j$ of the wavelet decomposition by $\bar{C}^{N,j}$, and similarly for the other variables, the VIF index is defined as

$$\mathrm{VIF} = \frac{\sum_{j \in \text{subbands}} I(\bar{C}^{N,j}; \bar{F}^{N,j} \mid S^{N,j} = s^{N,j})}{\sum_{j \in \text{subbands}} I(\bar{C}^{N,j}; \bar{E}^{N,j} \mid S^{N,j} = s^{N,j})}.$$
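
Under these Gaussian models the two sums have the closed forms above and can be evaluated directly, as in the sketch below. It uses the maximum likelihood scale estimate $s_i^2 = \bar{C}_i^{\top}\mathbf{C}_U^{-1}\bar{C}_i / M$, which follows from maximizing the likelihood of $\bar{C}_i \sim N(0, s_i^2 \mathbf{C}_U)$ over $s_i$. Function names are illustrative, and the estimation of $\mathbf{C}_U$, the gains $g_i$, and $\sigma_v^2$ from an actual image pair is omitted here.

```python
import numpy as np

def ml_scale(C_blocks, C_U):
    """Closed-form ML estimate of s_i from C_i ~ N(0, s_i^2 * C_U):
    maximizing the Gaussian likelihood over s_i gives
    s_i^2 = C_i^T C_U^{-1} C_i / M."""
    M = C_U.shape[0]
    sol = np.linalg.solve(C_U, C_blocks.T)          # columns are C_U^{-1} C_i
    s2 = np.einsum('ij,ji->i', C_blocks, sol) / M
    return np.sqrt(s2)

def subband_information(s, g, C_U, sigma_v2, sigma_n2):
    """Return (I(C;E|s), I(C;F|s)) in bits for one subband, evaluating the
    two summation formulas above. s and g are (N,) arrays of per-block
    scale estimates and gains."""
    M = C_U.shape[0]
    I_M = np.eye(M)
    info_ref = info_test = 0.0
    for s_i, g_i in zip(s, g):
        # log2 of |s_i^2 C_U + sigma_n^2 I| / |sigma_n^2 I|
        info_ref += 0.5 * (np.linalg.slogdet(s_i**2 * C_U + sigma_n2 * I_M)[1]
                           - M * np.log(sigma_n2)) / np.log(2)
        # log2 of |g_i^2 s_i^2 C_U + (sigma_v^2 + sigma_n^2) I| / |(sv^2 + sn^2) I|
        info_test += 0.5 * (np.linalg.slogdet(g_i**2 * s_i**2 * C_U
                                              + (sigma_v2 + sigma_n2) * I_M)[1]
                            - M * np.log(sigma_v2 + sigma_n2)) / np.log(2)
    return info_ref, info_test

# Pooling over subbands: VIF = sum_j info_test_j / sum_j info_ref_j.
```

Note that for an undistorted image ($g_i = 1$, $\sigma_v^2 = 0$) the two sums coincide, so VIF equals one; distortions that remove information drive it below one.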

Performance


The Spearman rank-order correlation coefficient (SROCC) between the VIF index scores of distorted images on the LIVE Image Quality Assessment Database and the corresponding human opinion scores has been evaluated to be 0.96.[citation needed]
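
For context, the SROCC between predicted and subjective scores can be computed directly with scipy; the numbers below are invented placeholder values, not data from the LIVE database.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder quality scores for five images (illustrative values only).
vif_scores = np.array([0.95, 0.71, 0.48, 0.30, 0.12])
mos        = np.array([4.6, 3.9, 3.1, 2.2, 1.3])   # mean opinion scores, higher = better

srocc, _ = spearmanr(vif_scores, mos)
print(f"SROCC = {srocc:.2f}")                       # 1.00 for this toy monotonic data
```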

References

  1. ^ Sheikh, Hamid; Bovik, Alan (2006). "Image Information and Visual Quality". IEEE Transactions on Image Processing. 15 (2): 430–444. Bibcode:2006ITIP...15..430S. doi:10.1109/tip.2005.859378. PMID 16479813.
  2. ^ Wang, Zhou; Bovik, Alan; Sheikh, Hamid; Simoncelli, Eero (2004). "Image quality assessment: From error visibility to structural similarity". IEEE Transactions on Image Processing. 13 (4): 600–612. Bibcode:2004ITIP...13..600W. doi:10.1109/tip.2003.819861. PMID 15376593. S2CID 207761262.
  3. ^ Sheikh, Hamid R. (2006). "Image Information and Visual Quality". IEEE Transactions on Image Processing. 15 (2): 430–444. Bibcode:2006ITIP...15..430S. doi:10.1109/tip.2005.859378. PMID 16479813. Retrieved 15 April 2024.
  4. ^ Simoncelli, Eero; Freeman, William (1995). "The steerable pyramid: A flexible architecture for multi-scale derivative computation". Proceedings, International Conference on Image Processing. Vol. 3. pp. 444–447. doi:10.1109/ICIP.1995.537667. ISBN 0-7803-3122-2. S2CID 1099364.
