Interpreting OpenEXR Deep Pixels¶
Overview¶
Starting with version 2.0, the OpenEXR image file format supports deep images. In a regular, or flat image, every pixel stores at most one value per channel. In contrast, each pixel in a deep image can store an arbitrary number of values or samples per channel. Each of those samples is associated with a depth, or distance from the viewer. Together with the two-dimensional pixel raster, the samples at different depths form a three-dimensional data set.
The open-source OpenEXR file I/O library defines the file format for deep images, and it provides convenient methods for reading and writing deep image files. However, the library does not define how deep images are meant to be interpreted. In order to encourage compatibility among application programs and image processing libraries, this document describes a standard way to represent point and volume samples in deep images, and it defines basic compositing operations such as merging two deep images or converting a deep image into a flat image.
Definitions¶
Flat and Deep Images, Samples¶
For a single-part OpenEXR file, an image is the set of all channels in the file. For a multi-part file, an image is the set of all channels in the same part of the file.
A *flat image* has at most one stored value or sample per pixel per channel. The most common case is an RGB image, which contains three channels, and every pixel has exactly one \(R\), one \(G\) and one \(B\) sample. Some channels in a flat image may be sub-sampled, as is the case with luminance-chroma images, where the luminance channel has a sample at every pixel, but the chroma channels have samples only at every second pixel of every second scan line.
A *deep image* can store an unlimited number of samples per pixel, and each of those samples is associated with a depth, or distance from the viewer.
A pixel at pixel space location \((x,y)\) in a deep image has \(n(x,y)\) samples in each channel. The number of samples varies from pixel to pixel, and any non-negative number of samples, including zero, is allowed. However, all channels in a single pixel have the same number of samples.
The samples in each channel are numbered from \(0\) to \(n(x,y) - 1\), and the expression \(S_{i}(c,x,y)\) refers to sample number \(i\) in channel \(c\) of the pixel at location \((x,y)\).
In the following we will for the most part discuss a single pixel. For readability we will omit the coordinates of the pixel; expressions such as \(n\) and \(S_{i}(c)\) are to be understood as \(n(x,y)\) and \(S_{i}(c,x,y)\) respectively.
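To make the indexing concrete, a deep pixel can be modeled in memory as one sample list per channel, with the invariant that all lists in the same pixel have the same length \(n\). This is a hypothetical sketch (`DeepPixel` and its members are illustrative names, not the OpenEXR library's actual data structures):

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical in-memory model of a single deep pixel: each channel
// stores its own sample list, and all lists in the same pixel must have
// the same length n.  S_i(c) then corresponds to samples.at(c)[i].
struct DeepPixel
{
    std::map<std::string, std::vector<float>> samples;

    size_t sampleCount () const
    {
        return samples.empty () ? 0 : samples.begin ()->second.size ();
    }

    bool channelsConsistent () const
    {
        for (const auto& ch : samples)
            if (ch.second.size () != sampleCount ()) return false;
        return true;
    }
};
```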
Channel Names and Layers¶
The channels in an image have names that serve two purposes: specifying the intended interpretation of each channel, and grouping the channels into layers.
If a channel name contains one or more periods, then the part of the channel name that follows the last period is the *base name*. If a channel name contains no periods, then the entire channel name is the base name.
Examples:
- the base name of channel `R` is `R`
- the base name of channel `L1.L2.R` is `R`
If a channel name contains one or more periods, then the part of the channel name before the last period is the channel's *layer name*. If a channel name contains no periods, then the layer name is an empty string.
Examples:
- the layer name of channel `R` is the empty string
- the layer name of channel `L1.L2.R` is `L1.L2`
The set of all channels in an image that share the same layer name is called a *layer*.
The set of all channels in an image whose layer name is the empty string is called the *base layer*.
If the name of one layer is a prefix of the name of another layer, then the first layer *encloses* the second layer, and the second layer is *nested* in the first layer. Since the empty string is a prefix of any other string, the base layer encloses all other layers.
A layer *directly encloses* a second layer if there is no third layer that is nested in the first layer and encloses the second layer.
Examples:
- Layer `L1` encloses layers `L1.L2` and `L1.L2.L3`
- Layer `L1` directly encloses layer `L1.L2`, but `L1` does not directly enclose `L1.L2.L3`
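The naming rules above can be sketched with plain string manipulation. The helper names `baseName`, `layerName` and `encloses` are illustrative, and `encloses` follows the document's literal prefix definition:

```cpp
#include <string>

// Base name: the part after the last period, or the whole channel name.
std::string baseName (const std::string& channel)
{
    std::string::size_type dot = channel.rfind ('.');
    return dot == std::string::npos ? channel : channel.substr (dot + 1);
}

// Layer name: the part before the last period, or the empty string.
std::string layerName (const std::string& channel)
{
    std::string::size_type dot = channel.rfind ('.');
    return dot == std::string::npos ? std::string () : channel.substr (0, dot);
}

// A layer encloses another layer if its name is a prefix of the other
// layer's name (the document's literal definition).
bool encloses (const std::string& layer, const std::string& other)
{
    return other.compare (0, layer.size (), layer) == 0;
}
```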
Alpha, Color, Depth and Auxiliary Channels¶
A channel whose base name is `A`, `AR`, `AG` or `AB` is an *alpha channel*. All samples must be greater than or equal to zero, and less than or equal to one.
A channel whose base name is `R`, `G`, `B`, or `Y` is a *color channel*.
A channel whose full name is `Z` or `ZBack` is a *depth channel*. All samples in a depth channel must be greater than or equal to zero.
A channel that is not an alpha, color or depth channel is an *auxiliary channel*.
Required Depth Channels¶
The base layer of a deep image must include a depth channel that is called `Z`.
The base layer of a deep image may include a depth channel called `ZBack`. If the base layer does not include one, then a `ZBack` channel can be generated by copying the `Z` channel.
Layers other than the base layer may include channels called `Z` or `ZBack`, but those channels are auxiliary channels and do not determine the positions of any samples in the image.
Sample Locations, Point and Volume Samples¶
The depth samples \(S_{i}\left( Z \right)\) and \(S_{i}\left( \text{ZBack} \right)\) determine the positions of the front and the back of sample number \(i\) in all other channels in the same pixel.
If \(S_{i}\left( Z \right) \geq S_{i}\left( \text{ZBack} \right)\), then sample number \(i\) in all other channels covers the single depth value \(z = S_{i}\left( Z \right)\), where \(z\) is the distance of the sample from the viewer. Sample number \(i\) is called a *point sample*.
If \(S_{i}\left( Z \right) < S_{i}\left( \text{ZBack} \right)\), then sample number \(i\) in all other channels covers the half open interval \(S_{i}\left( Z \right) \leq z < S_{i}\left( \text{ZBack} \right)\). Sample number \(i\) is called a *volume sample*. \(S_{i}\left( Z \right)\) is the sample's *front* and \(S_{i}\left( \text{ZBack} \right)\) is the sample's *back*.
Point samples are used to represent the intersections of surfaces with a pixel. A surface intersects a pixel at a well-defined distance from the viewer, but the surface has zero thickness. Volume samples are used to represent the intersections of volumes with a pixel.
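In code, the classification reduces to a comparison of the two depth values; a minimal sketch:

```cpp
// Point sample: front at or behind back; covers the single depth value Z.
bool isPointSample (float z, float zback)
{
    return z >= zback;
}

// Volume sample: covers the half-open interval [Z, ZBack).
bool isVolumeSample (float z, float zback)
{
    return z < zback;
}

// True if a volume sample with front z and back zback covers depth d.
bool volumeSampleCovers (float z, float zback, float d)
{
    return z <= d && d < zback;
}
```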
Required Alpha Channels¶
Every color or auxiliary channel in a deep image must have an associated alpha channel.
The associated alpha channel for a given color or auxiliary channel, \(c\), is found by looking for a *matching alpha channel* (see below), first in the layer that contains \(c\), then in the directly enclosing layer, then in the layer that directly encloses that layer, and so on, until the base layer is reached. The first matching alpha channel found this way becomes the alpha channel that is associated with \(c\).
Each color or auxiliary channel matches an alpha channel, as shown in the following table:
| Color or auxiliary channel base name | Matching alpha channel base name |
|---|---|
| `R` | `AR`, or `A` if there is no `AR` |
| `G` | `AG`, or `A` if there is no `AG` |
| `B` | `AB`, or `A` if there is no `AB` |
| `Y` | `A` |
| (any auxiliary channel) | `A` |
Example: The following table shows the list of channels in a deep image, and the associated alpha channel for each color or auxiliary channel.
| Channel name | Associated alpha channel |
|---|---|
| `A` | (none; `A` is an alpha channel) |
| `Z` | (none; `Z` is a depth channel) |
| `ZBack` | (none; `ZBack` is a depth channel) |
| `R` | `A` |
| `G` | `A` |
| `B` | `A` |
| `id` | `A` |
| `L1.A` | (none; `L1.A` is an alpha channel) |
| `L1.R` | `L1.A` |
| `L1.G` | `L1.A` |
| `L1.B` | `L1.A` |
| `L1.L2.Y` | `L1.A` |
| `L1.L2.id` | `L1.A` |
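The search for the associated alpha channel can be sketched as follows; `associatedAlpha` is a hypothetical helper that looks for the matching alpha base name in the channel's own layer and then walks outward through the enclosing layers:

```cpp
#include <set>
#include <string>

// Hypothetical sketch of the associated-alpha search: look for the
// matching alpha base name in the channel's own layer, then walk outward
// through the enclosing layers.  Returns "" if no alpha channel matches.
std::string associatedAlpha (
    const std::set<std::string>& channels, const std::string& channel)
{
    std::string::size_type dot = channel.rfind ('.');
    std::string layer = (dot == std::string::npos) ? "" : channel.substr (0, dot);
    std::string base  = (dot == std::string::npos) ? channel : channel.substr (dot + 1);

    // R, G and B prefer AR, AG and AB; Y and auxiliary channels match A.
    std::string preferred = "A";
    if (base == "R") preferred = "AR";
    else if (base == "G") preferred = "AG";
    else if (base == "B") preferred = "AB";

    while (true)
    {
        std::string prefix = layer.empty () ? "" : layer + ".";

        if (channels.count (prefix + preferred)) return prefix + preferred;
        if (channels.count (prefix + "A")) return prefix + "A";

        if (layer.empty ()) return ""; // searched the base layer: give up

        std::string::size_type d = layer.rfind ('.');
        layer = (d == std::string::npos) ? "" : layer.substr (0, d);
    }
}
```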
Sorted, Non-Overlapping and Tidy Images¶
The samples in a pixel may or may not be sorted according to depth, and the sample depths or depth ranges may or may not overlap each other.
A pixel in a deep image is *sorted* if for every \(i\) and \(j\) with \(i < j\),

\[S_{i}\left( Z \right) < S_{j}\left( Z \right), \textrm{ or } \left( S_{i}\left( Z \right) = S_{j}\left( Z \right) \textrm{ and } S_{i}\left( \text{ZBack} \right) \leq S_{j}\left( \text{ZBack} \right) \right).\]
A pixel in a deep image is *non-overlapping* if for every \(i\) and \(j\) with \(i \neq j\), the depth ranges covered by samples \(i\) and \(j\) are disjoint:

\[\left( S_{i}\left( Z \right) < S_{j}\left( Z \right) \textrm{ and } S_{i}\left( \text{ZBack} \right) \leq S_{j}\left( Z \right) \right) \textrm{ or } \left( S_{j}\left( Z \right) < S_{i}\left( Z \right) \textrm{ and } S_{j}\left( \text{ZBack} \right) \leq S_{i}\left( Z \right) \right) \textrm{ or }\]
\[\left( S_{i}\left( Z \right) = S_{j}\left( Z \right) \textrm{ and } S_{i}\left( \text{ZBack} \right) \leq S_{i}\left( Z \right) \textrm{ and } S_{j}\left( \text{ZBack} \right) > S_{j}\left( Z \right) \right) \textrm{ or }\]
\[\left( S_{j}\left( Z \right) = S_{i}\left( Z \right) \textrm{ and } S_{j}\left( \text{ZBack} \right) \leq S_{j}\left( Z \right) \textrm{ and } S_{i}\left( \text{ZBack} \right) > S_{i}\left( Z \right) \right).\]

(The last two cases allow a point sample to coincide with the front of a volume sample.)
A pixel in a deep image is *tidy* if it is sorted and non-overlapping.
A deep image is sorted if all of its pixels are sorted; it is non-overlapping if all of its pixels are non-overlapping; and it is tidy if all of its pixels are tidy.
The images stored in an OpenEXR file are not required to be tidy. Some deep image processing operations, for example, flattening a deep image, require tidy input images. However, making an image tidy loses information, and some kinds of data cannot be represented with tidy images, for example, object identifiers or motion vectors for volume objects that pass through each other.
Some application programs that read deep images can run more efficiently with tidy images. For example, in a 3D renderer that uses deep images as shadow maps, shadow lookups are faster if the samples in each pixel are sorted and non-overlapping.
Application programs that write deep OpenEXR files can add a `deepImageState` attribute to the header to let file readers know if the pixels in the image are tidy or not. The attribute is of type `DeepImageState`, and can have the following values:
| Value | Interpretation |
|---|---|
| `MESSY` | Samples may not be sorted, and overlaps are possible. |
| `SORTED` | Samples are sorted, but overlaps are possible. |
| `NON_OVERLAPPING` | Samples do not overlap, but may not be sorted. |
| `TIDY` | Samples are sorted and do not overlap. |
If the header does not contain a `deepImageState` attribute, then file readers should assume that the image is `MESSY`. The OpenEXR file I/O library does not verify that the samples in the pixels are consistent with the `deepImageState` attribute. Application software that handles deep images may assume that the attribute value is valid, as long as the software will not crash or lock up if any pixels are inconsistent with the `deepImageState`.
Alpha and Color as Functions of Depth¶
Given a color channel, \(c\), and its associated alpha channel, \(\alpha\), the samples \(S_{i}\left( c \right)\), \(S_{i}\left( \alpha \right)\), \(S_{i}\left( Z \right)\) and \(S_{i}\left( \text{ZBack} \right)\) together represent the intersection of an object with a pixel. The color of the object is \(S_{i}\left( c \right)\), its opacity is \(S_{i}\left( \alpha \right)\), and the distances of its front and back from the viewer are indicated by \(S_{i}\left( Z \right)\) and \(S_{i}\left( \text{ZBack} \right)\) respectively.
One Sample¶
We now define two functions, \(z \longmapsto \alpha_{i}(z)\) and \(z \longmapsto c_{i}(z)\), that represent the opacity and color of the part of the object whose distance from the viewer is no more than \(z\). In other words, we divide the object into two parts by splitting it at distance \(z\); \(\alpha_{i}(z)\) and \(c_{i}(z)\) are the opacity and color of the part that is closer to the viewer.
For a point sample, \(\alpha_{i}(z)\) and \(c_{i}(z)\) are step functions:

\[\alpha_{i}\left( z \right) = \begin{cases} 0, & z < S_{i}\left( Z \right) \\ S_{i}\left( \alpha \right), & z \geq S_{i}\left( Z \right) \end{cases}\]

\[c_{i}\left( z \right) = \begin{cases} 0, & z < S_{i}\left( Z \right) \\ S_{i}\left( c \right), & z \geq S_{i}\left( Z \right) \end{cases}\]
For a volume sample, we define a helper function \(x(z)\) that consists of two constant segments and a linear ramp:

\[x\left( z \right) = \begin{cases} 0, & z \leq S_{i}\left( Z \right) \\ \dfrac{z - S_{i}\left( Z \right)}{S_{i}\left( \text{ZBack} \right) - S_{i}\left( Z \right)}, & S_{i}\left( Z \right) < z < S_{i}\left( \text{ZBack} \right) \\ 1, & z \geq S_{i}\left( \text{ZBack} \right) \end{cases}\]
With this helper function, \(\alpha_{i}(z)\) and \(c_{i}(z)\) are defined as follows:

\[\alpha_{i}\left( z \right) = 1 - \left( 1 - S_{i}\left( \alpha \right) \right)^{x\left( z \right)}\]

\[c_{i}\left( z \right) = \begin{cases} \dfrac{\alpha_{i}\left( z \right)}{S_{i}\left( \alpha \right)} \cdot S_{i}\left( c \right), & S_{i}\left( \alpha \right) > 0 \\ x\left( z \right) \cdot S_{i}\left( c \right), & S_{i}\left( \alpha \right) = 0 \end{cases}\]
Note that the second case in the definition of \(c_{i}\left( z \right)\) is the limit of the first case as \(S_{i}\left( \alpha \right)\) approaches zero.
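Assuming the definitions above, \(\alpha_{i}(z)\) and \(c_{i}(z)\) for a volume sample can be evaluated as follows, using `log1p`/`expm1` for numerical robustness as in the appendix code:

```cpp
#include <cmath>

// alpha_i(z) for a volume sample with front zf, back zb and opacity a.
// Uses log1p()/expm1() so that very small opacities do not round the
// result to zero, as in the appendix code.
float sampleAlphaAt (float a, float zf, float zb, float z)
{
    float x = (z <= zf) ? 0.0f : (z >= zb) ? 1.0f : (z - zf) / (zb - zf);
    return -std::expm1 (x * std::log1p (-a));
}

// c_i(z) for the same sample with (premultiplied) color c.
float sampleColorAt (float a, float c, float zf, float zb, float z)
{
    float x = (z <= zf) ? 0.0f : (z >= zb) ? 1.0f : (z - zf) / (zb - zf);

    if (a <= 0) return x * c; // limit of the general case as a -> 0

    return (-std::expm1 (x * std::log1p (-a)) / a) * c;
}
```

For example, a sample with opacity 0.75 spanning depths 0 to 2 has opacity 0.5 at its midpoint, because the two halves of the sample each absorb half of the light.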
The figure below shows an example of \(\alpha_{i}\left( z \right)\) and \(c_{i}\left( z \right)\) for a volume sample. Alpha and color are zero up to \(Z\), increase gradually between \(Z\) and \(\text{ZBack}\), and then remain constant.
Whole Pixel¶
If a pixel is tidy, then we can define two functions, \(z \longmapsto A(z)\) and \(z \longmapsto C(z)\), that represent the total opacity and color of all objects whose distance from the viewer is no more than \(z\): if the distance \(z\) is inside a volume object, we split the object at \(z\). Then we use "over" operations to composite all objects that are no further away than \(z\).
Given a foreground object with opacity \(\alpha_{f}\) and color \(c_{f}\), and a background object with opacity \(\alpha_{b}\) and color \(c_{b}\), an "over" operation computes the total opacity and color, \(\alpha\) and \(c\), that result from placing the foreground object in front of the background object:

\[\alpha = \alpha_{f} + \left( 1 - \alpha_{f} \right) \cdot \alpha_{b}\]

\[c = c_{f} + \left( 1 - \alpha_{f} \right) \cdot c_{b}\]
We define two sets of helper functions:

\[A_{i}\left( z \right) = \begin{cases} 0, & i < 0 \\ A_{i - 1}\left( z \right) + \left( 1 - A_{i - 1}\left( z \right) \right) \cdot \alpha_{i}\left( z \right), & i \geq 0 \end{cases}\]

\[C_{i}\left( z \right) = \begin{cases} 0, & i < 0 \\ C_{i - 1}\left( z \right) + \left( 1 - A_{i - 1}\left( z \right) \right) \cdot c_{i}\left( z \right), & i \geq 0 \end{cases}\]
With these helper functions, \(A\left( z \right)\) and \(C(z)\) look like this:

\[A\left( z \right) = A_{n - 1}\left( z \right)\]

\[C\left( z \right) = C_{n - 1}\left( z \right)\]
The figure below shows an example of \(A(z)\) and \(C(z)\). Sample number \(i\) is a volume sample; its `ZBack` is greater than its `Z`. Alpha and color increase gradually between `Z` and `ZBack` and then remain constant. Sample number \(i + 1\), whose `Z` and `ZBack` are equal, is a point sample where alpha and color discontinuously jump to a new value.
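For a tidy pixel, \(A(z)\) and \(C(z)\) can be computed with a single front-to-back sweep; a sketch, using a simplified hypothetical sample layout with a single premultiplied color channel:

```cpp
#include <cmath>
#include <vector>

// Simplified, hypothetical layout: one point or volume sample with a
// single premultiplied color channel.
struct TidySample { float z, zback, alpha, color; };

// Accumulate A(z) and C(z) over a tidy, front-to-back sorted sample list
// with a series of "over" operations, splitting the sample containing z.
void pixelAt (const std::vector<TidySample>& s, float z, float& A, float& C)
{
    A = 0;
    C = 0;

    for (const TidySample& smp : s)
    {
        float ai, ci;

        if (smp.z >= smp.zback) // point sample: step function at Z
        {
            if (z < smp.z) break;
            ai = smp.alpha;
            ci = smp.color;
        }
        else // volume sample: evaluate alpha_i(z) and c_i(z)
        {
            if (z <= smp.z) break;
            float x = (z >= smp.zback)
                          ? 1.0f
                          : (z - smp.z) / (smp.zback - smp.z);
            ai = -std::expm1 (x * std::log1p (-smp.alpha));
            ci = (smp.alpha > 0) ? (ai / smp.alpha) * smp.color
                                 : x * smp.color;
        }

        C += (1 - A) * ci; // "over": c = c_f + (1 - alpha_f) * c_b
        A += (1 - A) * ai;
    }
}
```

The early `break` relies on the pixel being sorted: once a sample starts at or beyond \(z\), all later samples do too.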
Basic Deep Image Operations¶
Given the definitions above, we can now construct a few basic deep image processing operations.
Splitting a Volume Sample¶
Our first operation is splitting volume sample number \(i\) of a pixel at a given depth, \(z\), where:

\[S_{i}\left( Z \right) < z < S_{i}\left( \text{ZBack} \right)\]
The operation replaces the original sample with two new samples. If the first of those new samples is composited over the second one, then the total opacity and color are the same as in the original sample.
For the depth channels, the new samples are:

\[S_{i,new}\left( Z \right) = S_{i}\left( Z \right), \qquad S_{i,new}\left( \text{ZBack} \right) = z\]

\[S_{j,new}\left( Z \right) = z, \qquad S_{j,new}\left( \text{ZBack} \right) = S_{i}\left( \text{ZBack} \right)\]
For a color channel, \(c\), and its associated alpha channel, \(\alpha\), the new samples are, with

\[x = \frac{z - S_{i}\left( Z \right)}{S_{i}\left( \text{ZBack} \right) - S_{i}\left( Z \right)},\]

\[S_{i,new}\left( \alpha \right) = 1 - \left( 1 - S_{i}\left( \alpha \right) \right)^{x}, \qquad S_{j,new}\left( \alpha \right) = 1 - \left( 1 - S_{i}\left( \alpha \right) \right)^{1 - x}\]

\[S_{i,new}\left( c \right) = \frac{S_{i,new}\left( \alpha \right)}{S_{i}\left( \alpha \right)} \cdot S_{i}\left( c \right), \qquad S_{j,new}\left( c \right) = \frac{S_{j,new}\left( \alpha \right)}{S_{i}\left( \alpha \right)} \cdot S_{i}\left( c \right)\]

(For \(S_{i}\left( \alpha \right) = 0\), the color expressions are replaced by their limits, \(x \cdot S_{i}\left( c \right)\) and \(\left( 1 - x \right) \cdot S_{i}\left( c \right)\).)
If it is not done exactly right, splitting a sample can lead to large rounding errors for the colors of the new samples when the opacity of the original sample is very small. For C++ code that splits a volume sample in a numerically stable way, see Example: Splitting a Volume Sample.
Merging Overlapping Samples¶
In order to make a deep image tidy, we need a procedure for merging two samples that perfectly overlap each other. Given two samples, \(i\) and \(j\), with

\[S_{i}\left( Z \right) = S_{j}\left( Z \right)\]

and

\[S_{i}\left( \text{ZBack} \right) = S_{j}\left( \text{ZBack} \right),\]
we want to replace those samples with a single new sample that has an appropriate opacity and color.
For two overlapping volume samples, the opacity and color of the new sample should be the same as what one would get from splitting the original samples into a very large number of shorter sub-samples, interleaving the sub-samples, and compositing them back together with a series of "over" operations.
For a color channel, \(c\), and its associated alpha channel, \(\alpha\), we can compute the opacity and color of the new sample as follows:

\[S_{i,new}\left( \alpha \right) = 1 - \left( 1 - S_{i}\left( \alpha \right) \right) \cdot \left( 1 - S_{j}\left( \alpha \right) \right)\]

\[S_{i,new}\left( c \right) = w \cdot \left( v_{i} \cdot S_{i}\left( c \right) + v_{j} \cdot S_{j}\left( c \right) \right)\]

where

\[u_{k} = -\log\left( 1 - S_{k}\left( \alpha \right) \right), \qquad v_{k} = \frac{u_{k}}{S_{k}\left( \alpha \right)}\]

with \(k = i\) or \(k = j\), and

\[w = \frac{S_{i,new}\left( \alpha \right)}{u_{i} + u_{j}}.\]
Evaluating the expressions above directly can lead to large rounding errors when the opacity of one or both of the input samples is very small. For C++ code that computes \(S_{i,new}\left( \alpha \right)\) and \(S_{i,new}\left( c \right)\) in a numerically robust way, see Example: Merging Two Overlapping Samples.
For details on how the expressions for \(S_{i,new}\left( \alpha \right)\) and \(S_{i,new}\left( c \right)\) can be derived, see Peter Hillman's paper, "The Theory of OpenEXR Deep Samples".
Note that the expressions for computing \(S_{i,new}\left( \alpha \right)\) and \(S_{i,new}\left( c \right)\) do not refer to depth at all. This allows us to reuse the same expressions for merging two perfectly overlapping (that is, coincident) point samples.
A point sample cannot perfectly overlap a volume sample; therefore point samples are never merged with volume samples.
Making an Image Tidy¶
An image is made tidy by making each of its pixels tidy. A pixel is madetidy in three steps:
1. Split partially overlapping samples: if there are indices \(i\) and \(j\) such that sample \(i\) is either a point or a volume sample, sample \(j\) is a volume sample, and \(S_{j}\left( Z \right) < S_{i}\left( Z \right) < S_{j}\left( \text{ZBack} \right)\), then split sample \(j\) at \(S_{i}\left( Z \right)\) as shown in Splitting a Volume Sample above. Otherwise, if there are indices \(i\) and \(j\) such that samples \(i\) and \(j\) are volume samples, and \(S_{j}\left( Z \right) < S_{i}\left( \text{ZBack} \right) < S_{j}\left( \text{ZBack} \right)\), then split sample \(j\) at \(S_{i}\left( \text{ZBack} \right)\). Repeat this until there are no more partially overlapping samples.
2. Merge overlapping samples: if there are indices \(i\) and \(j\) such that samples \(i\) and \(j\) overlap perfectly, then merge those two samples as shown in Merging Overlapping Samples above. Repeat this until there are no more perfectly overlapping samples.
3. Sort the samples according to `Z` and `ZBack` (see Sorted, Non-Overlapping and Tidy Images).
Note that this procedure can be made more efficient by first sorting the samples, and then splitting and merging overlapping samples in a single front-to-back sweep through the sample list.
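The sort-first strategy can be sketched as follows, for volume samples with a single premultiplied color channel. The helper names are hypothetical, fully opaque samples and point samples are not handled, and the appendix code should be preferred for numerically robust splitting and merging:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Sample { float z, zback, alpha, color; };

// Front part of a volume sample split at depth `split`.
Sample splitFront (const Sample& s, float split)
{
    float x = (split - s.z) / (s.zback - s.z);
    float a = -std::expm1 (x * std::log1p (-s.alpha));
    float c = (s.alpha > 0) ? (a / s.alpha) * s.color : x * s.color;
    return {s.z, split, a, c};
}

// Back part of a volume sample split at depth `split`.
Sample splitBack (const Sample& s, float split)
{
    float x = (s.zback - split) / (s.zback - s.z);
    float a = -std::expm1 (x * std::log1p (-s.alpha));
    float c = (s.alpha > 0) ? (a / s.alpha) * s.color : x * s.color;
    return {split, s.zback, a, c};
}

// Merge two perfectly overlapping, not fully opaque volume samples.
Sample mergeSamples (const Sample& s1, const Sample& s2)
{
    float am = s1.alpha + s2.alpha - s1.alpha * s2.alpha;
    float u1 = -std::log1p (-s1.alpha);
    float u2 = -std::log1p (-s2.alpha);
    float v1 = (s1.alpha > 0) ? u1 / s1.alpha : 1;
    float v2 = (s2.alpha > 0) ? u2 / s2.alpha : 1;
    float u  = u1 + u2;
    float w  = (u > 0) ? am / u : 1;
    return {s1.z, s1.zback, am, (s1.color * v1 + s2.color * v2) * w};
}

void makeTidy (std::vector<Sample>& s)
{
    auto before = [] (const Sample& a, const Sample& b)
    { return a.z < b.z || (a.z == b.z && a.zback < b.zback); };

    std::sort (s.begin (), s.end (), before);

    for (size_t i = 0; i + 1 < s.size ();)
    {
        Sample a = s[i], b = s[i + 1];

        if (a.zback <= b.z) // already disjoint: advance
        {
            ++i;
        }
        else if (a.z < b.z) // partial overlap: split a at b's front
        {
            s[i] = splitFront (a, b.z);
            s.insert (s.begin () + i + 1, splitBack (a, b.z));
            std::sort (s.begin () + i + 1, s.end (), before);
        }
        else if (a.zback < b.zback) // same front: split b at a's back
        {
            s[i + 1] = splitFront (b, a.zback);
            s.insert (s.begin () + i + 2, splitBack (b, a.zback));
            std::sort (s.begin () + i + 1, s.end (), before);
        }
        else // perfect overlap: merge
        {
            s[i] = mergeSamples (a, b);
            s.erase (s.begin () + i + 1);
        }
    }
}
```

Split points are always existing sample boundaries, so the sweep terminates.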
Merging Two Images¶
Merging two deep images forms a new deep image that represents all of the objects contained in both of the original images. Conceptually, the deep image "merge" operation is similar to the "over" operation for flat images, except that the "merge" operation does not distinguish between a foreground and a background image.
Since deep images are not required to be tidy, the "merge" operation is trivial: for each output pixel, concatenate the sample lists of the corresponding input pixels.
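For one pixel, the "merge" operation is just list concatenation; a minimal sketch:

```cpp
#include <vector>

// Deep "merge" for one pixel: concatenate the two input sample lists.
// The result is a valid deep pixel even though it may be untidy.
template <typename Sample>
std::vector<Sample> mergePixels (
    const std::vector<Sample>& p1, const std::vector<Sample>& p2)
{
    std::vector<Sample> out (p1);
    out.insert (out.end (), p2.begin (), p2.end ());
    return out;
}
```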
Flattening an Image¶
Flattening produces a flat image from a deep image by performing a front-to-back composite of the deep image samples. The "flatten" operation has two steps:
1. Make the deep image tidy.
2. For each pixel, composite sample \(0\) over sample \(1\). Composite the result over sample \(2\), and so on, until sample \(n - 1\) is reached. Note that this is equivalent to computing \(A\left( \max\left( S_{n - 1}\left( Z \right), S_{n - 1}\left( \text{ZBack} \right) \right) \right)\) for each alpha channel and \(C\left( \max\left( S_{n - 1}\left( Z \right), S_{n - 1}\left( \text{ZBack} \right) \right) \right)\) for each color or auxiliary channel.
There is no single "correct" way to flatten the depth channels. The most useful way to handle `Z` and `ZBack` depends on how the flat image will be used. Possibilities include, among others:
- Flatten the `Z` channel as if it were a color channel, using `A` as the associated alpha channel. For volume samples, replace `Z` with the average of `Z` and `ZBack` before flattening. Either discard the `ZBack` channel, or use the back of the last sample, \(\max\left( S_{n - 1}\left( Z \right), S_{n - 1}\left( \text{ZBack} \right) \right)\), as the `ZBack` value for the flat image.
- Treating `A` as the alpha channel associated with `Z`, find the depth where \(A(z)\) becomes 1.0 and store that depth in the `Z` channel of the flat image. If \(A(z)\) never reaches 1.0, then store either infinity or the maximum possible finite value in the flat image.
- Treating `A` as the alpha channel associated with `Z`, copy the front of the first sample with non-zero alpha and the front of the first opaque sample into the `Z` and `ZBack` channels of the flat image.
Opaque Volume Samples¶
Volume samples represent regions along the \(z\) axis of a pixel that are filled with a medium that absorbs light and also emits light towards the camera. The intensity of light traveling through the medium falls off exponentially with the distance traveled. For example, if a one unit thick layer of fog absorbs half of the light and transmits the rest, then a two unit thick layer of the same fog absorbs three quarters of the light and transmits only one quarter. Volume samples representing these two layers would have alpha 0.5 and 0.75 respectively. As the thickness of a layer increases, the layer quickly becomes nearly opaque. A fog layer that is twenty units thick transmits less than one millionth of the light entering it, and its alpha is 0.99999905. If alpha is represented using 16-bit floating-point numbers, then the exact value will be rounded to 1.0, making the corresponding volume sample completely opaque. With 32-bit floating-point numbers, the alpha value for a 20 unit thick layer can still be distinguished from 1.0, but for a 25 unit layer, alpha rounds to 1.0. At 55 units, alpha rounds to 1.0 even with 64-bit floating-point numbers.
Once a sample effectively becomes opaque, the true density of the light-absorbing medium is lost. A one-unit layer of a light fog might absorb half of the light while a one-unit layer of a dense fog might absorb three quarters of the light, but the representation of a 60-unit layer as a volume sample is exactly the same for the light fog, the dense fog and a gray brick. For a sample that extends from \(Z\) to \(\text{ZBack}\), the function \(\alpha(z)\) evaluates to 1.0 for any \(z > Z\). Any object within this layer would be completely hidden, no matter how close it was to the front of the layer.
Application software that writes deep images should avoid generating very deep volume samples. If the program is about to generate a sample with alpha close to 1.0, then it should split the sample into multiple sub-samples with a lower opacity before storing the data in a deep image file. This assumes, of course, that the software has an internal volume sample representation that can distinguish very nearly opaque samples from completely opaque ones, so that splitting will produce sub-samples with alpha significantly below 1.0.
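The precision cliff described above can be demonstrated directly. Assuming a fog that absorbs half of the light per unit thickness, a 20-unit layer's alpha is still distinguishable from 1.0 in 32-bit floating-point, while a 26-unit layer's alpha rounds to exactly 1.0; splitting that layer into two 13-unit sub-samples keeps each sub-sample's alpha below 1.0:

```cpp
#include <cmath>

// Alpha of a fog layer that absorbs half of the incoming light per unit
// of thickness: alpha = 1 - 0.5^units.  std::ldexp computes the power of
// two exactly, so the only rounding happens in the final subtraction.
float fogAlpha (int units)
{
    return 1.0f - std::ldexp (1.0f, -units);
}
```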
Appendix: C++ Code¶
Example: Splitting a Volume Sample¶
```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>

using namespace std;

void
splitVolumeSample (
    float  a,  float  c,   // Opacity and color of original sample
    float  zf, float  zb,  // Front and back of original sample
    float  z,              // Position of split
    float& af, float& cf,  // Opacity and color of part closer than z
    float& ab, float& cb)  // Opacity and color of part further away than z
{
    //
    // Given a volume sample whose front and back are at depths zf and
    // zb respectively, split the sample at depth z.  Return the opacities
    // and colors of the two parts that result from the split.
    //
    // The code below is written to avoid excessive rounding errors when
    // the opacity of the original sample is very small:
    //
    // The straightforward computation of the opacity of either part
    // requires evaluating an expression of the form
    //
    //     1 - pow (1 - a, x).
    //
    // However, if a is very small, then 1 - a evaluates to 1.0 exactly,
    // and the entire expression evaluates to 0.0.
    //
    // We can avoid this by rewriting the expression as
    //
    //     1 - exp (x * log (1 - a)),
    //
    // and replacing the call to log() with a call to the function log1p(),
    // which computes the logarithm of 1+x without attempting to evaluate
    // the expression 1+x when x is very small.
    //
    // Now we have
    //
    //     1 - exp (x * log1p (-a)).
    //
    // However, if a is very small then the call to exp() returns 1.0, and
    // the overall expression still evaluates to 0.0.  We can avoid that
    // by replacing the call to exp() with a call to expm1():
    //
    //     -expm1 (x * log1p (-a))
    //
    // expm1(x) computes exp(x) - 1 in such a way that the result is accurate
    // even if x is very small.
    //

    assert (zb > zf && z >= zf && z <= zb);

    a = max (0.0f, min (a, 1.0f));

    if (a == 1)
    {
        af = ab = 1;
        cf = cb = c;
    }
    else
    {
        float xf = (z - zf) / (zb - zf);
        float xb = (zb - z) / (zb - zf);

        if (a > numeric_limits<float>::min())
        {
            af = -expm1 (xf * log1p (-a));
            cf = (af / a) * c;

            ab = -expm1 (xb * log1p (-a));
            cb = (ab / a) * c;
        }
        else
        {
            af = a * xf;
            cf = c * xf;

            ab = a * xb;
            cb = c * xb;
        }
    }
}
```
Example: Merging Two Overlapping Samples¶
```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>

using namespace std;

void
mergeOverlappingSamples (
    float  a1, float  c1,  // Opacity and color of first sample
    float  a2, float  c2,  // Opacity and color of second sample
    float& am, float& cm)  // Opacity and color of merged sample
{
    //
    // This function merges two perfectly overlapping volume or point
    // samples.  Given the color and opacity of two samples, it returns
    // the color and opacity of the merged sample.
    //
    // The code below is written to avoid very large rounding errors when
    // the opacity of one or both samples is very small:
    //
    // * The merged opacity must not be computed as 1 - (1-a1) * (1-a2).
    //   If a1 and a2 are less than about half a floating-point epsilon,
    //   the expressions (1-a1) and (1-a2) evaluate to 1.0 exactly, and
    //   the merged opacity becomes 0.0.  The error is amplified later in
    //   the calculation of the merged color.
    //
    //   Changing the calculation of the merged opacity to a1 + a2 - a1*a2
    //   avoids the excessive rounding error.
    //
    // * For small x, the logarithm of 1+x is approximately equal to x,
    //   but log(1+x) returns 0 because 1+x evaluates to 1.0 exactly.
    //   This can lead to large errors in the calculation of the merged
    //   color if a1 or a2 is very small.
    //
    //   The math library function log1p(x) returns the logarithm of 1+x,
    //   but without attempting to evaluate the expression 1+x when x is
    //   very small.
    //

    a1 = max (0.0f, min (a1, 1.0f));
    a2 = max (0.0f, min (a2, 1.0f));

    am = a1 + a2 - a1 * a2;

    if (a1 == 1 && a2 == 1)
    {
        cm = (c1 + c2) / 2;
    }
    else if (a1 == 1)
    {
        cm = c1;
    }
    else if (a2 == 1)
    {
        cm = c2;
    }
    else
    {
        static const float MAX = numeric_limits<float>::max ();

        float u1 = -log1p (-a1);
        float v1 = (u1 < a1 * MAX) ? u1 / a1 : 1;

        float u2 = -log1p (-a2);
        float v2 = (u2 < a2 * MAX) ? u2 / a2 : 1;

        float u = u1 + u2;
        float w = (u > 1 || am < u * MAX) ? am / u : 1;

        cm = (c1 * v1 + c2 * v2) * w;
    }
}
```
