A z-buffer, also known as a depth buffer, is a type of data buffer used in computer graphics to store the depth information of fragments. The values stored represent the distance to the camera, with 0 being the closest. The encoding scheme may be flipped, with the highest number being the value closest to the camera.
In a 3D rendering pipeline, when an object is projected on the screen, the depth (z-value) of a generated fragment in the projected screen image is compared to the value already stored in the buffer (the depth test), and replaces it if the new value is closer. The z-buffer works in tandem with the rasterizer, which computes the colored values: the fragment output by the rasterizer is saved only if it is not occluded by a closer fragment.
Z-buffering is a technique used in almost all contemporary computers, laptops, and mobile phones for generating 3D computer graphics. Its primary use now is for video games, which require fast and accurate processing of 3D scenes.
Determining what should be displayed on the screen and what should be omitted is a multi-step process utilising various techniques. Using a z-buffer is the final step in this process.
Each time an object is rendered into the framebuffer, the z-buffer is used to compare the z-value of each fragment with the z-value already in the z-buffer (i.e., to check which is closer). If the new z-value is closer than the old value, the fragment is written into the framebuffer and the new, closer value is written into the z-buffer; if the new z-value is farther away, the fragment is discarded. This is repeated for all objects and surfaces in the scene (often in parallel). In the end, the z-buffer allows correct reproduction of the usual depth perception: a close object hides one farther away. This is called z-culling.
The granularity of a z-buffer has a great influence on scene quality: a traditional 16-bit z-buffer can result in artifacts (called "z-fighting" or stitching) when two objects are very close to each other. A more modern 24-bit or 32-bit z-buffer behaves much better, although the problem cannot be eliminated without additional algorithms. An 8-bit z-buffer is almost never used since it has too little precision.
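The practical difference between bit depths can be illustrated numerically. The sketch below is an illustration only (the function name, near/far planes, and test distance are arbitrary choices, not from any particular API); it uses the camera-space depth-resolution formula derived later in this article to compare how much distance one integer step of a 16-bit versus a 24-bit z-buffer spans near the far plane:

```python
def depth_resolution(z, near, far, bits):
    """Camera-space distance covered by one integer step of the
    z-buffer at distance z: dz = z^2 * (far - near) / (S * far * near),
    with S = 2**bits - 1 (derived from the perspective depth mapping)."""
    s = 2 ** bits - 1
    return z * z * (far - near) / (s * far * near)

near, far = 0.1, 1000.0
for bits in (16, 24):
    step = depth_resolution(999.0, near, far, bits)
    print(f"{bits}-bit buffer: ~{step:.3f} units per step near the far plane")
```

With these example planes, a 16-bit buffer cannot distinguish surfaces roughly 150 units apart near the far plane, while a 24-bit buffer resolves well under one unit, which is why the coarser buffer is prone to z-fighting.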
Z-buffer data obtained from rendering a surface from a light's point of view permits the creation of shadows by the shadow mapping technique.[1]
Z-buffering was first described in 1974 by Wolfgang Straßer in his PhD thesis on fast algorithms for rendering occluded objects.[2] A similar solution to determining overlapping polygons is the painter's algorithm, which can handle non-opaque scene elements, though at the cost of lower efficiency and occasionally incorrect results.
Z-buffers are often implemented in hardware within consumer graphics cards. Z-buffering is also used (implemented in software as opposed to hardware) for producing computer-generated special effects for films.[citation needed]
Even with small enough granularity, quality problems may arise when precision in the z-buffer's distance values is not spread evenly over distance. Nearer values are much more precise (and hence can represent closer objects better) than values that are farther away. Generally this is desirable, but sometimes it causes artifacts to appear as objects become more distant. A variation on z-buffering that results in more evenly distributed precision is called w-buffering (see below).
At the start of a new scene, the z-buffer must be cleared to a defined value, usually 1.0, because this value is the upper limit (on a scale of 0 to 1) of depth, meaning that no object is present at this point within the viewing frustum.
The invention of the z-buffer concept is most often attributed to Edwin Catmull, although Wolfgang Straßer described this idea in his 1974 PhD thesis months before Catmull's invention.[a]
On more recent PC graphics cards (1999–2005), z-buffer management uses a significant chunk of the available memory bandwidth. Various methods have been employed to reduce the performance cost of z-buffering, such as lossless compression (the computing resources needed to compress and decompress are cheaper than bandwidth) and ultra-fast hardware z-clear, which makes obsolete the "one frame positive, one frame negative" trick (skipping the inter-frame clear altogether by using signed numbers to cleverly check depths).
Some games, notably several later in the Nintendo 64's life cycle, chose either to minimize z-buffering (for example, rendering the background first without z-buffering and using z-buffering only for the foreground objects) or to omit it entirely, to reduce memory bandwidth and memory requirements respectively. Super Smash Bros. and F-Zero X are two Nintendo 64 games that minimized z-buffering to increase framerates. Several Factor 5 games also minimized or omitted z-buffering. On the Nintendo 64, z-buffering can consume up to four times as much bandwidth as not using it.[3]
Mechwarrior 2 on PC supported resolutions up to 800×600[4] on the original 4 MB 3dfx Voodoo because it did not use z-buffering.
In rendering, z-culling is early pixel elimination based on depth, a method that provides a performance increase when rendering of hidden surfaces is costly. It is a direct consequence of z-buffering, where the depth of each pixel candidate is compared to the depth of the existing geometry behind which it might be hidden.
When using a z-buffer, a pixel can be culled (discarded) as soon as its depth is known, which makes it possible to skip the entire process of lighting and texturing a pixel that would not be visible anyway. Also, time-consuming pixel shaders will generally not be executed for culled pixels. This makes z-culling a good optimization candidate in situations where fillrate, lighting, texturing, or pixel shaders are the main bottlenecks.
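The saving can be sketched in a few lines; `expensive_shade` below is a hypothetical stand-in for the lighting, texturing, and pixel-shader work that z-culling avoids, and the dictionary-based buffer is an illustration rather than how hardware implements it:

```python
def shade_fragment(x, y, z, depth_buffer, expensive_shade):
    """Early z-cull: run the costly shading work only for fragments
    that survive the depth test."""
    if z >= depth_buffer[(x, y)]:
        return None                # culled: no lighting/texturing performed
    depth_buffer[(x, y)] = z       # fragment is the closest seen so far
    return expensive_shade(x, y)

depth_buffer = {(0, 0): 0.5}
calls = []
shade = lambda x, y: calls.append((x, y)) or (255, 0, 0)

shade_fragment(0, 0, 0.8, depth_buffer, shade)  # behind existing depth: shader skipped
shade_fragment(0, 0, 0.3, depth_buffer, shade)  # closer: shader runs
```

Only the second fragment triggers the shading callback, which is the whole point: the cost of the depth comparison is tiny compared with the shading work it avoids.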
While z-buffering allows the geometry to be unsorted, sorting polygons by increasing depth (thus using a reverse painter's algorithm) allows each screen pixel to be rendered fewer times. This can increase performance in fillrate-limited scenes with large amounts of overdraw, but if not combined with z-buffering it suffers from severe problems, such as polygons that occlude one another in a cycle, and the lack of a canonical "closest" point on a polygon by which to sort it.
As such, a reverse painter's algorithm cannot be used as an alternative to z-culling (without strenuous re-engineering), except as an optimization to z-culling. For example, an optimization might be to keep polygons sorted according to x/y-location and z-depth to provide bounds, in an effort to quickly determine if two polygons might possibly have an occlusion interaction.
The range of depth values in camera space to be rendered is often defined between a near and a far value of z.
After a perspective transformation, the new value of z, or z', is defined by:

z' = \frac{far + near}{far - near} + \frac{1}{z} \left( \frac{-2 \cdot far \cdot near}{far - near} \right)
After an orthographic projection, the new value of z, or z', is defined by:

z' = 2 \cdot \frac{z - near}{far - near} - 1
where z is the old value of z in camera space, and is sometimes called w or w'.
The resulting values of z' are normalized between the values of −1 and 1, where the near plane is at −1 and the far plane is at 1. Values outside of this range correspond to points which are not in the viewing frustum and should not be rendered.
Typically, these values are stored in the z-buffer of the hardware graphics accelerator in fixed point format. First they are normalized to a more common range, [0, 1], by substituting the appropriate conversion z'_{[0,1]} = (z'_{[-1,1]} + 1)/2 into the previous formula:

z' = \frac{far + near}{2 (far - near)} + \frac{1}{z} \left( \frac{-far \cdot near}{far - near} \right) + \frac{1}{2}
Simplifying:

z' = \frac{far}{far - near} \left( 1 - \frac{near}{z} \right)
Second, the above formula is multiplied by S = 2^d − 1, where d is the depth of the z-buffer (usually 16, 24 or 32 bits), and the result is rounded down to an integer:[5]

z' = f(z) = \left\lfloor (2^d - 1) \cdot \frac{far}{far - near} \left( 1 - \frac{near}{z} \right) \right\rfloor
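This forward mapping from camera-space depth to a stored integer can be sketched as follows; the function name and the example near/far planes and bit depth are arbitrary illustrative choices:

```python
import math

def z_to_buffer(z, near, far, bits=24):
    """Quantize a camera-space depth z in [near, far] to a z-buffer
    integer via z' = far/(far - near) * (1 - near/z), scaled by 2**bits - 1."""
    z_ndc = far / (far - near) * (1.0 - near / z)
    return math.floor((2 ** bits - 1) * z_ndc)

near, far = 0.1, 100.0
print(z_to_buffer(near, near, far))  # 0: the nearest representable depth
# Roughly half of all integer codes are spent on the small slice of the
# scene between z = near and z = 2 * near:
print(z_to_buffer(2 * near, near, far))
```

Printing values for increasing z makes the hyperbolic distribution visible: the codes are spent overwhelmingly on depths close to the near plane.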
This formula can be inverted and differentiated in order to calculate the z-buffer resolution (the "granularity" mentioned earlier). The inverse of the above:

z = \frac{-far \cdot near}{\frac{z'}{S} (far - near) - far}
where S = 2^d − 1.
The z-buffer resolution in terms of camera space would be the incremental value resulting from the smallest change in the integer stored in the z-buffer, which is +1 or −1. Therefore, this resolution can be calculated from the derivative of z as a function of z':

\frac{dz}{dz'} = \frac{far \cdot near \cdot (far - near)}{\left( \frac{z'}{S} (far - near) - far \right)^2 \cdot S}
Expressing it back in camera space terms, by substituting z' by the above f(z):

\frac{dz}{dz'} = \frac{z^2 \cdot (far - near)}{S \cdot far \cdot near} \approx \frac{z^2}{S \cdot near}

where the approximation holds when far ≫ near.
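As a sanity check, the closed-form resolution can be compared against a brute-force difference of the inverse mapping over a single integer step. The helper names and the near/far/bit-depth values below are illustrative assumptions:

```python
def buffer_to_z(zint, near, far, bits):
    """Invert a stored integer back to camera space:
    z = -far*near / ((z'/S)*(far - near) - far), with S = 2**bits - 1."""
    s = 2 ** bits - 1
    return -far * near / ((zint / s) * (far - near) - far)

def resolution(z, near, far, bits):
    """Closed-form resolution dz/dz' = z^2 * (far - near) / (S * far * near)."""
    s = 2 ** bits - 1
    return z * z * (far - near) / (s * far * near)

near, far, bits = 0.1, 100.0, 16
zint = (2 ** bits - 1) // 2  # an integer near the middle of the buffer range
numeric = buffer_to_z(zint + 1, near, far, bits) - buffer_to_z(zint, near, far, bits)
closed = resolution(buffer_to_z(zint, near, far, bits), near, far, bits)
```

The one-step difference of the inverse mapping and the derivative formula agree to well within a percent, confirming the derivation.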
This shows that the values of z' are grouped much more densely near the near plane, and much more sparsely farther away, resulting in better precision closer to the camera. The smaller near is, the less precision there is far away; having the near plane set too closely is a common cause of undesirable rendering artifacts in more distant objects.[6]
To implement a z-buffer, the values of z' are linearly interpolated across screen space between the vertices of the current polygon, and these intermediate values are generally stored in the z-buffer in fixed point format.
To implement a w-buffer,[7] the old values of z in camera space, or w, are stored in the buffer, generally in floating point format. However, these values cannot be linearly interpolated across screen space from the vertices; they usually have to be inverted, interpolated, and then inverted again. The resulting values of w, as opposed to z', are spaced evenly between near and far. There are implementations of the w-buffer that avoid the inversions altogether.
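The invert–interpolate–invert step can be illustrated directly. Given camera-space depths at two vertices, the perspective-correct depth at a pixel partway along the edge comes from interpolating 1/z linearly in screen space and inverting the result (a generic sketch, not tied to any particular API):

```python
def interpolate_depth(z0, z1, t):
    """Perspective-correct depth at screen-space fraction t (0..1)
    between two vertex depths: interpolate 1/z linearly, then invert."""
    inv = (1.0 - t) * (1.0 / z0) + t * (1.0 / z1)
    return 1.0 / inv

# Midpoint of an edge spanning z = 2 to z = 8 in camera space:
print(interpolate_depth(2.0, 8.0, 0.5))  # 3.2, not the naive average 5.0
```

The midpoint result of 3.2 rather than 5.0 shows why naive linear interpolation of camera-space z across the screen would be wrong under perspective projection.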
Whether a z-buffer or w-buffer results in a better image depends on the application.
The following pseudocode demonstrates the process of z-buffering:
// First of all, initialize the depth of each pixel.
d(i, j) = infinite   // Max length

// Initialize the color value for each pixel to the background color.
c(i, j) = background color

// For each polygon, do the following steps:
for (each pixel in polygon's projection)
{
    // Find depth, i.e., z of polygon
    // at (x, y) corresponding to pixel (i, j).
    if (z < d(i, j))
    {
        d(i, j) = z;
        c(i, j) = color;
    }
}
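A runnable counterpart to the pseudocode above can be written in Python. This is an illustrative sketch, not an optimized rasterizer: the buffers are plain dictionaries, and each "polygon" is supplied as a list of already-rasterized (x, y, z, color) fragments:

```python
import math

def render(polygons, width, height, background=(0, 0, 0)):
    """Minimal z-buffer renderer over pre-rasterized fragments."""
    # Initialize depth to infinity and color to the background, per pixel.
    depth = {(i, j): math.inf for i in range(width) for j in range(height)}
    color = {(i, j): background for i in range(width) for j in range(height)}
    for fragments in polygons:
        for x, y, z, c in fragments:
            if z < depth[(x, y)]:     # depth test
                depth[(x, y)] = z     # new closest fragment wins
                color[(x, y)] = c
    return color

# Two overlapping one-pixel "polygons"; the nearer one (z = 1.0) wins
# regardless of submission order:
frame = render([[(0, 0, 2.0, (255, 0, 0))],
                [(0, 0, 1.0, (0, 255, 0))]], 1, 1)
```

Because the depth test resolves visibility per pixel, the polygons can be submitted in any order, which is exactly the property that distinguishes z-buffering from the painter's algorithm.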