Texture mapping

From Wikipedia, the free encyclopedia
Method of defining surface detail on a computer-generated graphic or 3D model
Mapping a two-dimensional texture onto a 3D model
1: 3D model without textures
2: Same model with textures

Texture mapping[1][2][3] is a term used in computer graphics to describe how 2D images are projected onto 3D models. The most common variant is the UV unwrap, which can be described as an inverse paper cutout, where the surface of a 3D model is cut apart so that it can be unfolded into a 2D coordinate space (UV space).

Semantic

Texture mapping can refer to several related things: (1) the task of unwrapping a 3D model (converting the surface of a 3D model into a 2D texture map), (2) applying a 2D texture map onto the surface of a 3D model, and (3) the 3D software algorithm that performs both tasks.

A texture map refers to a 2D image ("texture") that adds visual detail to a 3D model. The image can be stored as a raster graphic. A texture that stores a specific property, such as bumpiness, reflectivity, or transparency, is typically named after that property, e.g. a bump map or roughness map.

The coordinate space that converts from a 3D model's 3D space into a 2D space for sampling from the texture map is variously called UV space, UV coordinates, or texture space.

Algorithm

The following is a simplified explanation of how an algorithm could work to render an image; a minimal code sketch follows the list:

  1. For each pixel, trace the coordinates of the screen into the 3D scene.
  2. If the trace hits a 3D model (more precisely, a polygon of a 3D model), determine the UV coordinates at the hit point.
  3. The UV coordinates are used to read the color from the texture and apply it to the pixel.
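
As a concrete illustration of the loop above, here is a minimal sketch in C. The Texture struct, the trace callback, and the nearest-neighbour lookup are illustrative assumptions, not any particular renderer's API.

    #include <stdint.h>

    typedef struct { int width, height; const uint32_t *pixels; } Texture;

    /* Nearest-neighbour lookup: u and v are expected in [0, 1). */
    static uint32_t sample_texture(const Texture *tex, float u, float v)
    {
        int x = (int)(u * tex->width);
        int y = (int)(v * tex->height);
        if (x < 0) x = 0; if (x >= tex->width)  x = tex->width  - 1;
        if (y < 0) y = 0; if (y >= tex->height) y = tex->height - 1;
        return tex->pixels[y * tex->width + x];
    }

    /* For each screen pixel: if the hit test reports an intersection with a polygon,
     * the interpolated UV coordinates select the color written to the frame buffer. */
    void shade_image(uint32_t *frame, int w, int h, const Texture *tex,
                     int (*trace)(int px, int py, float *u, float *v))
    {
        for (int py = 0; py < h; ++py)
            for (int px = 0; px < w; ++px) {
                float u, v;
                if (trace(px, py, &u, &v))                          /* steps 1-2 */
                    frame[py * w + px] = sample_texture(tex, u, v); /* step 3 */
            }
    }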

History

The original technique was pioneered by Edwin Catmull in 1974 as part of his doctoral thesis.[4]

Texture mapping originally referred to diffuse mapping, a method that simply mapped pixels from a texture to a 3D surface ("wrapping" the image around the object). In recent decades, the advent of multi-pass rendering, multitexturing, mipmaps, and more complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, specular mapping, occlusion mapping, and many other variations on the technique (controlled by a materials system) has made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene.

Examples of multitexturing:
1: Untextured sphere, 2: Texture and bump maps, 3: Texture map only, 4: Opacity and texture maps

Texture maps

"Texture maps" redirects here. For the album by Steve Roach, see Texture Maps: The Lost Pieces Vol. 3.

A texture map[5][6] is an image applied ("mapped") to the surface of a shape or polygon.[7] This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles.

They may have one to three dimensions, although two dimensions are most common for visible surfaces. For use with modern hardware, texture map data may be stored in swizzled or tiled orderings to improve cache coherency. Rendering APIs typically manage texture map resources (which may be located in device memory) as buffers or surfaces, and may allow 'render to texture' for additional effects such as post processing or environment mapping.
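
One common swizzled layout is Morton (Z-order) indexing, which interleaves the bits of the x and y texel coordinates so that texels that are close together in 2D also stay close together in memory. A sketch in C follows; the exact layout used by any particular GPU is hardware specific.

    #include <stdint.h>

    /* Spread the low 16 bits of n so that they occupy the even bit positions. */
    static uint32_t part1by1(uint32_t n)
    {
        n &= 0x0000FFFF;
        n = (n | (n << 8)) & 0x00FF00FF;
        n = (n | (n << 4)) & 0x0F0F0F0F;
        n = (n | (n << 2)) & 0x33333333;
        n = (n | (n << 1)) & 0x55555555;
        return n;
    }

    /* Index of texel (x, y) within a Morton-ordered (swizzled) texture. */
    static uint32_t morton_index(uint32_t x, uint32_t y)
    {
        return part1by1(x) | (part1by1(y) << 1);
    }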

Texture maps usually contain RGB color data (stored as direct color, in compressed formats, or as indexed color), and sometimes an additional channel for alpha blending (RGBA), especially for billboards and decal overlay textures. It is possible to use the alpha channel (which may be convenient to store in formats parsed by hardware) for other uses such as specularity.

Multiple texture maps (or channels) may be combined for control over specularity, normals, displacement, or subsurface scattering, e.g. for skin rendering.

Multiple texture images may be combined in texture atlases or array textures to reduce state changes for modern hardware. (They may be considered a modern evolution of tile map graphics.) Modern hardware often supports cube map textures with multiple faces for environment mapping.

Creation

Texture maps may be acquired by scanning or digital photography, designed in image manipulation software such as GIMP or Photoshop, or painted onto 3D surfaces directly in a 3D paint tool such as Mudbox or ZBrush.

Texture application

This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as UV coordinates).[8] This may be done through explicit assignment of vertex attributes, manually edited in a 3D modelling package through UV unwrapping tools. It is also possible to associate a procedural transformation from 3D space to texture space with the material. This might be accomplished via planar projection or, alternatively, cylindrical or spherical mapping. More complex mappings may consider the distance along a surface to minimize distortion. These coordinates are interpolated across the faces of polygons to sample the texture map during rendering. Textures may be repeated or mirrored to extend a finite rectangular bitmap over a larger area, or they may have a one-to-one unique "injective" mapping from every piece of a surface (which is important for render mapping and light mapping, also known as baking).
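
As a rough illustration of one such procedural transformation, the sketch below derives UV coordinates by planar projection onto the XY plane of an object's bounding box; the structs and the choice of projection plane are illustrative assumptions.

    typedef struct { float x, y, z; } Vec3;
    typedef struct { float u, v; } UV;

    /* Planar projection: drop the z component and rescale x and y into [0, 1]
     * using the object's bounding box (bmin, bmax). */
    static UV planar_project(Vec3 p, Vec3 bmin, Vec3 bmax)
    {
        UV uv;
        uv.u = (p.x - bmin.x) / (bmax.x - bmin.x);
        uv.v = (p.y - bmin.y) / (bmax.y - bmin.y);
        return uv;
    }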

Texture space

Texture mapping maps the model surface (or screen space during rasterization) into texture space; in this space, the texture map is visible in its undistorted form. UV unwrapping tools typically provide a view in texture space for manual editing of texture coordinates. Some rendering techniques such as subsurface scattering may be performed approximately by texture-space operations.

Multitexturing

Multitexturing is the use of more than one texture at a time on a polygon.[9] For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Microtextures or detail textures are used to add higher frequency details, and dirt maps add weathering and variation; this can greatly reduce the apparent periodicity of repeating textures. Modern graphics may use more than 10 layers, which are combined using shaders, for greater fidelity. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface (such as tree bark or rough concrete) that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in video games, as graphics hardware has become powerful enough to accommodate it in real time.[10]
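
A minimal sketch of how such layers might be combined per color channel, assuming floating point values in [0, 1]; real engines express this in shaders, and the exact combine function varies from one material to another.

    typedef struct { float r, g, b; } Color;

    /* Diffuse sample modulated by a light map and a detail texture. The detail
     * texture is assumed to be centred on 0.5, so 2*detail brightens or darkens
     * around the base color; the result may need clamping to [0, 1]. */
    static Color combine_layers(Color diffuse, Color lightmap, Color detail)
    {
        Color out;
        out.r = diffuse.r * lightmap.r * (2.0f * detail.r);
        out.g = diffuse.g * lightmap.g * (2.0f * detail.g);
        out.b = diffuse.b * lightmap.b * (2.0f * detail.b);
        return out;
    }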

Texture filtering

The way that samples (e.g. when viewed as pixels on the screen) are calculated from the texels (texture pixels) is governed by texture filtering. The cheapest method is nearest-neighbour interpolation, but bilinear interpolation and trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of a texture coordinate being outside the texture, it is either clamped or wrapped. Anisotropic filtering better eliminates directional artefacts when viewing textures from oblique viewing angles.
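
A sketch of bilinear filtering with a wrap addressing mode, assuming a single-channel floating point texture (an RGBA texture filters each channel the same way).

    #include <math.h>

    /* Fetch one texel, wrapping out-of-range coordinates. */
    static float texel_wrap(const float *tex, int w, int h, int x, int y)
    {
        x = ((x % w) + w) % w;
        y = ((y % h) + h) % h;
        return tex[y * w + x];
    }

    static float sample_bilinear(const float *tex, int w, int h, float u, float v)
    {
        float fx = u * w - 0.5f, fy = v * h - 0.5f;   /* texel centres at half-integers */
        int x0 = (int)floorf(fx), y0 = (int)floorf(fy);
        float tx = fx - x0, ty = fy - y0;             /* fractional position */

        float c00 = texel_wrap(tex, w, h, x0,     y0);
        float c10 = texel_wrap(tex, w, h, x0 + 1, y0);
        float c01 = texel_wrap(tex, w, h, x0,     y0 + 1);
        float c11 = texel_wrap(tex, w, h, x0 + 1, y0 + 1);

        float top    = c00 + (c10 - c00) * tx;        /* blend horizontally, */
        float bottom = c01 + (c11 - c01) * tx;        /* then vertically     */
        return top + (bottom - top) * ty;
    }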

Texture streaming

Texture streaming is a means of using data streams for textures, where each texture is available in two or more different resolutions, so that the engine can determine which version should be loaded into memory and used based on the draw distance from the viewer and how much memory is available for textures. Texture streaming allows a rendering engine to use low resolution textures for objects far away from the viewer's camera, and resolve those into more detailed textures, read from a data source, as the point of view nears the objects.
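
A sketch of how a streaming system might choose which version of a texture to keep resident. The two-level scheme and the distance threshold are illustrative assumptions; real engines typically stream individual mip levels based on screen-space metrics.

    typedef enum { TEX_LOW, TEX_HIGH } TexResolution;

    static TexResolution choose_resolution(float distance_to_camera,
                                           float high_detail_distance,
                                           long bytes_free, long high_res_bytes)
    {
        if (distance_to_camera > high_detail_distance)
            return TEX_LOW;     /* far away: the low-resolution version is enough */
        if (bytes_free < high_res_bytes)
            return TEX_LOW;     /* not enough texture memory for the detailed version */
        return TEX_HIGH;        /* near the camera and memory permits */
    }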

Baking

As an optimization, it is possible to render detail from a complex, high-resolution model or expensive process (such as global illumination) into a surface texture (possibly on a low-resolution model). This technique is called baking (or render mapping) and is most commonly used for light maps, but may also be used to generate normal maps and displacement maps. Some computer games (e.g. Messiah) have used this technique. The original Quake software engine used on-the-fly baking to combine light maps and colour maps in a process called surface caching.

Baking can be used as a form of level of detail generation, where a complex scene with many different elements and materials may be approximated by a single element with a single texture, which is then algorithmically reduced for lower rendering cost and fewer drawcalls. It is also used to take high-detail models from 3D sculpting software and point cloud scanning and approximate them with meshes more suitable for realtime rendering.

Rasterisation algorithms

Various techniques have evolved in software and hardware implementations. Each offers different trade-offs in precision, versatility, and performance.

Affine texture mapping

Because affine texture mapping does not take into account the depth information of a polygon's vertices, when the polygon is not perpendicular to the viewer it produces a noticeable defect, especially when rasterized as triangles.

Affine texture mapping linearly interpolates texture coordinates across a surface, making it the fastest form of texture mapping. Some software and hardware (such as the original PlayStation) project vertices in 3D space onto the screen during rendering and linearly interpolate the texture coordinates in screen space between them. This may be done by incrementing fixed-point UV coordinates or by an incremental error algorithm akin to Bresenham's line algorithm.

For polygons that are not perpendicular to the viewer, this leads to noticeable distortion with perspective transformations (as shown in the figure: the checker box texture appears bent), especially for primitives near the camera. This distortion can be reduced by subdividing polygons into smaller polygons.

Using quad primitives for rectangular objects can look less incorrect than if those rectangles were split into triangles. However, since interpolating four points adds complexity to the rasterization, most early implementations preferred triangles only. Some hardware, such as the forward texture mapping used by the Nvidia NV1, offered efficient quad primitives. With perspective correction, triangles become equivalent to quad primitives and this advantage disappears.

For rectangular objects, especially when perpendicular to the view, linearly interpolating across a quad can give an affine result that is superior to the same rectangle split into two affine triangles.

For rectangular objects that are at right angles to the viewer (like floors and walls), the perspective only needs to be corrected in one direction across the screen rather than both. The correct perspective mapping can be calculated at the left and right edges of the floor. Affine linear interpolation across that horizontal span will look correct because every pixel along that line is the same distance from the viewer.

Perspective correctness

Perspective correct texturing accounts for the vertices' positions in 3D space rather than simply interpolating coordinates in 2D screen space.[11] While achieving the correct visual effect, perspective correct texturing is more expensive to calculate.[11]

To perform perspective correction of the texture coordinates $u$ and $v$, with $z$ being the depth component from the viewer's point of view, it is possible to take advantage of the fact that the values $1/z$, $u/z$, and $v/z$ are linear in screen space across the surface being textured. In contrast, the original $z$, $u$, and $v$, before the division, are not linear across the surface in screen space. It is therefore possible to linearly interpolate these reciprocals across the surface, computing corrected values at each pixel, to produce a perspective correct texture mapping.

To do this, the reciprocals at each vertex of the geometry (three points for a triangle) are calculated. Vertex $n$ has the reciprocals $u_n/z_n$, $v_n/z_n$, and $1/z_n$. Then, linear interpolation can be done on these reciprocals between the $n$ vertices (e.g., using barycentric coordinates), resulting in interpolated values across the surface. At a given point, this yields the interpolated $u_i$, $v_i$, and $1/z_i$ (the interpolated reciprocal of $z$). However, because the division by $z$ altered their coordinate system, these $u_i$, $v_i$ cannot be used directly as texture coordinates. To correct back to $u, v$ space, the corrected depth is first recovered by taking the reciprocal once again, $z_{\text{correct}} = \frac{1}{1/z_i}$, which is then used to correct the coordinates: $u_{\text{correct}} = u_i \cdot z_{\text{correct}}$ and $v_{\text{correct}} = v_i \cdot z_{\text{correct}}$.[12]

This correction makes it so that the difference from pixel to pixel between texture coordinates is smaller in parts of the polygon that are closer to the viewer (stretching the texture wider) and is larger in parts that are farther away (compressing the texture).

Affine texture mapping directly interpolates a texture coordinate $u_\alpha$ between two endpoints $u_0$ and $u_1$:

$$u_\alpha = (1 - \alpha)\,u_0 + \alpha\,u_1, \qquad 0 \le \alpha \le 1.$$

Perspective correct mapping interpolates after dividing by the depth $z$, then uses the interpolated reciprocal of $z$ to recover the correct coordinate:

$$u_\alpha = \frac{(1 - \alpha)\,\dfrac{u_0}{z_0} + \alpha\,\dfrac{u_1}{z_1}}{(1 - \alpha)\,\dfrac{1}{z_0} + \alpha\,\dfrac{1}{z_1}}$$

3D graphics hardware typically supports perspective correct texturing.
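
Written out in code, the two schemes differ only in whether the division by depth happens before or after interpolation; a minimal sketch in C for a single coordinate:

    /* Affine: interpolate u directly. */
    static float affine_u(float u0, float u1, float alpha)
    {
        return (1.0f - alpha) * u0 + alpha * u1;
    }

    /* Perspective correct: interpolate u/z and 1/z, then divide to recover u. */
    static float perspective_u(float u0, float z0, float u1, float z1, float alpha)
    {
        float u_over_z   = (1.0f - alpha) * (u0 / z0) + alpha * (u1 / z1);
        float one_over_z = (1.0f - alpha) * (1.0f / z0) + alpha * (1.0f / z1);
        return u_over_z / one_over_z;
    }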

Various techniques have evolved for rendering texture mapped geometry into images with different quality and precision trade-offs, which can be applied to both software and hardware.

Classic software texture mappers generally only performed simple texture mapping with at most one lighting effect (typically applied through a lookup table), and perspective correctness was about 16 times more expensive than affine mapping.

Restricted camera rotation

The Doom engine did not permit ramped floors or slanted walls. With this restriction, only one perspective correction is required per horizontal or vertical span, rather than one per pixel.

The Doom engine restricted the world to vertical walls and horizontal floors and ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would be a constant depth coordinate along a vertical line and the floors and ceilings would have a constant depth along a horizontal line. After performing one perspective correction calculation for the depth, the rest of the line could use fast affine mapping. Some later renderers of this era simulated a small amount of camera pitch with shearing, which allowed the appearance of greater freedom while using the same rendering technique.

Some engines were able to render texture mapped heightmaps (e.g. Nova Logic's Voxel Space, and the engine for Outcast) via Bresenham-like incremental algorithms, producing the appearance of a texture mapped landscape without the use of traditional geometric primitives.[13]

Subdivision for perspective correction

Every triangle can be further subdivided into groups of about 16 pixels in order to achieve two goals: keeping the arithmetic mill (the divider) busy at all times, and overlapping the expensive per-group perspective divide with the cheap affine interpolation of the pixels in between.

World space subdivision

For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering and affine mapping is used on them. The reason this technique works is that the distortion of affine mapping becomes much less noticeable on smaller polygons. The Sony PlayStation made extensive use of this because it only supported affine mapping in hardware and had a relatively high triangle throughput compared to its peers.

Screen space subdivision

Screen space subdivision techniques. Top left: Quake-like, top right: bilinear, bottom left: const-z

Software renderers generally prefer screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation), thus lessening the overhead further. Another reason is that affine texture mapping does not fit into the low number of CPU registers of the x86 CPU; the 68000 and RISC processors are much more suited for that approach.

A different approach was taken for Quake, which would calculate perspective correct coordinates only once every 16 pixels of a scanline and linearly interpolate between them, effectively running at the speed of linear interpolation because the perspective correct calculation runs in parallel on the co-processor.[14] As the polygons are rendered independently, it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more constant z, but the effort seems not to be worth it.[original research?]
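
A simplified sketch of this span subdivision scheme (not Quake's actual source): the perspective divide is performed only at 16-pixel block boundaries, and texel coordinates are interpolated linearly in between. write_pixel stands in for the real texel fetch and frame buffer write.

    /* u_z, v_z, inv_z are u/z, v/z and 1/z at the start of the span;
     * du_z, dv_z, dinv_z are their per-pixel gradients in screen space. */
    void draw_span(float u_z, float v_z, float inv_z,
                   float du_z, float dv_z, float dinv_z,
                   int count, void (*write_pixel)(float u, float v))
    {
        float u0 = u_z / inv_z, v0 = v_z / inv_z;   /* correct values at span start */
        while (count > 0) {
            int run = count > 16 ? 16 : count;
            /* perspective-correct values at the end of this block */
            float uz_end = u_z + du_z * run, vz_end = v_z + dv_z * run;
            float iz_end = inv_z + dinv_z * run;
            float u1 = uz_end / iz_end, v1 = vz_end / iz_end;
            /* cheap affine interpolation inside the block */
            float du = (u1 - u0) / run, dv = (v1 - v0) / run;
            for (int i = 0; i < run; ++i)
                write_pixel(u0 + du * i, v0 + dv * i);
            u0 = u1; v0 = v1;
            u_z = uz_end; v_z = vz_end; inv_z = iz_end;
            count -= run;
        }
    }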

Other techniques

One other technique is to approximate the perspective with a faster calculation, such as a polynomial. A second uses the $1/z_i$ values of the last two drawn pixels to linearly extrapolate the next value. For the latter, the division is then done starting from those values so that all that has to be divided is a small remainder.[15] However, the amount of bookkeeping needed makes this technique too slow on most systems.[citation needed]

A third technique, used by the Build Engine (used, most notably, in Duke Nukem 3D), builds on the constant distance trick used by the Doom engine by finding and rendering along the line of constant distance for arbitrary polygons.

Hardware implementations

Texture mapping hardware was originally developed for simulation (e.g. as implemented in the Evans and Sutherland ESIG and Singer-Link Digital Image Generator (DIG) systems), for professional graphics workstations (such as Silicon Graphics), and for broadcast digital video effects machines such as the Ampex ADO. Texture mapping hardware later appeared in arcade cabinets, consumer video game consoles, and PC video cards in the mid-1990s.

In flight simulations, texture mapping provided important motion and altitude cues necessary for pilot training that were not available on untextured surfaces. Additionally, texture mapping hardware allowed prefiltered texture patterns stored in memory to be accessed by the video processor in real time.[16]

Modern graphics processing units (GPUs) provide specialised fixed function units called texture samplers, or texture mapping units, to perform texture mapping, usually with trilinear filtering or better multi-tap anisotropic filtering, and hardware for decoding specific formats such as DXTn. As of 2016, texture mapping hardware is ubiquitous as most SoCs contain a suitable GPU.

Some hardware implementations combine texture mapping with hidden-surface determination in tile-based deferred rendering or scanline rendering; such systems only fetch the visible texels, at the expense of using greater workspace for transformed vertices. Most systems have settled on the z-buffering approach, which can still reduce the texture mapping workload with front-to-back sorting.

On earlier graphics hardware, there were two competing paradigms of how to deliver a texture to the screen:

  1. Forward texture mapping iterates through each texel on the texture and decides where to place it on the screen.
  2. Inverse texture mapping instead iterates through pixels on the screen and decides what texel to use for each.

Of these methods, inverse texture mapping has become standard in modern hardware.

Inverse texture mapping

With this method, a pixel on the screen is mapped to a point on the texture. Each vertex of a rendering primitive is projected to a point on the screen, and each of these points is mapped to a u,v texel coordinate on the texture. A rasterizer will interpolate between these points to fill in each pixel covered by the primitive.

The primary advantage of this method is that each pixel covered by a primitive will be traversed exactly once. Once a primitive's vertices are transformed, the amount of remaining work scales directly with how many pixels it covers on the screen.

The main disadvantage is that the memory access pattern in the texture space will not be linear if the texture is at an angle to the screen. This disadvantage is often addressed by texture caching techniques, such as the swizzled texture memory arrangement.

The linear interpolation can be used directly for simple and efficient affine texture mapping, but can also be adapted for perspective correctness.

Forward texture mapping

Forward texture mapping maps each texel of the texture to a pixel on the screen. After transforming a rectangular primitive to a place on the screen, a forward texture mapping renderer iterates through each texel on the texture, splatting each one onto a pixel of the frame buffer. This was used by some hardware, such as the 3DO, the Sega Saturn, and the NV1.

The primary advantage is that the texture will be accessed in a simple linear order, allowing very efficient caching of the texture data. However, this benefit is also its disadvantage: as a primitive gets smaller on screen, it still has to iterate over every texel in the texture, causing many pixels to be overdrawn redundantly.
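
A sketch of the forward-mapping loop; transform_texel and the frame buffer layout are illustrative placeholders rather than any particular hardware's behaviour.

    #include <stdint.h>

    /* Iterate over texels in linear order, transform each to a screen position,
     * and splat it. Small primitives still touch every texel, so pixels may be
     * written many times over. */
    void forward_map(const uint32_t *texels, int tw, int th,
                     uint32_t *frame, int fw, int fh,
                     void (*transform_texel)(int tx, int ty, int *sx, int *sy))
    {
        for (int ty = 0; ty < th; ++ty)
            for (int tx = 0; tx < tw; ++tx) {
                int sx, sy;
                transform_texel(tx, ty, &sx, &sy);
                if (sx >= 0 && sx < fw && sy >= 0 && sy < fh)
                    frame[sy * fw + sx] = texels[ty * tw + tx];
            }
    }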

This method is also well suited for rendering quad primitives rather than reducing them to triangles, which provided an advantage when perspective correct texturing was not available in hardware. This is because the affine distortion of a quad looks less incorrect than the same quad split into two triangles (see the § Affine texture mapping section above). The NV1 hardware also allowed a quadratic interpolation mode to provide an even better approximation of perspective correctness.

UV mapping became an important technique for 3D modelling and assisted in clipping the texture correctly when the primitive went past the edge of the screen, but existing hardware did not provide effective implementations of this. These shortcomings could have been addressed with further development, but GPU design has mostly shifted toward using the inverse mapping technique.

Applications

Beyond 3D rendering, the availability of texture mapping hardware has inspired its use for accelerating other tasks:

Tomography

It is possible to use texture mapping hardware to accelerate both the reconstruction of voxel data sets from tomographic scans and the visualization of the results.[17]

User interfaces

Many user interfaces use texture mapping to accelerate animated transitions of screen elements, e.g. Exposé in Mac OS X.

References

  1. ^ Wang, Huamin. "Texture Mapping" (PDF). Department of Computer Science and Engineering, Ohio State University. Archived from the original (PDF) on 2016-03-04. Retrieved 2016-01-15.
  2. ^ "Texture Mapping" (PDF). www.inf.pucrs.br. Retrieved September 15, 2019.
  3. ^ "CS 405 Texture Mapping". www.cs.uregina.ca. Retrieved 22 March 2018.
  4. ^ Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces (PDF) (PhD thesis). University of Utah. Archived from the original (PDF) on 2014-11-14. Retrieved 2015-09-03.
  5. ^ Fosner, Ron (January 1999). "DirectX 6.0 Goes Ballistic With Multiple New Features And Much Faster Code". Microsoft.com. Archived from the original on October 31, 2016. Retrieved September 15, 2019.
  6. ^ Hvidsten, Mike (Spring 2004). "The OpenGL Texture Mapping Guide". homepages.gac.edu. Archived from the original on 23 May 2019. Retrieved 22 March 2018.
  7. ^ Jon Radoff, Anatomy of an MMORPG, "Anatomy of an MMORPG". radoff.com. August 22, 2008. Archived from the original on 2009-12-13. Retrieved 2009-12-13.
  8. ^ Roberts, Susan. "How to use textures". Archived from the original on 24 September 2021. Retrieved 20 March 2021.
  9. ^ Blythe, David. Advanced Graphics Programming Techniques Using OpenGL. Siggraph 1999. (PDF) (see: Multitexture)
  10. ^ Kautz, Jan; Heidrich, Wolfgang; Seidel, Hans-Peter. Real-Time Bump Map Synthesis. (Max-Planck-Institut für Informatik; University of British Columbia)
  11. ^ a b "The Next Generation 1996 Lexicon A to Z: Perspective Correction". Next Generation. No. 15. Imagine Media. March 1996. p. 38.
  12. ^ Kalms, Mikael (1997). "Perspective Texturemapping". www.lysator.liu.se. Retrieved 2020-03-27.
  13. ^ "Voxel terrain engine", introduction. In a coder's mind, 2005 (archived 2013).
  14. ^ Abrash, Michael. Michael Abrash's Graphics Programming Black Book Special Edition. The Coriolis Group, Scottsdale, Arizona, 1997. ISBN 1-57610-174-6 (PDF archived 2007-03-11 at the Wayback Machine) (Chapter 70, pg. 1282)
  15. ^ US 5739818, Spackman, John Neil, "Apparatus and method for performing perspectively correct interpolation in computer graphics", issued 1998-04-14
  16. ^ Yan, Johnson (August 1985). "Advances in Computer-Generated Imagery for Flight Simulation". IEEE Computer Graphics and Applications. 5 (8): 37–51. doi:10.1109/MCG.1985.276213.
  17. ^ "texture mapping for tomography".
