
This tutorial covers (single-step) parallax mapping.
It extends and is based on Section “Lighting of Bumpy Surfaces”. Note that this tutorial is meant to teach you how this technique works. If you want to actually use parallax mapping in Unity, you should use a built-in shader that supports it.
The normal mapping technique presented in Section “Lighting of Bumpy Surfaces” only changes the lighting of a flat surface to create the illusion of bumps and dents. If one looks straight onto a surface (i.e. in the direction of the surface normal vector), this works very well. However, if one looks onto a surface from another angle (as in the image to the left), the bumps should also stick out of the surface while the dents should recede into it. Of course, this could be achieved by geometrically modeling bumps and dents; however, this would require processing many more vertices. Single-step parallax mapping, on the other hand, is a very efficient technique similar to normal mapping, which doesn't require additional triangles but can still move virtual bumps by several pixels to make them stick out of a flat surface. However, the technique is limited to bumps and dents of small heights and requires some fine-tuning for best results.

Parallax mapping was proposed in 2001 by Tomomichi Kaneko et al. in their paper “Detailed shape representation with parallax mapping” (ICAT 2001). The basic idea is to offset the texture coordinates that are used for the texturing of the surface (in particular normal mapping). If this offset of texture coordinates is computed appropriately, it is possible to move parts of the texture (e.g. bumps) as if they were sticking out of the surface.
The illustration to the left shows the view vector V in the direction to the viewer and the surface normal vector N at the point of a surface that is rasterized in a fragment shader. Parallax mapping proceeds in three steps: determining the height of the height map at the rasterized point, computing an offset of the texture coordinates from this height and the local view direction, and using the offset texture coordinates for all further texture lookups.
For the computation of the offsets we require the height h of a height map at the rasterized point, which is implemented in the example by a texture lookup in the A component of the texture property _ParallaxMap, which should be a gray-scale image representing heights as discussed in Section “Lighting of Bumpy Surfaces”. We also require the view direction V in the local surface coordinate system formed by the normal vector (z axis), the tangent vector (x axis), and the binormal vector (y axis), which was also introduced in Section “Lighting of Bumpy Surfaces”. To this end we compute a transformation M from local surface coordinates to object space with:
M = ( T_x  B_x  N_x
      T_y  B_y  N_y
      T_z  B_z  N_z )

where T, B, and N are given in object coordinates. (In Section “Lighting of Bumpy Surfaces” we had a similar matrix but with vectors in world coordinates.)
We compute the view direction V in object space (as the difference between the rasterized position and the camera position transformed from world space to object space) and then we transform it to the local surface space with the inverse matrix M⁻¹, which can be computed as:

M⁻¹ = Mᵀ
This is possible because T, B, and N are orthogonal and normalized. (Actually, the situation is a bit more complicated because we won't normalize these vectors but use their length for another transformation; see below.) Thus, in order to transform V from object space to the local surface space, we have to multiply it with the transposed matrix Mᵀ. This is actually good, because in Cg it is easier to construct the transposed matrix, as T, B, and N are the row vectors of Mᵀ.
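The transpose-as-inverse trick can be checked numerically. The following sketch uses a made-up orthonormal frame (a rotation about the z axis); the point is that the surface-space coordinates of an object-space vector are just its dot products with T, B, and N, i.e. a multiplication with the matrix whose rows are T, B, and N:

```python
import math

# A hypothetical orthonormal surface frame in object space: tangent T,
# binormal B, normal N (here simply a frame rotated about the z axis).
a = 0.4
T = (math.cos(a), math.sin(a), 0.0)
B = (-math.sin(a), math.cos(a), 0.0)
N = (0.0, 0.0, 1.0)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# M has T, B, N as columns; its transpose has them as rows.
# For orthonormal columns, multiplying by the transpose inverts M, so
# the surface-space coordinates of an object-space vector V are simply
# the dot products with T, B, and N -- exactly what multiplying with
# float3x3(tangent, binormal, normal) computes in Cg.
V_object = (0.3, -0.2, 0.9)
V_surface = (dot(T, V_object), dot(B, V_object), dot(N, V_object))

# Round trip: M * V_surface (columns T, B, N) reconstructs V_object.
V_back = tuple(T[i] * V_surface[0] + B[i] * V_surface[1] + N[i] * V_surface[2]
               for i in range(3))
assert all(abs(x - y) < 1e-12 for x, y in zip(V_back, V_object))
```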
Once we have V in the local surface coordinate system with the z axis in the direction of the normal vector N, we can compute the offsets o_x (in x direction) and o_y (in y direction) by using similar triangles (compare with the illustration):

o_x / h = V_x / V_z   and   o_y / h = V_y / V_z.

Thus:

o_x = h V_x / V_z   and   o_y = h V_y / V_z.
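A small numeric sketch of these formulas (the height and view direction below are made-up illustration values), which also confirms that normalizing V beforehand makes no difference:

```python
# Made-up numbers for illustration: height h from the height map and
# the view direction V in local surface coordinates (z along the normal).
h = 0.02
V = (0.5, -0.25, 1.0)

# o_x = h * V_x / V_z and o_y = h * V_y / V_z (similar triangles)
o_x = h * V[0] / V[2]
o_y = h * V[1] / V[2]

# Normalizing V first gives the same offsets, since only ratios
# of its components enter the formulas.
length = (V[0] ** 2 + V[1] ** 2 + V[2] ** 2) ** 0.5
Vn = (V[0] / length, V[1] / length, V[2] / length)
assert abs(o_x - h * Vn[0] / Vn[2]) < 1e-12
assert abs(o_y - h * Vn[1] / Vn[2]) < 1e-12
```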
Note that it is not necessary to normalize V because we use only ratios of its components, which are not affected by the normalization.
Finally, we have to transform o_x and o_y into texture space. This would be quite difficult if Unity didn't help us: the tangent attribute tangent is actually appropriately scaled and has a fourth component tangent.w for scaling the binormal vector, such that the transformation of the view direction V scales o_x and o_y appropriately to have them in texture coordinate space without further computations.
The implementation shares most of the code with Section “Lighting of Bumpy Surfaces”. In particular, the same scaling of the binormal vector with the fourth component of the tangent attribute is used in order to take the mapping of the offsets from local surface space to texture space into account:
float3 binormal = cross(input.normal, input.tangent.xyz) * input.tangent.w;
We have to add an output parameter for the view vector V in the local surface coordinate system (with the scaling of axes to take the mapping to texture space into account). This parameter is called viewDirInScaledSurfaceCoords. It is computed by transforming the view vector in object coordinates (viewDirInObjectCoords) with the matrix localSurface2ScaledObjectT as explained above:
float3 viewDirInObjectCoords = mul(
   modelMatrixInverse, float4(_WorldSpaceCameraPos, 1.0)).xyz
   - input.vertex.xyz;
float3x3 localSurface2ScaledObjectT = float3x3(
   input.tangent.xyz, binormal, input.normal);
   // vectors are orthogonal
output.viewDirInScaledSurfaceCoords = mul(
   localSurface2ScaledObjectT, viewDirInObjectCoords);
   // we multiply with the transpose to multiply with
   // the "inverse" (apart from the scaling)
The rest of the vertex shader is the same as for normal mapping, see Section “Lighting of Bumpy Surfaces”, except that the view direction in world coordinates is computed in the vertex shader instead of the fragment shader, which is necessary to keep the number of arithmetic operations in the fragment shader small enough for some GPUs.
In the fragment shader, we first query the height map for the height of the rasterized point. This height is specified by the A component of the texture _ParallaxMap. The values between 0 and 1 are transformed to the range -_Parallax/2 to +_Parallax/2 with a shader property _Parallax in order to offer some user control over the strength of the effect (and to be compatible with the fallback shader):
float height = _Parallax * (-0.5 + tex2D(_ParallaxMap,
   _ParallaxMap_ST.xy * input.tex.xy + _ParallaxMap_ST.zw).x);
The offsets o_x and o_y are then computed as described above. However, we also clamp each offset to a user-specified interval between -_MaxTexCoordOffset and _MaxTexCoordOffset in order to make sure that the offset stays within reasonable bounds. (If the height map consists of more or less flat plateaus of constant height with smooth transitions between these plateaus, _MaxTexCoordOffset should be smaller than the thickness of these transition regions; otherwise the sample point might be in a different plateau with a different height, which would mean that the approximation of the intersection point is arbitrarily bad.) The code is:
float2 texCoordOffsets = clamp(
   height * input.viewDirInScaledSurfaceCoords.xy
   / input.viewDirInScaledSurfaceCoords.z,
   -_MaxTexCoordOffset, +_MaxTexCoordOffset);
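The height decoding and the clamped offset computation can be sketched together. The property values below are illustrative, not from the tutorial; note how the clamp catches the large offset that a grazing view direction would otherwise produce:

```python
# Illustrative values for the shader properties (not from the tutorial).
parallax = 0.04      # _Parallax
max_offset = 0.01    # _MaxTexCoordOffset

def tex_coord_offsets(sample, view_dir):
    """sample: height-map value in [0, 1];
    view_dir: V in scaled surface coordinates (z along the normal)."""
    height = parallax * (-0.5 + sample)
        # maps [0, 1] to [-parallax/2, +parallax/2]
    o_x = height * view_dir[0] / view_dir[2]
    o_y = height * view_dir[1] / view_dir[2]
    clamp = lambda v: max(-max_offset, min(max_offset, v))
    return (clamp(o_x), clamp(o_y))

# A grazing view direction would produce o_x = 0.1 here;
# the clamp limits it to max_offset.
print(tex_coord_offsets(1.0, (5.0, 0.0, 1.0)))  # (0.01, 0.0)
```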
In the following code, we have to apply the offsets to the texture coordinates in all texture lookups; i.e., we have to replace float2(input.tex) (or equivalently input.tex.xy) by (input.tex.xy + texCoordOffsets), e.g.:
float4 encodedNormal = tex2D(_BumpMap,
   _BumpMap_ST.xy * (input.tex.xy + texCoordOffsets)
   + _BumpMap_ST.zw);
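To make the order of operations explicit, here is a hypothetical helper mirroring the lookup coordinates: the parallax offsets are added to the raw texture coordinates first, and only then is Unity's tiling/offset transform (the _ST vector) applied:

```python
def transformed_tex_coords(uv, offsets, st):
    """Hypothetical helper mirroring the shader's lookup coordinates.
    st = (tiling_x, tiling_y, offset_x, offset_y), like Unity's _BumpMap_ST."""
    return (st[0] * (uv[0] + offsets[0]) + st[2],
            st[1] * (uv[1] + offsets[1]) + st[3])

# With default tiling (1, 1) and offset (0, 0), the parallax offsets
# are simply added to the interpolated texture coordinates.
u, v = transformed_tex_coords((0.5, 0.5), (0.01, -0.005), (1.0, 1.0, 0.0, 0.0))
```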
The rest of the fragment shader code is just as it was for Section “Lighting of Bumpy Surfaces”.
As discussed in the previous section, most of this code is taken from Section “Lighting of Bumpy Surfaces”. Note that if you want to use the code on a mobile device with OpenGL ES, make sure to change the decoding of the normal map as described in that tutorial.
The part about parallax mapping is actually only a few lines. Most of the names of the shader properties were chosen according to the fallback shader; the user interface labels are much more descriptive.
Shader "Cg parallax mapping" {
   Properties {
      _BumpMap ("Normal Map", 2D) = "bump" {}
      _ParallaxMap ("Heightmap (in A)", 2D) = "black" {}
      _Parallax ("Max Height", Float) = 0.01
      _MaxTexCoordOffset ("Max Texture Coordinate Offset", Float) = 0.01
      _Color ("Diffuse Material Color", Color) = (1,1,1,1)
      _SpecColor ("Specular Material Color", Color) = (1,1,1,1)
      _Shininess ("Shininess", Float) = 10
   }
   CGINCLUDE // common code for all passes of all subshaders

   #include "UnityCG.cginc"

   uniform float4 _LightColor0;
      // color of light source (from "Lighting.cginc")

   // User-specified properties
   uniform sampler2D _BumpMap;
   uniform float4 _BumpMap_ST;
   uniform sampler2D _ParallaxMap;
   uniform float4 _ParallaxMap_ST;
   uniform float _Parallax;
   uniform float _MaxTexCoordOffset;
   uniform float4 _Color;
   uniform float4 _SpecColor;
   uniform float _Shininess;

   struct vertexInput {
      float4 vertex : POSITION;
      float4 texcoord : TEXCOORD0;
      float3 normal : NORMAL;
      float4 tangent : TANGENT;
   };
   struct vertexOutput {
      float4 pos : SV_POSITION;
      float4 posWorld : TEXCOORD0;
         // position of the vertex (and fragment) in world space
      float4 tex : TEXCOORD1;
      float3 tangentWorld : TEXCOORD2;
      float3 normalWorld : TEXCOORD3;
      float3 binormalWorld : TEXCOORD4;
      float3 viewDirWorld : TEXCOORD5;
      float3 viewDirInScaledSurfaceCoords : TEXCOORD6;
   };

   vertexOutput vert(vertexInput input)
   {
      vertexOutput output;

      float4x4 modelMatrix = unity_ObjectToWorld;
      float4x4 modelMatrixInverse = unity_WorldToObject;

      output.tangentWorld = normalize(
         mul(modelMatrix, float4(input.tangent.xyz, 0.0)).xyz);
      output.normalWorld = normalize(
         mul(float4(input.normal, 0.0), modelMatrixInverse).xyz);
      output.binormalWorld = normalize(
         cross(output.normalWorld, output.tangentWorld)
         * input.tangent.w); // tangent.w is specific to Unity

      float3 binormal =
         cross(input.normal, input.tangent.xyz) * input.tangent.w;
         // appropriately scaled tangent and binormal
         // to map distances from object space to texture space

      float3 viewDirInObjectCoords = mul(
         modelMatrixInverse, float4(_WorldSpaceCameraPos, 1.0)).xyz
         - input.vertex.xyz;
      float3x3 localSurface2ScaledObjectT = float3x3(
         input.tangent.xyz, binormal, input.normal);
         // vectors are orthogonal
      output.viewDirInScaledSurfaceCoords = mul(
         localSurface2ScaledObjectT, viewDirInObjectCoords);
         // we multiply with the transpose to multiply with
         // the "inverse" (apart from the scaling)

      output.posWorld = mul(modelMatrix, input.vertex);
      output.viewDirWorld = normalize(
         _WorldSpaceCameraPos - output.posWorld.xyz);
      output.tex = input.texcoord;
      output.pos = UnityObjectToClipPos(input.vertex);
      return output;
   }

   // fragment shader with ambient lighting
   float4 fragWithAmbient(vertexOutput input) : COLOR
   {
      // parallax mapping: compute height and
      // find offset in texture coordinates
      // for the intersection of the view ray
      // with the surface at this height
      float height = _Parallax * (-0.5 + tex2D(_ParallaxMap,
         _ParallaxMap_ST.xy * input.tex.xy + _ParallaxMap_ST.zw).x);
      float2 texCoordOffsets = clamp(
         height * input.viewDirInScaledSurfaceCoords.xy
         / input.viewDirInScaledSurfaceCoords.z,
         -_MaxTexCoordOffset, +_MaxTexCoordOffset);

      // normal mapping: lookup and decode normal from bump map
      // in principle we have to normalize tangentWorld,
      // binormalWorld, and normalWorld again; however, the
      // potential problems are small since we use this
      // matrix only to compute "normalDirection",
      // which we normalize anyways
      float4 encodedNormal = tex2D(_BumpMap,
         _BumpMap_ST.xy * (input.tex.xy + texCoordOffsets)
         + _BumpMap_ST.zw);
      float3 localCoords = float3(2.0 * encodedNormal.a - 1.0,
         2.0 * encodedNormal.g - 1.0, 0.0);
      localCoords.z = sqrt(1.0 - dot(localCoords, localCoords));
         // approximation without sqrt: localCoords.z =
         // 1.0 - 0.5 * dot(localCoords, localCoords);
      float3x3 local2WorldTranspose = float3x3(
         input.tangentWorld, input.binormalWorld, input.normalWorld);
      float3 normalDirection =
         normalize(mul(localCoords, local2WorldTranspose));

      float3 lightDirection;
      float attenuation;

      if (0.0 == _WorldSpaceLightPos0.w) // directional light?
      {
         attenuation = 1.0; // no attenuation
         lightDirection = normalize(_WorldSpaceLightPos0.xyz);
      }
      else // point or spot light
      {
         float3 vertexToLightSource =
            _WorldSpaceLightPos0.xyz - input.posWorld.xyz;
         float distance = length(vertexToLightSource);
         attenuation = 1.0 / distance; // linear attenuation
         lightDirection = normalize(vertexToLightSource);
      }

      float3 ambientLighting =
         UNITY_LIGHTMODEL_AMBIENT.rgb * _Color.rgb;

      float3 diffuseReflection =
         attenuation * _LightColor0.rgb * _Color.rgb
         * max(0.0, dot(normalDirection, lightDirection));

      float3 specularReflection;
      if (dot(normalDirection, lightDirection) < 0.0)
         // light source on the wrong side?
      {
         specularReflection = float3(0.0, 0.0, 0.0);
            // no specular reflection
      }
      else // light source on the right side
      {
         specularReflection = attenuation * _LightColor0.rgb
            * _SpecColor.rgb * pow(max(0.0, dot(
            reflect(-lightDirection, normalDirection),
            input.viewDirWorld)), _Shininess);
      }

      return float4(ambientLighting + diffuseReflection
         + specularReflection, 1.0);
   }

   // fragment shader for pass 2 without ambient lighting
   float4 fragWithoutAmbient(vertexOutput input) : COLOR
   {
      // parallax mapping: compute height and
      // find offset in texture coordinates
      // for the intersection of the view ray
      // with the surface at this height
      float height = _Parallax * (-0.5 + tex2D(_ParallaxMap,
         _ParallaxMap_ST.xy * input.tex.xy + _ParallaxMap_ST.zw).x);
      float2 texCoordOffsets = clamp(
         height * input.viewDirInScaledSurfaceCoords.xy
         / input.viewDirInScaledSurfaceCoords.z,
         -_MaxTexCoordOffset, +_MaxTexCoordOffset);

      // normal mapping: lookup and decode normal from bump map
      // in principle we have to normalize tangentWorld,
      // binormalWorld, and normalWorld again; however, the
      // potential problems are small since we use this
      // matrix only to compute "normalDirection",
      // which we normalize anyways
      float4 encodedNormal = tex2D(_BumpMap,
         _BumpMap_ST.xy * (input.tex.xy + texCoordOffsets)
         + _BumpMap_ST.zw);
      float3 localCoords = float3(2.0 * encodedNormal.a - 1.0,
         2.0 * encodedNormal.g - 1.0, 0.0);
      localCoords.z = sqrt(1.0 - dot(localCoords, localCoords));
         // approximation without sqrt: localCoords.z =
         // 1.0 - 0.5 * dot(localCoords, localCoords);
      float3x3 local2WorldTranspose = float3x3(
         input.tangentWorld, input.binormalWorld, input.normalWorld);
      float3 normalDirection =
         normalize(mul(localCoords, local2WorldTranspose));

      float3 lightDirection;
      float attenuation;

      if (0.0 == _WorldSpaceLightPos0.w) // directional light?
      {
         attenuation = 1.0; // no attenuation
         lightDirection = normalize(_WorldSpaceLightPos0.xyz);
      }
      else // point or spot light
      {
         float3 vertexToLightSource =
            _WorldSpaceLightPos0.xyz - input.posWorld.xyz;
         float distance = length(vertexToLightSource);
         attenuation = 1.0 / distance; // linear attenuation
         lightDirection = normalize(vertexToLightSource);
      }

      float3 diffuseReflection =
         attenuation * _LightColor0.rgb * _Color.rgb
         * max(0.0, dot(normalDirection, lightDirection));

      float3 specularReflection;
      if (dot(normalDirection, lightDirection) < 0.0)
         // light source on the wrong side?
      {
         specularReflection = float3(0.0, 0.0, 0.0);
            // no specular reflection
      }
      else // light source on the right side
      {
         specularReflection = attenuation * _LightColor0.rgb
            * _SpecColor.rgb * pow(max(0.0, dot(
            reflect(-lightDirection, normalDirection),
            input.viewDirWorld)), _Shininess);
      }

      return float4(diffuseReflection + specularReflection, 1.0);
   }
   ENDCG

   SubShader {
      Pass {
         Tags { "LightMode" = "ForwardBase" }
            // pass for ambient light and first light source
         CGPROGRAM
         #pragma vertex vert
         #pragma fragment fragWithAmbient
            // the functions are defined in the CGINCLUDE part
         ENDCG
      }
      Pass {
         Tags { "LightMode" = "ForwardAdd" }
            // pass for additional light sources
         Blend One One // additive blending
         CGPROGRAM
         #pragma vertex vert
         #pragma fragment fragWithoutAmbient
            // the functions are defined in the CGINCLUDE part
         ENDCG
      }
   }
}
Congratulations! If you actually understand the whole shader, you have come a long way. In fact, the shader includes lots of concepts (transformations between coordinate systems, the Phong reflection model, normal mapping, parallax mapping, ...). More specifically, we have seen how parallax mapping approximates the parallax of bumpy surfaces by offsetting texture coordinates, and how this is implemented in a Unity shader.
If you still want to know more