One advantage of ray tracing compared to scan line or rasterization-based rendering methods is that it is easy to employ unusual image projections. We have great freedom in how the image sample positions are mapped into ray directions, since the rendering algorithm does not depend on properties such as straight lines in the scene always projecting to straight lines in the image.
In this section, we will describe a camera model that traces rays in all directions around a point in the scene, giving a view of everything that is visible from that point. The SphericalCamera supports two spherical parameterizations from Section 3.8 to map points in the image to associated directions. Figure 5.16 shows this camera in action with the San Miguel model.
SphericalCamera does not derive from ProjectiveCamera since the projections that it uses are nonlinear and cannot be captured by a single matrix.
The first mapping that SphericalCamera supports is the equirectangular mapping that was defined in Section 3.8.3. In the implementation here, θ values range from 0 at the top of the image to π at the bottom of the image, and φ values range from 0 to 2π, moving from left to right across the image.
The equirectangular mapping is easy to evaluate and has the advantage that lines of constant latitude and longitude on the sphere remain straight. However, it preserves neither area nor angles between curves on the sphere (i.e., it is not conformal). These issues are especially evident at the top and bottom of the image in Figure 5.16(a).
Therefore, the SphericalCamera also supports the equal-area mapping from Section 3.8.3. With this mapping, any finite solid angle of directions on the sphere maps to the same area in the image, regardless of where it is on the sphere. (This mapping is also used by the ImageInfiniteLight, which is described in Section 12.5.2, and so images rendered using this camera can be used as light sources.) The equal-area mapping’s use with the SphericalCamera is shown in Figure 5.16(b).
An enumeration reflects which mapping should be used.
The main task of the GenerateRay() method is to apply the requested mapping. The rest of it follows the earlier GenerateRay() methods.
For both mappings, (u, v) coordinates in NDC space are found by dividing the raster space sample location by the image’s overall resolution. Then, after the mapping is applied, the y and z coordinates are swapped to account for the fact that both mappings are defined with z as the “up” direction, while y is “up” in camera space.
For the equirectangular mapping, the (u, v) coordinates are scaled to cover the (θ, φ) range [0, π] × [0, 2π] and the spherical coordinate formula is used to compute the ray direction.
The (u, v) values for the CameraSample may be slightly outside of the range [0, 1]², due to the pixel sample filter function. A call to WrapEqualAreaSquare() takes care of handling the boundary conditions before EqualAreaSquareToSphere() performs the actual mapping.