5.3 Spherical Camera

One advantage of ray tracing compared to scan line or rasterization-based rendering methods is that it is easy to employ unusual image projections. We have great freedom in how the image sample positions are mapped into ray directions, since the rendering algorithm does not depend on properties such as straight lines in the scene always projecting to straight lines in the image.

In this section, we will describe a camera model that traces rays in all directions around a point in the scene, giving a view of everything that is visible from that point. The SphericalCamera supports two spherical parameterizations from Section 3.8 to map points in the image to associated directions. Figure 5.16 shows this camera in action with the San Miguel model.

<<SphericalCamera Definition>>= 
class SphericalCamera : public CameraBase {
  public:
    <<SphericalCamera::Mapping Definition>>
    <<SphericalCamera Public Methods>>
    static SphericalCamera *Create(const ParameterDictionary &parameters,
        const CameraTransform &cameraTransform, Film film, Medium medium,
        const FileLoc *loc, Allocator alloc = {});

    PBRT_CPU_GPU
    pstd::optional<CameraRay> GenerateRay(CameraSample sample,
                                          SampledWavelengths &lambda) const;

    PBRT_CPU_GPU
    pstd::optional<CameraRayDifferential> GenerateRayDifferential(
        CameraSample sample, SampledWavelengths &lambda) const {
        return CameraBase::GenerateRayDifferential(this, sample, lambda);
    }

    PBRT_CPU_GPU
    SampledSpectrum We(const Ray &ray, SampledWavelengths &lambda,
                       Point2f *pRaster2 = nullptr) const {
        LOG_FATAL("We() unimplemented for SphericalCamera");
        return {};
    }

    PBRT_CPU_GPU
    void PDF_We(const Ray &ray, Float *pdfPos, Float *pdfDir) const {
        LOG_FATAL("PDF_We() unimplemented for SphericalCamera");
    }

    PBRT_CPU_GPU
    pstd::optional<CameraWiSample> SampleWi(const Interaction &ref, Point2f u,
                                            SampledWavelengths &lambda) const {
        LOG_FATAL("SampleWi() unimplemented for SphericalCamera");
        return {};
    }

    std::string ToString() const;

  private:
    <<SphericalCamera Private Members>>
};

Figure 5.16: The San Miguel scene rendered with the SphericalCamera, which traces rays in all directions from the camera position. (a) Rendered using an equirectangular mapping. (b) Rendered with an equal-area mapping. (Scene courtesy of Guillermo M. Leal Llaguno.)

SphericalCamera does not derive from ProjectiveCamera since the projections that it uses are nonlinear and cannot be captured by a single 4 × 4 matrix.

<<SphericalCamera Public Methods>>= 
SphericalCamera(CameraBaseParameters baseParameters, Mapping mapping) :CameraBase(baseParameters), mapping(mapping) { <<Compute minimum differentials forSphericalCamera>> 
FindMinimumDifferentials(this);
}

The first mapping that SphericalCamera supports is the equirectangular mapping that was defined in Section 3.8.3. In the implementation here, θ values range from 0 at the top of the image to π at the bottom of the image, and φ values range from 0 to 2π, moving from left to right across the image.

The equirectangular mapping is easy to evaluate and has the advantage that lines of constant latitude and longitude on the sphere remain straight. However, it preserves neither area nor angles between curves on the sphere (i.e., it is not conformal). These issues are especially evident at the top and bottom of the image in Figure 5.16(a).

Therefore, the SphericalCamera also supports the equal-area mapping from Section 3.8.3. With this mapping, any finite solid angle of directions on the sphere maps to the same area in the image, regardless of where it is on the sphere. (This mapping is also used by the ImageInfiniteLight, which is described in Section 12.5.2, and so images rendered using this camera can be used as light sources.) The equal-area mapping's use with the SphericalCamera is shown in Figure 5.16(b).

An enumeration reflects which mapping should be used.

<<SphericalCamera::Mapping Definition>>= 
enum Mapping { EquiRectangular, EqualArea };

<<SphericalCamera Private Members>>= 
Mapping mapping;

The main task of the GenerateRay() method is to apply the requested mapping. The rest of it follows the earlier GenerateRay() methods.

<<SphericalCamera Method Definitions>>= 
pstd::optional<CameraRay> SphericalCamera::GenerateRay(
        CameraSample sample, SampledWavelengths &lambda) const {
    <<Compute spherical camera ray direction>>
    Ray ray(Point3f(0, 0, 0), dir, SampleTime(sample.time), medium);
    return CameraRay{RenderFromCamera(ray)};
}

For the use of both mappings, (u, v) coordinates in NDC space are found by dividing the raster-space sample location by the image's overall resolution. Then, after the mapping is applied, the y and z coordinates are swapped to account for the fact that both mappings are defined with z as the "up" direction, while y is "up" in camera space.

<<Compute spherical camera ray direction>>= 
Point2f uv(sample.pFilm.x / film.FullResolution().x,
           sample.pFilm.y / film.FullResolution().y);
Vector3f dir;
if (mapping == EquiRectangular) {
    <<Compute ray direction using equirectangular mapping>>
} else {
    <<Compute ray direction using equal-area mapping>>
}
pstd::swap(dir.y, dir.z);

For the equirectangular mapping, the (u, v) coordinates are scaled to cover the (θ, φ) range and the spherical coordinate formula is used to compute the ray direction.

<<Compute ray direction using equirectangular mapping>>= 
Float theta = Pi * uv[1], phi = 2 * Pi * uv[0];
dir = SphericalDirection(std::sin(theta), std::cos(theta), phi);

The (u, v) values for the CameraSample may be slightly outside of the range [0, 1]², due to the pixel sample filter function. A call to WrapEqualAreaSquare() takes care of handling the boundary conditions before EqualAreaSquareToSphere() performs the actual mapping.

<<Compute ray direction using equal-area mapping>>= 
uv =WrapEqualAreaSquare(uv);dir =EqualAreaSquareToSphere(uv);

