A macro photograph showing the defocused effect of a shallow depth of field on a tilted page of text.
This photo was taken with an aperture of f/22, creating a mostly in-focus background.
The same scene as above with an aperture of f/1.8. Notice how much blurrier the background appears in this photo.
The depth of field (DOF) is the distance between the nearest and the farthest objects that are in acceptably sharp focus in an image captured with a camera. See also the closely related depth of focus.
Effect of aperture on blur and DOF (depth of field). The points in focus (2) project points onto the image plane (5), but points at different distances (1 and 3) project blurred images, or circles of confusion. Decreasing the aperture size (4) reduces the size of the blur spots for points not in the focused plane, so that the blurring becomes imperceptible and all points fall within the DOF.
For cameras that can only focus on one object distance at a time, depth of field is the distance between the nearest and the farthest objects that are in acceptably sharp focus in the image.[1] "Acceptably sharp focus" is defined using a property called the "circle of confusion".
The depth of field can be determined by focal length, distance to subject (object to be imaged), the acceptable circle of confusion size, and aperture.[2] The approximate depth of field can be given by:

DOF ≈ 2u²Nc / f²

for a given maximum acceptable circle of confusion diameter c, focal length f, f-number N, and distance to subject u.[3][4]
As distance or the size of the acceptable circle of confusion increases, the depth of field increases; however, increasing the size of the aperture (i.e., reducing the f-number) or increasing the focal length reduces the depth of field. Depth of field changes linearly with f-number and circle of confusion, but in proportion to the square of the distance to the subject and in inverse proportion to the square of the focal length. As a result, photos taken at extremely close range (i.e., small u) have a proportionally much smaller depth of field.
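The scaling behaviour described above can be illustrated with a short numeric sketch, using the approximate formula given earlier (the 0.03 mm circle of confusion is an assumed value for a full-frame sensor):

```python
def depth_of_field(u, N, c, f):
    """Approximate total depth of field: DOF ≈ 2 u² N c / f².

    u: distance to subject, N: f-number, c: acceptable circle of
    confusion diameter, f: focal length (all lengths in mm).
    """
    return 2 * u**2 * N * c / f**2

# 50 mm lens on full frame (c ≈ 0.03 mm), subject at 3 m:
wide_open = depth_of_field(3000, 1.8, 0.03, 50)   # f/1.8 → ≈ 389 mm
stopped   = depth_of_field(3000, 22, 0.03, 50)    # f/22  → ≈ 4752 mm

# DOF scales with the square of subject distance:
doubled = depth_of_field(6000, 1.8, 0.03, 50)     # 4 × wide_open
```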
Motion pictures make limited use of aperture control; to produce a consistent image quality from shot to shot, cinematographers usually choose a single aperture setting for interiors (e.g., scenes inside a building) and another for exteriors (e.g., scenes in an area outside a building), and adjust exposure through the use of camera filters or light levels. Aperture settings are adjusted more frequently in still photography, where variations in depth of field are used to produce a variety of special effects.
Depth of field for different values of aperture using a 50 mm objective lens and a full-frame DSLR camera. The focus point is on the first column of blocks.[13]
Precise focus is possible only at one exact distance from a lens;[a] at that distance, a point object produces a small spot image. At other distances, a point object produces a larger blur spot that is approximately a circle. When this circular spot is sufficiently small, it is visually indistinguishable from a point and appears to be in focus. The diameter of the largest circle that is indistinguishable from a point is known as the acceptable circle of confusion, or informally simply as the circle of confusion.
The acceptable circle of confusion depends on how the final image will be used. A circle of confusion of 0.25 mm is generally accepted for an image viewed from 25 cm away.[14]
For 35 mm motion pictures, the image area on the film is roughly 22 mm by 16 mm. The limit of tolerable error was traditionally set at 0.05 mm (0.0020 in) diameter, while for 16 mm film, where the frame is about half as large, the tolerance is stricter at 0.025 mm (0.00098 in).[15] More modern practice for 35 mm productions sets the circle of confusion limit at 0.025 mm (0.00098 in).[16]
Traditional depth-of-field formulas can be hard to use in practice. As an alternative, the same effective calculation can be done without regard to the focal length and f-number.[b] Moritz von Rohr and later Merklinger observed that the effective absolute aperture diameter can be used in a similar formula in certain circumstances.[19]
Moreover, traditional depth-of-field formulas assume equal acceptable circles of confusion for near and far objects. Merklinger[c] suggested that distant objects often need to be much sharper to be clearly recognizable, whereas closer objects, being larger on the film, do not need to be so sharp.[19] The loss of detail in distant objects may be particularly noticeable with extreme enlargements. Achieving this additional sharpness in distant objects usually requires focusing beyond the hyperfocal distance, sometimes almost at infinity. For example, if photographing a cityscape with a traffic bollard in the foreground, this approach, termed the object field method by Merklinger, would recommend focusing very close to infinity, and stopping down to make the bollard sharp enough. With this approach, foreground objects cannot always be made perfectly sharp, but the loss of sharpness in near objects may be acceptable if recognizability of distant objects is paramount.
Other authors such as Ansel Adams have taken the opposite position, maintaining that slight unsharpness in foreground objects is usually more disturbing than slight unsharpness in distant parts of a scene.[20]
Another approach is focus sweep. The focal plane is swept across the entire relevant range during a single exposure. This creates a blurred image, but with a convolution kernel that is nearly independent of object depth, so that the blur is almost entirely removed after computational deconvolution. This has the added benefit of dramatically reducing motion blur.[22]
Light Scanning Photomacrography (LSP) is another technique used to overcome depth of field limitations in macro and micro photography. This method allows for high-magnification imaging with exceptional depth of field. LSP involves scanning a thin plane of light across the subject, which is mounted on a stage moving perpendicular to the light plane. This ensures the entire subject remains in sharp focus from the nearest to the farthest details, providing comprehensive depth of field in a single image. Initially developed in the 1960s and further refined in the 1980s and 1990s, LSP was particularly valuable in scientific and biomedical photography before digital focus stacking became prevalent.[23][24]
Other technologies use a combination of lens design and post-processing: Wavefront coding is a method by which controlled aberrations are added to the optical system so that the focus and depth of field can be improved later in the process.[25]
The lens design can be changed even more: in colour apodization the lens is modified such that each colour channel has a different lens aperture. For example, the red channel may be f/2.4, green may be f/2.4, whilst the blue channel may be f/5.6. Therefore, the blue channel will have a greater depth of field than the other colours. The image processing identifies blurred regions in the red and green channels and, in these regions, copies the sharper edge data from the blue channel. The result is an image that combines the best features of the different f-numbers.[26]
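A minimal one-dimensional sketch of this detail-transfer step, assuming a simple box-blur frequency split in NumPy (the function names and blur widths are illustrative, not from the cited implementation):

```python
import numpy as np

def box_blur(x, k):
    """1-D box blur with edge padding (k odd)."""
    xp = np.pad(x, k // 2, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def apodization_sharpen(blurred, guide, k=9):
    """Keep the low-frequency content of `blurred`, but replace its
    edge detail with that of the sharper `guide` channel."""
    return box_blur(blurred, k) + (guide - box_blur(guide, k))

# Toy 1-D scene: a step edge imaged sharply in blue (f/5.6, deep DOF)
# and smeared in red (f/2.4, shallow DOF).
edge = np.repeat([0.0, 1.0], 50)
blue = edge.copy()
red = box_blur(edge, 15)

red_fixed = apodization_sharpen(red, blue)
```

After the transfer, the red channel's edge transition is much steeper, approximating the blue channel's sharpness while retaining its own overall intensity.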
At the extreme, aplenoptic camera captures4D light field information about a scene, so the focus and depth of field can be altered after the photo is taken.
Hansma and Peterson have discussed determining the combined effects of defocus and diffraction using a root-square combination of the individual blur spots.[30][31] Hansma's approach determines the f-number that will give the maximum possible sharpness; Peterson's approach determines the minimum f-number that will give the desired sharpness in the final image and yields a maximum depth of field for which the desired sharpness can be achieved.[d] In combination, the two methods can be regarded as giving a maximum and minimum f-number for a given situation, with the photographer free to choose any value within the range, as conditions (e.g., potential motion blur) permit. Gibson gives a similar discussion, additionally considering blurring effects of camera lens aberrations, enlarging lens diffraction and aberrations, the negative emulsion, and the printing paper.[27][e] Couzin gave a formula essentially the same as Hansma's for optimal f-number, but did not discuss its derivation.[32]
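The root-square combination can be sketched as follows, under the assumptions that the image-side defocus blur for a focus spread Δv is Δv/(2N) and that the diffraction blur is an Airy-disk diameter of about 2.44λN; with these assumptions the sharpest f-number has the closed form N = √(Δv/4.88λ), which for green light reproduces the familiar N ≈ √(375 Δv) with Δv in millimetres:

```python
import math

WAVELENGTH = 0.00055  # mm, mid-visible green (an assumed value)

def combined_blur(N, dv):
    """Root-square combination of defocus and diffraction blur (mm),
    assuming defocus blur dv/(2N) and diffraction blur 2.44*λ*N."""
    defocus = dv / (2 * N)
    diffraction = 2.44 * WAVELENGTH * N
    return math.hypot(defocus, diffraction)

def sharpest_f_number(dv):
    """f-number minimizing the combined blur; closed form obtained by
    setting d/dN [combined_blur²] = 0, giving N = sqrt(dv / (4.88 λ))."""
    return math.sqrt(dv / (4.88 * WAVELENGTH))

# A 10 mm image-side focus spread on a view camera:
N_opt = sharpest_f_number(10.0)   # ≈ 61
```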
Minox LX camera with hyperfocal red dot.
Nikon 28 mm f/2.8 lens with markings for the depth of field. The lens is set at the hyperfocal distance for f/22. The orange mark corresponding to f/22 is at the infinity mark (∞). Focus is acceptable from under 0.7 m to infinity.
Minolta 100–300 mm zoom lens. The depth of field, and thus the hyperfocal distance, changes with the focal length as well as the f-stop. This lens is set to the hyperfocal distance for f/32 at a focal length of 100 mm.
In optics and photography, hyperfocal distance is a distance from a lens beyond which all objects can be brought into an "acceptable" focus. As the hyperfocal distance is the focus distance giving the maximum depth of field, it is the most desirable distance at which to set the focus of a fixed-focus camera.[41] The hyperfocal distance is entirely dependent upon what level of sharpness is considered to be acceptable.
The hyperfocal distance has a property called "consecutive depths of field": a lens focused at an object whose distance from the lens is the hyperfocal distance H will hold a depth of field from H/2 to infinity; if the lens is focused to H/2, the depth of field will extend from H/3 to H; if the lens is then focused to H/3, the depth of field will extend from H/4 to H/2, and so on.
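The consecutive-depths-of-field property can be checked numerically, a sketch using the standard formula H = f²/(Nc) + f together with the usual thin-lens approximations for the near and far limits (small focal-length terms are neglected in the limits):

```python
def hyperfocal(f, N, c):
    """Standard hyperfocal distance H = f²/(N c) + f (all in mm)."""
    return f * f / (N * c) + f

def dof_limits(s, H):
    """Near and far limits of acceptable focus when focused at s,
    using the approximations near = sH/(H+s), far = sH/(H-s)."""
    near = s * H / (H + s)
    far = float("inf") if s >= H else s * H / (H - s)
    return near, far

H = hyperfocal(50, 8, 0.03)   # 50 mm lens at f/8, c = 0.03 mm: ≈ 10.5 m
# Consecutive depths of field:
#   focused at H   -> sharp from H/2 to infinity
#   focused at H/2 -> sharp from H/3 to H
#   focused at H/3 -> sharp from H/4 to H/2
```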
Thomas Sutton and George Dawson first wrote about hyperfocal distance (or "focal range") in 1867.[42] Louis Derr in 1906 may have been the first to derive a formula for hyperfocal distance. Rudolf Kingslake wrote in 1951 about the two methods of measuring hyperfocal distance.
Some cameras have their hyperfocal distance marked on the focus dial. For example, on the Minox LX focusing dial there is a red dot between 2 m and infinity; when the lens is set at the red dot, that is, focused at the hyperfocal distance, the depth of field stretches from 2 m to infinity. Some lenses have markings indicating the hyperfocal range for specific f-stops, also called a depth-of-field scale.[43]
Zeiss Ikon Contessa with red marks for hyperfocal distance of 20 ft at f/8.
This section covers some additional formulas for evaluating depth of field; however, they are all subject to significant simplifying assumptions: for example, they assume the paraxial approximation of Gaussian optics. They are suitable for practical photography; lens designers would use significantly more complex ones.
To render the near and far limits of the depth of field equally sharp, the lens is focused to the image distance v = 2vN vF / (vN + vF), the harmonic mean of the near and far image distances. In practice, this is equivalent to the arithmetic mean for shallow depths of field.[44] Sometimes, view camera users refer to the difference vN − vF as the focus spread.[45]
If a subject is at distance s and the foreground or background is at distance D, let the distance between the subject and the foreground or background be indicated by xd = |D − s|.
The blur disk diameter b of a detail at distance xd from the subject can be expressed as a function of the subject magnification ms, focal length f, f-number N, or alternatively the aperture d, according to

b = (f ms / N) · xd / (s ± xd) = d ms · xd / (s ± xd)
The minus sign applies to a foreground object, and the plus sign applies to a background object.
The blur increases with the distance from the subject; when b is less than the circle of confusion, the detail is within the depth of field.
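A small sketch of this check, evaluating b for a detail in front of or behind the subject and comparing it with the circle of confusion (the numeric values are illustrative):

```python
def blur_disk(f, N, ms, s, xd, background=True):
    """Blur disk diameter b = (f·ms/N) · xd/(s ± xd): plus sign for a
    detail behind the subject, minus for one in front (all in mm)."""
    D = s + xd if background else s - xd
    return (f * ms / N) * xd / D

# 50 mm lens at f/2, subject at 1 m, assumed magnification ms = 0.05:
b_back = blur_disk(50, 2.0, 0.05, 1000, 500, background=True)
b_front = blur_disk(50, 2.0, 0.05, 1000, 500, background=False)

# A foreground detail at the same xd blurs more than a background one,
# and a detail is within the DOF only when b is below the circle of
# confusion (here taken as 0.03 mm):
in_dof = blur_disk(50, 2.0, 0.05, 1000, 10) <= 0.03
```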
^Peterson does not give a closed-form expression for the minimum f-number, though such an expression can be obtained by simple algebraic manipulation of his Equation 3.
^The analytical section at the end of Gibson (1975) was originally published as "Magnification and Depth of Detail in Photomacrography" in the Journal of the Photographic Society of America, Vol. 26, No. 6, June 1960.
^Xiong, Yalin; Shafer, Steven A. (1993). "Depth from Focusing and Defocusing". Proceedings of the 1993 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '93). IEEE.
Lefkowitz, Lester (1979). The Manual of Close-Up Photography. Garden City, NY: Amphoto. ISBN 0-8174-2456-3. OCLC 4883084.
Merklinger, Harold M. (1992). The INs and OUTs of FOCUS: An Alternative Way to Estimate Depth-of-Field and Sharpness in the Photographic Image (v. 1.0.3 ed.). Bedford, Nova Scotia: Seaboard Printing Limited. ISBN 0-9695025-0-8. OCLC 23651332. Version 1.03e available in PDF at http://www.trenholm.org/hmmerk/.
Merklinger, Harold M. (1993). Focusing the View Camera: A Scientific Way to Focus the View Camera and Estimate Depth of Field (v. 1.0 ed.). Bedford, Nova Scotia: Seaboard Printing Limited. ISBN 0-9695025-2-4. OCLC 1072495227. Version 1.6.1 available in PDF at http://www.trenholm.org/hmmerk/.
Peterson, Stephen (March–April 1996). "Image Sharpness and Focusing the View Camera". Photo Techniques: 51–53. Available as GIF images on the Large Format page.
Ray, Sidney F. (2000). "The Geometry of Image Formation". In Jacobson, Ralph E.; Ray, Sidney F.; Atteridge, Geoffrey G.; Axford, Norman R. (eds.). The Manual of Photography: Photographic and Digital Imaging (9th ed.). Oxford: Focal Press. ISBN 0-240-51574-9. OCLC 44267873.
Williams, Charles S.; Becklund, Orville (1989). Introduction to the Optical Transfer Function. New York: Wiley. pp. 293–300. Reprinted 2002, Bellingham, WA: SPIE Press, ISBN 0-8194-4336-0.
Williams, John B. (1990). Image Clarity: High-Resolution Photography. Boston: Focal Press. ISBN 0-240-80033-8. OCLC 19514912.