Super-resolution imaging (SR) is a class of techniques that improve the resolution of an imaging system. In optical SR the diffraction limit of systems is transcended, while in geometrical SR the resolution of digital imaging sensors is enhanced.
Several concepts are fundamental to super-resolution imaging:
Diffraction limit: the capacity of an optical instrument to reproduce the details of an object in an image has limits that are imposed by laws of physics: the diffraction equations in the wave theory of light,[3] or the uncertainty principle for photons in quantum mechanics.[4] Information transfer can never be increased beyond this boundary, but packets outside the limits can be cleverly swapped for (or multiplexed with) some inside it.[5] Super-resolution microscopy does not so much “break” as “circumvent” the diffraction limit. New procedures probing electromagnetic disturbances at the molecular level (in the so-called near field)[6] remain fully consistent with Maxwell's equations.
Spatial frequency domain: A succinct expression of the diffraction limit is given in the spatial frequency domain. In Fourier optics light distributions are expressed as superpositions of a series of grating light patterns in a range of fringe widths; these widths represent the spatial frequencies. It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial frequency, beyond which pattern elements fail to be transferred into the optical image, i.e., are not resolved. But in fact what is set by diffraction theory is the width of the passband, not a fixed upper limit. No laws of physics are broken when a spatial frequency band beyond the cut-off spatial frequency is swapped for one inside it: this has long been implemented in dark-field microscopy. Nor are information-theoretical rules broken when superimposing several bands;[7][8][9] disentangling them in the received image requires assumptions of object invariance during multiple exposures, i.e., the substitution of one kind of uncertainty for another.
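The passband picture above can be sketched numerically. In the following minimal illustration (all frequencies and the cut-off are made-up example values, not properties of any real instrument), the diffraction limit is modeled as an ideal low-pass filter in the spatial frequency domain: fringes inside the passband survive in the image, fringes beyond the cut-off do not.

```python
import numpy as np

# Illustrative 1-D "object": two fringe patterns, one below and one
# above a hypothetical cut-off spatial frequency.
n = 256
x = np.arange(n)
low = np.cos(2 * np.pi * 10 * x / n)    # 10 cycles: inside the passband
high = np.cos(2 * np.pi * 60 * x / n)   # 60 cycles: beyond the cut-off
obj = low + high

# Model the diffraction limit as an ideal low-pass filter with a
# cut-off at 30 cycles over the field of view.
cutoff = 30
freqs = np.fft.fftfreq(n, d=1.0 / n)    # frequencies in cycles per field
spectrum = np.fft.fft(obj)
spectrum[np.abs(freqs) > cutoff] = 0.0  # frequencies past the cut-off are lost
image = np.fft.ifft(spectrum).real

# The "image" retains the coarse fringes but not the fine ones.
err_low = np.max(np.abs(image - low))
print(err_low)  # close to 0: only the low-frequency pattern survives
```

The fixed quantity here is the width of the retained band, not its position; the band could equally be centered elsewhere in the spectrum, as in dark-field microscopy.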
Information: When the term super-resolution is used in techniques based on the inference of object details using a statistical treatment of the image within standard resolution limits (for example, averaging multiple exposures), it involves an exchange of one kind of information (extracting signal from noise) for another (the assumption that the target has remained invariant). Recent work incorporates quantum-transformer hybrids into super-resolution, such as QUIET-SR, a 2025 model that employs shifted quantum window attention within a transformer to enhance image detail while respecting diffraction and information-theory limits. Similarly, frequency-integrated transformers (e.g., FIT) enrich super-resolution by explicitly combining spatial and frequency-domain information via FFT-based attention, improving reconstruction across scales.
Resolution and localization: True resolution involves the distinction of whether a target, e.g. a star or a spectral line, is single or double, ordinarily requiring separable peaks in the image. When a target is known to be single, its location can be determined with higher precision than the image width by finding the centroid (center of gravity) of its image light distribution. The word ultra-resolution had been proposed for this process[10] but it did not catch on, and the high-precision localization procedure is typically referred to as super-resolution.
Substituting spatial-frequency bands: Though the bandwidth allowable by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum. Dark-field illumination in microscopy is an example. See also aperture synthesis.
The "structured illumination" technique of super-resolution is related to moiré patterns. The target, a band of fine fringes (top row), is beyond the diffraction limit. When a band of somewhat coarser resolvable fringes (second row) is artificially superimposed, the combination (third row) features moiré components that are within the diffraction limit and hence contained in the image (bottom row), allowing the presence of the fine fringes to be inferred, even though they are not themselves represented in the image.
An image is formed using the normal passband of the optical device. Then, some known light structure (for example, a set of light fringes) is superimposed on the target.[8][9] The image now contains components resulting from the combination of the target and the superimposed light structure, e.g. moiré fringes, and carries information about target detail which simple unstructured illumination does not. The “superresolved” components, however, need disentangling to be revealed. For an example, see structured illumination (figure to left).
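The moiré down-shift at the heart of structured illumination can be sketched with a one-dimensional toy model (the fringe frequencies and cut-off below are arbitrary illustration values). Multiplying the unresolvable target fringes by a known illumination pattern produces sum- and difference-frequency components; only the difference component falls inside the passband and reaches the image.

```python
import numpy as np

n = 512
x = np.arange(n)
cutoff = 40

target = np.cos(2 * np.pi * 55 * x / n)      # fine fringes, beyond cut-off
illum = 1 + np.cos(2 * np.pi * 35 * x / n)   # known pattern, within cut-off

# The product of two cosines contains the sum (90 cycles) and the
# difference (20 cycles) frequencies; only the difference passes the
# low-pass "optics".
product = target * illum
freqs = np.fft.fftfreq(n, d=1.0 / n)
spec = np.fft.fft(product)
spec[np.abs(freqs) > cutoff] = 0.0
image = np.fft.ifft(spec).real

# The moiré component at 55 - 35 = 20 cycles survives in the image,
# carrying information about the unresolvable 55-cycle target.
moire = 0.5 * np.cos(2 * np.pi * 20 * x / n)
print(np.max(np.abs(image - moire)))
```

Because the illumination pattern is known, the recorded 20-cycle moiré component can be computationally shifted back to its true 55-cycle position, which is how structured-illumination reconstruction disentangles the superimposed bands.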
Multiple parameter use within traditional diffraction limit
If a target has no special polarization or wavelength properties, two polarization states or non-overlapping wavelength regions can be used to encode target details, one in a spatial-frequency band inside the cut-off limit, the other beyond it. Both would use normal passband transmission but are then separately decoded to reconstitute target structure with extended resolution.
Super-resolution microscopy is generally discussed within the realm of conventional optical imagery. However, modern technology allows the probing of electromagnetic disturbance within molecular distances of the source,[6] which has superior resolution properties. See also evanescent waves and the development of the new superlens.
Compared to a single image marred by noise during its acquisition or transmission (left), the signal-to-noise ratio is improved by suitable combination of several separately-obtained images (right). This can be achieved only within the intrinsic resolution capability of the imaging process for revealing such detail.
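The noise reduction from combining exposures follows the textbook square-root law: averaging N frames with independent noise reduces the noise standard deviation by a factor of sqrt(N). A brief numerical check (the scene and noise level are synthetic example values):

```python
import numpy as np

# Averaging N independently noisy exposures of the same (invariant)
# scene reduces the noise standard deviation by about sqrt(N).
rng = np.random.default_rng(0)
scene = np.sin(np.linspace(0, 4 * np.pi, 1000))
sigma, n_frames = 0.5, 100

frames = scene + rng.normal(0.0, sigma, size=(n_frames, scene.size))
single_rmse = np.sqrt(np.mean((frames[0] - scene) ** 2))
stack_rmse = np.sqrt(np.mean((frames.mean(axis=0) - scene) ** 2))

print(single_rmse, stack_rmse)  # stack noise is roughly sigma / sqrt(100)
```

Note that this buys signal-to-noise ratio, not new spatial frequencies: the averaged image still contains only detail within the intrinsic resolution of the imaging process, and the procedure assumes the scene did not change between exposures.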
Known defects in a given imaging situation, such as defocus or aberrations, can sometimes be mitigated in whole or in part by suitable spatial-frequency filtering of even a single image. Such procedures all stay within the diffraction-mandated passband, and do not extend it.
Both features extend over 3 pixels but in different amounts, enabling them to be localized with precision superior to pixel dimension.
The location of a single source can be determined by computing the "center of gravity" (centroid) of the light distribution extending over several adjacent pixels (see figure on the left). Provided that there is enough light, this can be achieved with arbitrary precision, far better than the pixel width of the detecting apparatus and the resolution limit for the decision of whether the source is single or double. This technique, which requires the presupposition that all the light comes from a single source, is at the basis of what has become known as super-resolution microscopy, e.g. stochastic optical reconstruction microscopy (STORM), where fluorescent probes attached to molecules give nanoscale distance information. It is also the mechanism underlying visual hyperacuity.[11]
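Centroiding can be sketched in a few lines. In this toy model (the pixel grid, spot width, and true position are made-up illustration values), a single source produces a spread of light over several pixels, yet its position is recovered to a small fraction of a pixel:

```python
import numpy as np

# Sub-pixel localization of a single source by centroiding, as used in
# STORM-style localization microscopy (idealized, noiseless model).
true_pos = 12.3                  # true source position, in pixel units
pixels = np.arange(25)
# Light spread over several adjacent pixels (Gaussian-shaped spot).
spot = np.exp(-0.5 * ((pixels - true_pos) / 1.5) ** 2)

# Center of gravity of the recorded light distribution.
centroid = np.sum(pixels * spot) / np.sum(spot)
print(centroid)  # recovers ~12.3, far finer than the 1-pixel grid
```

In practice photon shot noise, background, and pixelation set the achievable precision, which scales roughly with the spot width divided by the square root of the number of detected photons; the single-source presupposition remains essential.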
Bayesian induction beyond traditional diffraction limit
Some object features, though beyond the diffraction limit, may be known to be associated with other object features that are within the limits and hence contained in the image. Then conclusions can be drawn, using statistical methods, from the available image data about the presence of the full object.[12] The classical example is Toraldo di Francia's proposition[13] of judging whether an image is that of a single or double star by determining whether its width exceeds the spread from a single star. This can be achieved at separations well below the classical resolution bounds, but it requires the prior limitation of the answer to the choice "single or double?"
The approach can take the form of extrapolating the image in the frequency domain, by assuming that the object is an analytic function, and that we can exactly know the function values in some interval. This method is severely limited by the ever-present noise in digital imaging systems, but it can work for radar, astronomy, microscopy or magnetic resonance imaging.[14] More recently, a fast single-image super-resolution algorithm based on a closed-form solution to the underlying optimization problem has been proposed and demonstrated to accelerate most of the existing Bayesian super-resolution methods significantly.[15]
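One classical form of such frequency-domain extrapolation is Gerchberg-style alternating projection: a band-limited measurement of an object with known finite support is refined by repeatedly enforcing the support constraint in space and the measured data in the passband. The sketch below uses a synthetic noiseless object and arbitrary support and band values; with noise, as the text notes, the iteration must be stopped early or regularized.

```python
import numpy as np

# Gerchberg-style band extrapolation (noiseless, 1-D illustration).
n = 256
x = np.arange(n)
support = (x > 100) & (x < 156)        # object support, assumed known
obj = np.zeros(n)
obj[support] = 1.0                     # a simple top-hat object

cutoff = 10
freqs = np.abs(np.fft.fftfreq(n, d=1.0 / n))
band = freqs <= cutoff                 # the measured (in-band) frequencies
measured = np.fft.fft(obj) * band

est = np.fft.ifft(measured).real       # start from the band-limited image
for _ in range(200):
    est = est * support                # enforce the known support
    spec = np.fft.fft(est)
    spec[band] = measured[band]        # restore the measured in-band data
    est = np.fft.ifft(spec).real

initial = np.fft.ifft(measured).real
err0 = np.linalg.norm(initial - obj)
err1 = np.linalg.norm(est - obj)
print(err0, err1)  # the iteration reduces the reconstruction error
```

Each step is a projection onto a convex constraint set, so the error to the true object is non-increasing; the out-of-band spectrum is gradually "extrapolated" from the analytic continuation implied by the finite support.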
Geometrical SR reconstruction algorithms are possible if and only if the input low resolution images have been under-sampled and therefore contain aliasing. Because of this aliasing, the high-frequency content of the desired reconstruction image is embedded in the low-frequency content of each of the observed images. Given a sufficient number of observation images, and if the set of observations vary in their phase (i.e. if the images of the scene are shifted by a sub-pixel amount), then the phase information can be used to separate the aliased high-frequency content from the true low-frequency content, and the full-resolution image can be accurately reconstructed.[16]
In practice, this frequency-based approach is not used for reconstruction, but even in the case of spatial approaches (e.g. shift-add fusion[17]), the presence of aliasing is still a necessary condition for SR reconstruction.
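An idealized one-dimensional version of shift-add fusion makes the role of sub-pixel shifts concrete. In this sketch (no blur or noise is modeled, and the shift values are exact by construction), each of four low-resolution frames is aliased on its own, but because each samples the scene at a different sub-pixel phase, interleaving them recovers the high-resolution signal:

```python
import numpy as np

factor = 4
hr = np.sin(np.linspace(0, 12 * np.pi, 256))   # "true" high-res signal

# Each frame samples the scene at a different sub-pixel phase (shifts of
# a quarter of a low-res pixel), so each frame alone is under-sampled.
frames = [hr[shift::factor] for shift in range(factor)]

# Shift-add fusion: route every low-res sample back to its known
# high-res grid position.
fused = np.empty_like(hr)
for shift, frame in enumerate(frames):
    fused[shift::factor] = frame

print(np.max(np.abs(fused - hr)))  # exact recovery in this noiseless model
```

Real multi-frame SR must additionally estimate the (generally non-integer, unknown) shifts, undo sensor blur, and handle noise, which is why practical methods pose reconstruction as a regularized inverse problem rather than pure interleaving.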
There are both single-frame and multiple-frame variants of SR. Multiple-frame SR uses the sub-pixel shifts between multiple low-resolution images of the same scene. It creates an improved-resolution image by fusing information from all the low-resolution images, and the created higher-resolution images are better descriptions of the scene. Single-frame SR methods attempt to magnify the image without producing blur. These methods use other parts of the low-resolution images, or other unrelated images, to guess what the high-resolution image should look like. Algorithms can also be divided by their domain: frequency or space domain. Originally, super-resolution methods worked well only on grayscale images,[18] but researchers have found methods to adapt them to color camera images.[17] Recently, the use of super-resolution for 3D data has also been shown.[19]
Gustafsson, M., 2000. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microscopy 198, 82–87.
Cox, I. J.; Sheppard, C. J. R., 1986. Information capacity and resolution in an optical system. J. Opt. Soc. Am. A 3, 1152–1158.
Johnson, Justin; Alahi, Alexandre; Fei-Fei, Li (2016-03-26). "Perceptual Losses for Real-Time Style Transfer and Super-Resolution". arXiv:1603.08155 [cs.CV].
Farsiu, S.; Robinson, D.; Elad, M.; Milanfar, P. (August 2004). "Advances and Challenges in Super-Resolution". International Journal of Imaging Systems and Technology. 14 (2): 47–57. doi:10.1002/ima.20007. S2CID 12351561.
Chan, Wai-San; Lam, Edmund; Ng, Michael K.; Mak, Giuseppe Y. (September 2007). "Super-resolution reconstruction in a computational compound-eye imaging system". Multidimensional Systems and Signal Processing. 18 (2–3): 83–101. Bibcode:2007MSySP..18...83C. doi:10.1007/s11045-007-0022-3. S2CID 16452552.
Berliner, L.; Buffa, A. (2011). "Super-resolution variable-dose imaging in digital radiography: quality and dose reduction with a fluoroscopic flat-panel detector". Int J Comput Assist Radiol Surg. 6 (5): 663–673. doi:10.1007/s11548-011-0545-9. PMID 21298404.