Disclosure of Invention
Aiming at the defects of the existing super-resolution technology, the invention provides a super-resolution microscopic imaging method and device based on active time modulation frequency mixing excitation irradiation.
According to the invention, spatial light modulation technology is adopted to modulate each element of the pixel matrix at a different frequency; the image information collected by the CCD detector through an inverted microscopic imaging system is then analyzed with a model established by an algorithm to obtain the sample information concealed in the image information, thereby realizing super-resolution imaging. The method comprises the following specific steps:
Assuming that a modulation frequency v(x, y) is set at position (x, y) of the excitation light source array, the molecular excitation spectrum at position $(x_0, y_0)$ is:

$$\sum I(x - x_0,\, y - y_0)\cdot v(x, y) \qquad (1)$$

where I is the influence distribution function of the light intensity at position (x, y) on position $(x_0, y_0)$;
(1) For each pixel position of the array light source, the excitation light is modulated at a low frequency in the following manner:
$$I_{\mathrm{Laser\_Power}}(x_{\mathrm{Laser}}, y_{\mathrm{Laser}}, v) = I_{\mathrm{Laser\_Power}}(x_{\mathrm{Laser}}, y_{\mathrm{Laser}})\cdot\sin\bigl(2\pi\, v(x_{\mathrm{Laser}}, y_{\mathrm{Laser}})\, t\bigr) \qquad (2)$$

wherein $I_{\mathrm{Laser\_Power}}(x_{\mathrm{Laser}}, y_{\mathrm{Laser}})$ is the laser intensity at position $(x_{\mathrm{Laser}}, y_{\mathrm{Laser}})$, and $v(x_{\mathrm{Laser}}, y_{\mathrm{Laser}})$ is the modulation frequency by which the laser amplitude is modulated in time according to a sinusoidal function.
The sample at a single pixel $(x_0, y_0)$ is influenced by the different low-frequency intensity-modulated laser beams around it, and its emitted light intensity is as follows:
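A plausible form of this emission-intensity expression, assuming it combines the symbols defined in the following sentence (the label (3) is an assumption), is:

$$I_{\mathrm{emission}}(x_0, y_0) = P(x_0, y_0)\cdot\sum_{(x_{\mathrm{Laser}},\, y_{\mathrm{Laser}})} I_{\mathrm{Laser\_Power}}(x_{\mathrm{Laser}} - x_0,\; y_{\mathrm{Laser}} - y_0,\; v) \qquad (3)$$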
wherein $I_{\mathrm{Laser\_Power}}(x_{\mathrm{Laser}} - x_0,\, y_{\mathrm{Laser}} - y_0)$ is the light-intensity contribution function of the laser excitation position $(x_{\mathrm{Laser}}, y_{\mathrm{Laser}})$ on position $(x_0, y_0)$, and $P(x_0, y_0)$ is the intensity amplitude of the sample's response to the light.
Finally, through diffraction, the light intensity of any pixel point (x, y) is influenced by molecules in the surrounding diffraction range.
(2) A video is recorded over a period of time under the pixel-array modulated excitation; then, by applying a temporal Fourier transform to each pixel, the two-dimensional image data (x, y) is expanded into a three- or four-dimensional video image $(x, y, \omega_x, \omega_y)$, where $\omega_x$ and $\omega_y$ are the frequencies in the x and y directions respectively; it can be seen that the data comprise not only the position information (x, y) but also the two-dimensional frequency information $(\omega_x, \omega_y)$;
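A minimal sketch of this per-pixel temporal Fourier transform; the video array layout and the frame-rate parameter are illustrative assumptions, not values prescribed by the invention:

```python
import numpy as np

def per_pixel_spectrum(video, fps):
    """Temporal Fourier transform of every pixel of a recorded video.

    video: array of shape (H, W, T), the image time series from the CCD
    fps:   frame rate in Hz
    Returns the one-sided amplitude spectrum of shape (H, W, T // 2 + 1)
    and the frequency axis, so each pixel now carries position (x, y)
    plus its frequency content."""
    spectrum = np.abs(np.fft.rfft(video, axis=-1))
    freqs = np.fft.rfftfreq(video.shape[-1], d=1.0 / fps)
    return spectrum, freqs
```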
(3) Finally, through Fourier transformation, the pixel information corresponding to each modulation frequency is extracted and an equation model is constructed, thereby resolving the specific (frequency) information of the super-resolution image, concealed by diffraction, at each single pixel.
In step (3), the process of constructing the equation model and resolving the specific (frequency) information of the super-resolution image concealed by diffraction at a single pixel is as follows:
the method comprises the following steps: the following assumptions were made from the actual optical system model:
(1) the change contributed by the lens to the phase factor is assumed to be zero, i.e., the lens is regarded as a thin lens;
(2) since each light source has a different frequency, the imaging is treated as that of an incoherent source; the imaging process is equivalent to an optical transfer function, and the diffraction is caused by the entrance pupil or exit pupil;
(3) the loss of high-frequency information between the sample surface and the CCD receiving surface is not considered, and the spread matrix is approximated by a Gaussian-matrix-type function;
(4) it is assumed that enough images are acquired so that the time function of each pixel can be completely recovered;
step two: according to the assumption that the actual optical system is simulated, the specific frequency analysis algorithm process is as follows:
Each pixel is written in the form of a pixel matrix; a value at the corresponding location indicates that the corresponding frequency is spread over that pixel. Initially, each pixel is assumed to carry only its own frequency; through one diffusion convolution calculation, the different frequency components within the diffusion range are mixed on the matrix, each frequency component carrying a weight factor according to its position in the diffusion matrix. The value at each pixel is as follows:
where (2a + 1) is the dimension of the diffusion matrix, which is about 1/3 of the dimension of the pixel matrix (if the pixel matrix is set to 100, the diffusion matrix is 33 × 33, i.e., a = 16); the size of the diffusion (point spread) matrix D is (2a + 1) × (2a + 1), and (x, y) are the pixel positions;
After the first diffusion by the point spread function, the pixels at the different positions carry the sample information of their pixel positions. Once the sample information is attached, the diffusion convolution calculation is performed on all pixels again; for each pixel, the contributions of the surrounding pixels are superposed according to the diffusion matrix to obtain the actual frequency value on that pixel. After the two diffusion convolution calculations, all elements of the resulting pixels are added to obtain the final expression of each pixel containing the sample information, as sketched below.
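A simplified sketch of the two diffusion-convolution passes at a single time instant, assuming scipy is available; the 9 × 9 size, the random phase snapshot and the uniform 3 × 3 diffusion matrix D are illustrative stand-ins, not the matrices of the embodiment:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
pixels = np.sin(rng.uniform(0, 2 * np.pi, (9, 9)))   # instantaneous value of each modulated source pixel
sample = rng.random((9, 9))                           # sample information at each pixel position
D = np.ones((3, 3)) / 9.0                             # stand-in diffusion (point spread) matrix

spread = convolve2d(pixels, D, mode="same")    # first diffusion: frequencies mix within the range of D
carried = spread * sample                      # spread components pick up the local sample information
mixed = convolve2d(carried, D, mode="same")    # second diffusion: surrounding contributions superposed
total = mixed.sum()                            # all pixel values added, as described below
```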
Finally the values at all pixel positions are added:
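A plausible form of this summation (formula (5)), assuming it is the plain sum of the post-diffusion, sample-carrying pixel values, here written I(x, y, t), is:

$$S(t) = \sum_{x=1}^{n}\sum_{y=1}^{n} I(x, y, t) \qquad (5)$$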
where n × n is the image matrix dimension.
Then a Fourier transform is carried out according to the designed frequency values, and an integral over a suitable range is taken for a given frequency, i.e., the following processing is carried out:
First, the Fourier transform (formula (6)) is carried out, and then the sifting property of the impulse function is used to obtain, from the integrated frequency band, the intensity information value carried by the specific frequency (formula (7)); wherein $\omega_{n,m}$ is the frequency at matrix position (n, m), which becomes a delta impulse function through the Fourier transform of the cosine function; by the sifting property of the impulse function, the amplitude information contained in one frequency band is obtained through integration; a and b are respectively the minimum and maximum values of the modulation frequency band.
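A minimal sketch of this extraction for a single pixel's time trace, assuming a numerical band integration around the target frequency in place of the analytic delta-function sifting; the frame rate, bandwidth and test frequency are illustrative assumptions:

```python
import numpy as np

def amplitude_at_frequency(trace, fps, f0, half_band=0.5):
    """Formula (6): temporal Fourier transform of one pixel's time trace;
    formula (7): integrate the spectrum over the narrow band
    [f0 - half_band, f0 + half_band] to pick out the amplitude carried by f0."""
    spectrum = np.abs(np.fft.rfft(trace)) * 2.0 / trace.size   # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fps)
    band = (freqs >= f0 - half_band) & (freqs <= f0 + half_band)
    return spectrum[band].sum()

# e.g. a trace modulated at 12.5 Hz, sampled at 200 Hz for 1024 frames
t = np.arange(1024) / 200.0
print(amplitude_at_frequency(0.8 * np.sin(2 * np.pi * 12.5 * t), 200.0, 12.5))  # ~0.8
```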
Each frequency value corresponds to an equation (a diffraction superposition) that contains sample information at multiple locations. By processing formula (5) with formulas (6) and (7) (the frequency analysis algorithm), the formula carrying the image-information variables corresponding to each modulation frequency is obtained; since the dimension of the image matrix is n × n, n × n formulas containing unknowns are finally obtained. The image time series collected by the CCD detector is analyzed for all pixel information according to formulas (6) and (7) to obtain the actual value corresponding to each modulation frequency; each actual value, paired with the formula obtained for the corresponding modulation frequency in the previous step, forms one equation, and the n × n equations form a linear equation system. Solving this linear system gives the final result and restores the sample information values, as sketched below.
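A sketch of this final step under the same illustrative assumptions as the earlier sketches (a 9 × 9 array, random modulation frequencies, a 3 × 3 stand-in diffusion matrix, and a summed detector signal): the response to a unit sample at each pixel provides one column of coefficients, the amplitude measured at each modulation frequency provides the right-hand side, and the resulting linear system is solved for the sample values. None of the numeric values are prescribed by the invention.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
n, fps, n_frames = 9, 200.0, 512
t = np.arange(n_frames) / fps
v = rng.uniform(1.0, 50.0, (n, n))        # modulation frequency of each source pixel, Hz
D = np.ones((3, 3)) / 9.0                 # stand-in diffusion (point spread) matrix
sample_true = rng.random((n, n))          # ground-truth sample (unknown in a real measurement)

def detector_sum(sample):
    """Summed detector signal of the illustrative two-pass diffusion model."""
    out = np.zeros(t.size)
    for k, tk in enumerate(t):
        exc = convolve2d(np.sin(2 * np.pi * v * tk), D, mode="same")   # first diffusion
        out[k] = convolve2d(exc * sample, D, mode="same").sum()        # carry sample, second diffusion
    return out

def amplitudes(signal):
    """Project the signal onto sin(2*pi*v_k*t) for every modulation frequency:
    a discrete analogue of Fourier transforming and integrating around v_k."""
    return np.array([2.0 / t.size * np.dot(signal, np.sin(2 * np.pi * f * t))
                     for f in v.ravel()])

b = amplitudes(detector_sum(sample_true))             # measured (here simulated) frequency values
A = np.column_stack([amplitudes(detector_sum(np.eye(n * n)[j].reshape(n, n)))
                     for j in range(n * n)])          # model coefficients, one column per unknown pixel
sample_recovered = np.linalg.lstsq(A, b, rcond=None)[0].reshape(n, n)   # restored sample values
```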
Based on the above method, the invention also provides a super-resolution microscopic imaging device based on active time modulation frequency mixing excitation irradiation. The super-resolution microscopic imaging device comprises: a laser light source 1, a first lens 2, a second lens 3, a first reflector 4, a second reflector 5, a third lens 6, a fourth lens 7, a third reflector 8, a first linear polarizer 9, a half-wave plate 10, a spatial light modulator 11, a second linear polarizer 12, a fifth lens 13, a half-mirror 14, an objective lens 15, an objective table 16, a fourth reflector 17 and a CCD detector 18; the components are connected in turn by optical paths to form the super-resolution microscopic imaging device; wherein:
Laser output from the laser light source 1 sequentially passes through the first lens 2 and the second lens 3 for beam expansion, and the expanded light passes through the first reflector 4 and the second reflector 5 into the third lens 6 and the fourth lens 7 for a second beam expansion; the twice-expanded laser enters the first linear polarizer 9 via the third reflector 8, and the linearly polarized light obtained through the first linear polarizer is rotated by the half-wave plate 10 until it is parallel to the long edge of the spatial light modulator; the incident light can then be modulated by the spatial light modulator 11, the emergent light enters the fifth lens 13 after being adjusted by the second linear polarizer 12, and the parallel light is focused on the back focal plane of the lens; part of the light enters the objective lens 15 through the half-mirror 14 and irradiates the sample on the stage 16, and the light reflected from the sample passes through the half-mirror 14 and the fourth mirror 17 into the CCD detector 18. A continuous time series of raw images, in which each pixel contains different frequency information together with sample information, is captured by the CCD detector 18. The generated raw image sequence is processed by the frequency analysis algorithm to extract the frequency information of the image set. The raw image set also carries intensity data of the detected radiation from the sample, and the frequency analysis algorithm converts the temporal variation of this intensity information, through the Fourier transform, into different frequencies carrying different amplitude-intensity information.
In the invention, after the laser output from the laser light source is expanded twice, linearly polarized light is obtained through a linear polarizer, rotated to a specific orientation by a half-wave plate and then made incident on the spatial light modulator; the emergent modulated light passes through a linear polarizer and is focused by a lens onto the back focal plane of the objective lens; part of the light then enters the objective lens through a half-mirror and irradiates the sample on the objective table, and the reflected light carrying the sample information enters the CCD detector via the half-mirror and a reflecting mirror. The image information collected by the CCD detector is analyzed by the computer-constructed algorithm to resolve the frequency information at each pixel point, and the sample information is finally restored.
According to the invention, the sample information can be obtained simply by analyzing, with the model constructed by the algorithm, the frequency information contained in each pixel point; the sample does not need to be stained, the device is easy to build, the operation is simple and the cost is low, so the method can be applied to various optical super-resolution studies in cell biological imaging. Its greatest advantage is that the sample information can be restored by algorithmic analysis alone, thereby realizing super-resolution microscopic imaging.
Detailed Description
As shown in fig. 1, the structural schematic diagram of the super-resolution microscopic imaging device based on active time modulation frequency mixing excitation irradiation includes: a laser light source 1, a first lens 2, a second lens 3, a first reflector 4, a second reflector 5, a third lens 6, a fourth lens 7, a third reflector 8, a first linear polarizer 9, a half-wave plate 10, a spatial light modulator 11, a second linear polarizer 12, a fifth lens 13, a half-mirror 14, an objective lens 15, an objective table 16, a fourth reflector 17 and a CCD detector 18;
the laser beam output from thelaser light source 1 is expanded by thefirst lens 2 and thesecond lens 3, and the expanded light enters thethird lens 6 and thefourth lens 7 via the first reflectingmirror 4 and the second reflectingmirror 5 to be expanded for the second time. The twice expanded laser enters a firstlinear polarizer 9 through athird reflector 8, linear polarized light obtained through the firstlinear polarizer 9 rotates through a half-wave plate 10 until the linear polarized light is parallel to the long side of a spatial light modulator 11, at the moment, incident light can be modulated by the spatial light modulator 11, emergent light enters afifth lens 13 after being adjusted by a secondlinear polarizer 12, parallel light is focused on the back focal plane of the lens, partial light enters anobjective lens 15 through a half-mirror, the partial light irradiates on a sample of an objective table 16, and light reflected on the sample enters aCCD detector 18 through the half-mirror 14 and afourth reflector 17.
When the CCD detector receives the images, a time sequence of images is acquired, and the two-dimensional image data (x, y) is expanded into a three- or four-dimensional video image $(x, y, \omega_x, \omega_y)$. The information corresponding to all pixel positions is Fourier transformed, and the different frequencies are integrated to obtain the information value corresponding to each frequency; each frequency value corresponds to a formula (diffraction superposition) containing sample information at multiple positions, and the system is finally solved through the model established by the corresponding algorithm.
In the following embodiment, a solid-state laser with a center wavelength of 488 nm is used at room temperature. The laser output from the laser light source 1 is expanded via the first lens 2 and the second lens 3, and the expanded light enters the third lens 6 and the fourth lens 7 via the first reflecting mirror 4 and the second reflecting mirror 5 to be expanded a second time; the focal lengths of the second and fourth lenses are twice those of the first and third lenses, so each expansion doubles the light spot. The twice-expanded laser enters the first linear polarizer 9 through the third reflector 8; the linearly polarized light obtained through the first linear polarizer 9 is rotated by the half-wave plate 10 until it is parallel to the long edge of the spatial light modulator 11; the incident light can then be modulated by the spatial light modulator 11; the emergent light enters the fifth lens 13 after being adjusted by the second linear polarizer 12, the parallel light is focused on the back focal plane of the lens, and part of the light enters the objective lens 15 through the half-mirror and irradiates the sample on the objective table 16. The sample is chosen to be quantum dots with a diameter of 15-20 nanometers, which can be excited by the 488 nm laser. 8 μL of the 8 μM quantum dot solution was pipetted onto a 0.17 mm glass slide and placed on the sample stage. The light reflected from the sample passes through the half-mirror 14 and the fourth mirror 17 into the CCD detector 18.
When the CCD detector receives the images, a time sequence of images is acquired, and the two-dimensional image data (x, y) is expanded into a three- or four-dimensional video image $(x, y, \omega_x, \omega_y)$. The information corresponding to all pixel positions is Fourier transformed, and the different frequencies are integrated to obtain the information value corresponding to each frequency; each frequency value corresponds to a formula (diffraction superposition) containing sample information at multiple positions, which is finally solved through the model established by the corresponding algorithm. The specific algorithm is to calculate the point spread function and set up an LED array matrix (the wavelength is not limited), with a different modulation frequency for each LED; in the actual operation a random 9 × 9 frequency matrix is set up as follows:
the PSF point spread matrix is set as follows (and can also be set to gaussian):
when the algorithm simulation light path is verified, an image is input as shown in fig. 2, and then the result of superposition of image information at various frequencies at different times can be seen after two times of diffusion, as shown in fig. 4. The 9 x 9 optical path algorithm verifies that the calculations are fast, and the individual stage time is shown in table 1 (unit: seconds).
TABLE 1
The above embodiments are used for explaining and understanding the technical solutions of the present invention, and do not limit the ideas and technical solutions of the present invention.