Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the embodiments of the present invention, it should be noted that the terms "first" and "second" are used for the sake of clarity in describing the numbering of the components of the product and do not represent any substantial difference, unless explicitly stated or limited otherwise. The directions of "up", "down", "left" and "right" are all based on the directions shown in the attached drawings. Specific meanings of the above terms in the embodiments of the present invention can be understood by those of ordinary skill in the art according to specific situations.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as meaning a fixed connection, a removable connection, or an integral connection; a mechanical or electrical connection; a direct connection or an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
Fig. 1 is a flowchart of an embodiment of an endoscopic image processing method according to the present invention; the method shown in Fig. 1 includes the following steps:
Step S110, identifying a specific region in a target image, the target image being an image arbitrarily selected from a series of images of an object under inspection taken under illumination by an illumination unit.
Step S120, calculating the offset range that the specific region may produce in the other images of the series.
Step S130, aligning the series of images according to the specific region within the offset range.
The illumination unit can use a xenon lamp as its light source, or a broad-spectrum LED, a tungsten lamp, or another broad-spectrum source. The light from the source is filtered into the various illumination lights in sequence by a filter set, a filter wheel, and a filter control mechanism. Specifically, narrow-band filters with different central wavelengths, broadband filters with different wavelength ranges, or no filter at all are selected as required to form the filter set; the filters are mounted on the filter wheel in a certain order, for example with the central wavelength increasing from small to large; the light transmittance of the filters may differ so as to adjust the intensity of each illumination light; and the filter control mechanism rotates the filter wheel to switch filters, so that each required illumination light can be filtered out. FIG. 2 is a spectrum diagram of a combination of multiple types of illumination light emitted by the illumination unit in the embodiment shown in FIG. 1. It includes 25 types of narrow-band light 201-225, each with a single central wavelength, continuously covering the visible range of 400-760 nm; the filters are designed with different transmittances so that the illumination intensities equalize, as far as possible, the brightness of the series of images captured by the camera module in the endoscope body.
In addition, the light source of the illumination unit can be implemented with a plurality of LEDs corresponding to the plurality of illumination lights; in this embodiment the filters can be omitted, and each LED in the illumination light path can be lit in sequence under electronic control, or each LED can be switched into the illumination light path in sequence by mechanical means. When LEDs are used as the light source, they can also be mounted at the distal end of the endoscope body, but the distal end space of the endoscope body is narrow and the number of LEDs that can be mounted there is limited.
It should be noted that the multiple types of illumination light are not limited to the 25 narrow-band lights shown in fig. 2. Each type of illumination light may have more than one central wavelength and bandwidth, i.e., each type of illumination light may be a narrow-band light having one or more central wavelengths and/or any combination of broadband lights having one or more wavelength ranges, in particular illumination light to which a specific lesion may be sensitive or which may highlight some feature of the object under examination. In the capture of a series of images, each illumination light may also appear multiple times at different intensities. Fig. 3 is a schematic spectral diagram of another combination of multiple illumination lights emitted by the illumination unit, including a narrow-band light 201 with one central wavelength, a narrow-band light 226 with two central wavelengths, a broadband light 227 with one wavelength range, and a broadband light 228 with two wavelength ranges.
Fig. 4 is a diagram of the series of images in fig. 1 corresponding to the plurality of illumination lights shown in fig. 2. The illumination light 201 is guided by the endoscope body onto the inspected object and the camera module takes a corresponding image 301, completing one illumination-imaging cycle; the illumination unit switches to the next illumination light 202 and the camera module takes another image 302 corresponding to it, completing another cycle; the illumination unit then switches to the illumination light 203 and the camera module takes the corresponding image 303, and so on. In this way the illumination unit switches the illumination light in sequence and the camera module captures one corresponding image each time, until the illumination unit switches to the illumination light 225 and the camera module captures the corresponding image 325, completing the illumination and imaging of the series of images, which form a group of images 330. Let Δt be the time interval between two adjacent illumination-imaging cycles. It should be noted that the endoscope image processing method provided by the present invention is also applicable to a plurality of images captured under illumination of a single wavelength.
According to the time order in which the images are taken, if the target image is the first image taken, the offset range of the specific region in the second image can be calculated first, the second image is aligned with the target image within the calculated offset range, and the accurate corresponding region of the specific region in the second image is obtained; then the offset range of the third image relative to that corresponding region is calculated and the third image is aligned to the target image within that offset range; this is repeated until all images are aligned to the target image. Alternatively, the offset range of every image other than the target image relative to the target image may be calculated first, and the images then aligned to the target image one by one within their offset ranges. Of course, if the target image is the i-th image taken, the offset ranges are calculated and the images aligned in the same way for the (i+1)-th image onward, while the offset ranges of the preceding i-1 images are calculated by proceeding in the reverse order.
For convenience of description, the following description is made with reference to the series of images in fig. 4, taking the first image as the target image; the case of taking another image as the target image is not described again.
FIG. 5 is a schematic diagram of an embodiment of identifying a specific region in a target image. A certain area is manually designated as the specific region in the target image. Specifically, an area 403 is outlined in the first image 301 of the group of images 330 by means of a mouse 401, and the area 403 is set as the specific region, where the region 403 contains a polyp 402 of interest. The image in which the specific region is designated serves as the target image for the subsequent alignment. Alternatively, the specific region may be defined in any other image of the group of images 330. Besides the mouse 401, the specific region may be manually designated using any other input device such as a touch panel, keyboard, voice input device, camera, or scanner; the user may designate the specific region by clicking, sketching, entering coordinates, entering a region, selecting a preset region, moving a selection box of a specific shape, and the like.
When the user designates the region through the input device, only a very small region may be designated; for example, as shown in fig. 6, the region 409 manually designated in the target image has only one pixel. A single pixel is very susceptible to noise and subsequent alignment cannot be performed on it alone, so in this case the region 409 is taken as the center and enlarged to the 5 × 5 pixel area 410, which becomes the final specific region. Alternatively, it may be enlarged to 9 × 9, 13 × 13, or another area slightly larger than the original.
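The enlargement of a single designated pixel into a small square region can be sketched as follows; this is a minimal illustration in Python, assuming the image dimensions are known, and the function name expand_point_region is hypothetical rather than part of the described system.

```python
def expand_point_region(x, y, img_w, img_h, size=5):
    """Enlarge a single designated pixel (x, y) into a size x size square
    clipped to the image bounds, as in the 5 x 5 enlargement of region 409."""
    half = size // 2
    x0, x1 = max(0, x - half), min(img_w, x + half + 1)
    y0, y1 = max(0, y - half), min(img_h, y + half + 1)
    return x0, y0, x1, y1  # corners of the final specific region
```

For a point near the image border, e.g. expand_point_region(0, 0, 1280, 720), the clipping keeps the region inside the image even though it is then smaller than 5 × 5.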
Alternatively, other ways of identifying a specific region in the target image may be used. FIG. 7 is a schematic diagram of another embodiment of identifying a specific region in a target image without using an input device. Specifically, a square area 404 with a side length of 100 pixels is preset at the center of the real-time image and indicated by a dashed box; before the group of images 330 corresponding to the multiple illumination lights is taken, the orientation and position of the tip of the endoscope body are adjusted so that the square area is aligned with the region of interest, i.e., the polyp 402, and the square area 404 is filled by the polyp 402; the images are then taken and the square area 404 in the first image 301 is used as the specific region. The polyp 402 may also underfill the square area 404, as long as it falls within it. In addition, an area of any shape and size may be preset at any position of the real-time image, and that area in any of the images may be used as the specific region.
Fig. 8 is a flowchart of another embodiment of identifying a specific region in a target image, and fig. 9 shows the processing result corresponding to the flowchart of fig. 8, again without an input device. Step S410, edge detection is performed on the first image 301 of the group of captured images 330 using a Canny, Sobel, Laplacian, Roberts, or Prewitt operator; step S420, broken edges are connected based on a closed-contour extraction method that assumes the edge direction is preserved at breakpoints, a closed contour is obtained and extracted, and the contour of the polyp 402 is thereby extracted; step S430, the extracted polyp contour is then expanded by a certain proportion or number of pixels to obtain the region 408, and the region 408 is selected as the specific region.
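A rough sketch of this automatic selection in Python with OpenCV is given below. The breakpoint-connection method of step S420 is specific to this embodiment and is approximated here by a simple morphological closing, so the closing is only an assumed stand-in for the described closed-contour extraction; the function name and the contour-selection rule (largest contour) are likewise illustrative.

```python
import cv2
import numpy as np

def auto_select_region(image, expand_ratio=1.2):
    """Sketch of Fig. 8: detect edges, take the largest closed contour
    (assumed to be the polyp outline), and expand its bounding box."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                          # step S410
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,          # bridge small edge breaks
                             np.ones((5, 5), np.uint8))       # (stand-in for step S420)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(contour)                    # step S430: expand by ratio
    cx, cy = x + w / 2, y + h / 2
    w, h = w * expand_ratio, h * expand_ratio
    return int(cx - w / 2), int(cy - h / 2), int(w), int(h)   # region 408 as a rectangle
```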
In addition, two or more of user-input designation, preset-position designation, and automatic-recognition designation can be combined to designate the region of interest as the specific region more quickly and accurately. In one embodiment, a plurality of areas are first selected by automatic recognition, and the user then designates one of these areas through an input device; the designation is thereby completed and that area is selected as the specific region.
The specific regions 403, 404, and 408 illustrated in figs. 5, 7, and 9 are generally large enough that they need not be enlarged using the method illustrated in fig. 6.
The specific region is usually a region the user is paying attention to, and in the embodiments shown in figs. 5, 6, 7, and 9, only one region is designated as the specific region; however, the user may need to designate another region as a reference region for the region of interest. For example, when a user wants to acquire the spectrum curve of a lesion and compare it with the spectrum curve of a normal region, the alignment quality of the region of interest should be ensured first, and the alignment quality of the reference region considered second (it may be worse than that of the region of interest). Fig. 10 is a schematic diagram of identifying a plurality of specific regions in a target image: the specific region 501 is the region of interest and contains the polyp 402; the specific region 502 is a reference region in which no lesion is found. To ensure the alignment quality of the region of interest 501 while taking the reference region 502 into account, different alignment weights are applied to the region 501 and the region 502 for the subsequent alignment. In one embodiment of applying weights, the user enters the degree of attention of each specific region according to a predetermined rule through the input device, and alignment weights are applied in proportion to the degrees of attention. Since the specific region serving as a reference region is often small, the area of each specific region can also be calculated and alignment weights applied in proportion to the area ratio, which is one scheme for applying alignment weights automatically. When the offset range is calculated, it is calculated for each specific region separately.
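As one possible illustration of the automatic, area-based weighting described above (not the only scheme the embodiment allows), weights proportional to region area could be computed as follows; the function name and the (x, y, w, h) rectangle representation of a region are assumptions.

```python
def area_based_weights(regions):
    """Weight each specific region by its pixel area relative to the largest
    region; regions are (x, y, w, h) rectangles."""
    areas = [w * h for (_, _, w, h) in regions]
    largest = max(areas)
    return [a / largest for a in areas]  # e.g. region 501 -> 1.0, region 502 -> smaller
```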
In the time interval Δt between two adjacent illumination-imaging cycles, the specific region in the target image may shift under the influence of organ peristalsis, shaking or rotation of the endoscope body, and so on, and the offset range it can produce in the image to be aligned needs to be calculated. Of the different methods for identifying specific regions, only the specific region shown in fig. 5 is described in detail below; the other methods are handled in the same way and are not described again.
FIG. 11 is a schematic diagram of one embodiment of calculating the offset range for the specific region shown in FIG. 5. First, the four pixel points 601 with the minimum x coordinate, maximum x coordinate, minimum y coordinate, and maximum y coordinate in the specific region 403 designated in the first image 301 of the group of images 330 are determined; the offset range circles 602 of the four pixel points 601 for the time Δt are read, and the four positions 603 at which the four pixel points 601 can deviate the farthest from the specific region 403 in the x and y directions are determined; a rectangular area 604 with edges parallel to the x and y axes is then determined from the four positions 603, and this rectangular area 604 is the calculated offset range that the specific region 403 can produce in the second image 302. It should be noted that the number of pixel points is not less than three: three pixel points can determine a triangular area, which is then used as the offset range that the specific region 403 can produce in the second image 302; with five, six, or more points, a polygonal area can be determined and used as the offset range that the specific region 403 can produce in the second image 302.
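A compact sketch of this construction, assuming the pre-calibrated offset-circle radii are available through a lookup function (radius_lookup is a hypothetical name), might look like this:

```python
import numpy as np

def offset_range_rect(boundary_pts, radius_lookup):
    """Sketch of Fig. 11: from the boundary pixels with extreme x / y
    coordinates, grow each by its calibrated offset-circle radius and return
    the axis-aligned rectangle 604. radius_lookup(x, y) is assumed to return
    the pre-calibrated radius R of circle 602 at that pixel."""
    pts = np.asarray(boundary_pts, dtype=float)
    extremes = [pts[pts[:, 0].argmin()], pts[pts[:, 0].argmax()],
                pts[pts[:, 1].argmin()], pts[pts[:, 1].argmax()]]
    xs, ys = [], []
    for x, y in extremes:
        r = radius_lookup(x, y)
        xs += [x - r, x + r]
        ys += [y - r, y + r]
    return min(xs), min(ys), max(xs), max(ys)  # offset range in the next image
```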
The offset range circle 602 of a pixel point for the time Δt is obtained by calibration in advance, stored in the storage module, and read from the storage module as needed. Within the offset range 604, the image to be aligned 302 is aligned to the target image 301 according to the specific region 403. After the alignment, the accurate corresponding region of the specific region 403 in the second image 302 is obtained, and from this corresponding region the offset range that the specific region 403 can produce in the third image 303 can be estimated; within that offset range, the image to be aligned 303 is then aligned to the target image 301 according to the specific region 403. Proceeding in this way, the images 302-325 can be aligned to the target image 301 in sequence, each within its corresponding offset range, according to the specific region 403.
FIG. 12 is a schematic diagram of another embodiment of calculating the offset range for the specific region shown in FIG. 5. The offset range circles 602 for the time Δt of every pixel 601 on the boundary of the specific region 403 are read from the storage module, all the offset range circles 602 are superimposed, and a new region 605 is determined from the outer boundary of the superimposed region; this is the calculated offset range that the specific region 403 can produce in the second image 302. A region 606 is also determined from the inner boundary of the superimposed region and can be used as an additional constraint during alignment, i.e., a limit on the inward shift of the pixels 601 on the boundary of the specific region 403. It should also be noted that a plurality of pixel points selected at intervals on the boundary of the specific region 403 may be used instead: their offset range circles 602 for the time Δt are read, and the boundary circumscribing these offset range circles 602 is used as the calculated offset range that the specific region 403 can produce in the second image 302, where the number of pixel points is not less than three, since three offset range circles 602 can uniquely determine a circumscribing circle or an outer boundary formed by proportionally enlarging the boundary of the specific region 403. Of course, the plurality of pixel points may all lie on the boundary of the specific region 403, or some of them may be adjacent to the boundary while the others lie on it.
During image capture, human respiration, heartbeat, gastrointestinal peristalsis, and the like cause movement or deformation of the object under examination; the user operates the endoscope at its proximal end, and although the endoscope body is kept as steady as possible, its tip still shakes or rotates slightly. Together with the barrel distortion of the endoscope, these factors combine to cause a noticeable offset between the images of a continuously captured series. The offset is not unbounded, however, but has an estimable range; by estimating this range and aligning the images within it, the computational cost of alignment can be greatly reduced and its accuracy improved.
Figs. 13 and 14 illustrate how the offset range circle 602 of a pixel point for the time Δt is calibrated. As shown in fig. 13, 15 × 15 pixel points 601 to be calibrated are selected and distributed uniformly over the whole image, and fig. 14 is a schematic diagram of the calibration process of the offset range circle of a pixel point to be calibrated for the time Δt. At a typical observation distance, the first pixel point 601 to be calibrated is aligned with a point-like feature of the object under examination that has an obvious characteristic, such as a small polyp in the stomach, and two images are taken at a time interval not less than Δt. The central position point 701 of the small polyp is found manually in the first image, the central position point 702 of the small polyp is found in the second image, and the coordinate offset of the position point 702 relative to the position point 701 is determined, completing the acquisition of the first set of calibration data for the first pixel point to be calibrated 601. In this way, 100 sets of calibration data are acquired for the first pixel point to be calibrated 601, with the typical observation distance and the point-like feature varied randomly before each set. The 100 sets of calibration data are plotted together, keeping the coordinate offset of each position point 702 relative to its position point 701 while making all the position points 701 coincide; a circle centered on 701 that just encloses all the position points 702 is drawn, giving a circle 703 of radius r. Since there is an error in locating the central position points 701 and 702 of the point-like features, the radius r is multiplied by a margin coefficient, for example 1.2, to obtain a circle of radius R, which is the offset range circle 602 of the pixel point 601 to be calibrated for the time Δt. Each pixel point 601 to be calibrated is calibrated in turn in this way. For an uncalibrated pixel point, inverse-distance-weighted interpolation of the radii of the offset range circles of neighbouring calibrated pixel points, according to the distances to those points, gives the radius of its offset range circle and thus its offset range circle 602.
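The two numerical steps of this calibration, fitting the radius of the offset-range circle from a set of measured offsets and inverse-distance-weighted interpolation for uncalibrated pixels, can be sketched as follows; the function names are illustrative only.

```python
import numpy as np

def calibrate_radius(offsets, margin=1.2):
    """offsets are the (dx, dy) displacements of point 702 relative to point
    701 from the calibration acquisitions; r is the largest offset magnitude
    and R = margin * r is the radius of the offset-range circle 602."""
    r = max(np.hypot(dx, dy) for dx, dy in offsets)
    return margin * r

def interpolate_radius(query_pt, calibrated_pts, calibrated_radii, power=2):
    """Inverse-distance-weighted interpolation of the radius for an
    uncalibrated pixel from neighbouring calibrated pixels."""
    d = np.hypot(*(np.asarray(calibrated_pts, float) - query_pt).T) + 1e-9
    w = 1.0 / d ** power
    return float(np.dot(w, calibrated_radii) / w.sum())
```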
The number of pixel points to be calibrated is not limited to 15 × 15 and can be increased or decreased according to the image resolution; the pixel points to be calibrated can also be distributed over the whole image in other arrangements. The 100 sets of calibration data need not be acquired in one session; they can be accumulated during clinical examinations or extracted from clinical video and image data, and the number of sets is not limited to 100. The margin coefficient can be adjusted reasonably according to the data acquisition conditions and the number of data sets. Calibration can be performed separately for different types of endoscopes and different organs, such as gastroscopes and colonoscopes, or the stomach and colon, to improve calibration accuracy.
It should be noted that, in a manner similar to the above calibration of the offset range circle for the time Δt, offset range circles for a time NΔt (N ≥ 2) can also be calibrated, and the offset range circle for the time NΔt is then used to determine the offset range of the specific region in an image taken N acquisition intervals away. For example, offset range circles for several intervals such as Δt, 2Δt, and 3Δt may be calibrated at the same time; the acquired series of images is divided into images with relatively rich features and images with relatively blurred features, and the feature-rich images are processed first. Each image carries its acquisition time, and according to the time interval between the image and the target image, the offset range circle for the corresponding interval is selected to determine the offset range that the specific region can produce in that image, after which alignment is performed. This is repeated until all the feature-rich images are aligned, and the feature-blurred images are then aligned in turn in the same way.
It should also be noted that in the above calibration the offset range of a pixel point need not be defined by a circle; any other shape may be used, for example a square, giving an offset range square.
In addition, for the method of identifying the specific region in the target image shown in fig. 7, since the specific region is preset, the offset range of the specific region in the other images of the series can be calculated in advance from the calibrated offset ranges for the times NΔt (N ≥ 1), so that after the series of images is captured, step S120 shown in fig. 1 can be omitted and step S130 performed directly.
FIG. 15 is a flowchart of one embodiment of the alignment step shown in FIG. 1, and fig. 16 is a schematic diagram of aligning an image to the specific region shown in fig. 5 within the offset range shown in fig. 11 according to the method of fig. 15. The alignment step specifically includes: step S810, searching for feature points with the Speeded-Up Robust Features (SURF) algorithm in the specific region 403 of the target image 301 and in the offset range 604 of the image to be aligned 302, respectively, the feature point 810 in fig. 16 being one of many such feature points; step S820, matching feature point pairs using the Fast Library for Approximate Nearest Neighbors (FLANN) to obtain a plurality of feature point pairs such as 804-809; step S830, screening the feature point pairs with the Random Sample Consensus (RANSAC) algorithm, for example retaining the feature point pairs 805-808; step S840, using the coordinates of the screened feature point pairs, solving the simultaneous equations for the homography transformation matrix from the image 302 to the image 301 and using it as the alignment transformation matrix, i.e., obtaining the alignment mapping relation, so that the image to be aligned 302 is aligned to the target image 301 according to the specific region 403 within the offset range 604. After the alignment, the accurate corresponding region 821 of the specific region 403 in the second image 302 is obtained, and from the corresponding region 821 the offset range that the specific region 403 can produce in the third image 303 can be estimated; the image to be aligned 303 is then aligned to the target image 301 according to the specific region 403 within that offset range. Proceeding in this way, the images 302-325 can be aligned to the target image 301 in sequence, each within its corresponding offset range, according to the specific region 403.
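A condensed sketch of steps S810-S840 using OpenCV is shown below. It assumes 8-bit binary masks for the specific region and the offset range, adds Lowe's ratio test as a simple screening step between the FLANN matching and RANSAC, and requires an OpenCV contrib build with the non-free modules for SURF (ORB via cv2.ORB_create could be substituted if SURF is unavailable); it is an illustrative sketch, not the exact implementation of the embodiment.

```python
import cv2
import numpy as np

def align_to_target(target, to_align, region_mask, offset_mask):
    """Sketch of steps S810-S840: SURF features restricted by masks, FLANN
    matching, RANSAC screening, and a homography used as the alignment map."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(target, region_mask)      # S810 (specific region)
    kp2, des2 = surf.detectAndCompute(to_align, offset_mask)    # S810 (offset range)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(des2, des1, k=2)                   # S820
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # S830 + S840
    h, w = target.shape[:2]
    return cv2.warpPerspective(to_align, H, (w, h)), H          # aligned image and map
```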
The endoscope image processing method provided by the present invention searches for feature points and performs the subsequent processing only in the specific region and the corresponding offset range; compared with processing the whole image, the amount of computation is greatly reduced, and the constraint of the offset range helps guarantee the alignment quality of the specific region. Alternatively, the alignment transformation unit 803 may use the coordinates of the screened feature point pairs to solve a similarity or affine transformation matrix from the simultaneous equations and use that matrix as the alignment transformation matrix to align the image to be aligned with the target image.
In addition, remote sensing image registration based on a Marr-wavelet-improved SIFT algorithm, or an infrared and visible light image registration algorithm based on saliency and ORB, can be used to align the image to be aligned to the target image according to the specific region within the offset range. In the Marr-wavelet-improved SIFT registration, features of the target image and the image to be aligned are extracted with the Marr wavelet under scale-space theory, the feature points of the two images are coarsely matched using the Euclidean distance, and the coarse result is refined by random sample consensus. In the saliency-and-ORB registration algorithm, a saliency structure map of the image is obtained with an optimized HC-GHS saliency detection algorithm, feature points are detected on the saliency map with the ORB algorithm, robust feature points are screened using a Taylor series, grouped matching is performed according to the feature point directions, and the feature points are finally matched using the Hamming distance. Alternatively, the specific region may be used as an alignment template that is swept across the corresponding offset range, with mutual information as the similarity measure; or a gray-level-based elastic alignment method such as B-spline free-form deformation or the demons algorithm may be used.
Fig. 17 is a flowchart of another embodiment of the alignment step shown in fig. 1. Compared with the embodiment of fig. 15, before the feature points are searched, step S800 is performed: the distortion coefficients stored in the storage module for the specific endoscope model are read, and distortion correction is applied to the specific region and the corresponding offset range to remove the barrel distortion introduced by the camera module of the endoscope. The distortion coefficients can be obtained by calibrating the endoscope in advance with a chessboard calibration board using Zhang's camera calibration method. After the image alignment is completed, step S850 is performed: the alignment mapping relation is converted using the distortion coefficients once more to account for the earlier distortion correction, thereby completing the alignment of the image to be aligned to the target image.
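Step S800 amounts to a standard undistortion with pre-calibrated coefficients. A minimal sketch follows; the camera matrix and distortion coefficients below are made-up example values standing in for the coefficients read from the storage module for a particular endoscope model.

```python
import cv2
import numpy as np

# Hypothetical pre-calibrated intrinsics and distortion coefficients, as would
# be produced by chessboard calibration with cv2.calibrateCamera (Zhang's method).
camera_matrix = np.array([[800., 0., 640.],
                          [0., 800., 360.],
                          [0.,   0.,   1.]])
dist_coeffs = np.array([-0.35, 0.12, 0., 0., 0.])  # strong barrel distortion

def undistort(image):
    """Step S800: remove barrel distortion before the feature search."""
    return cv2.undistort(image, camera_matrix, dist_coeffs)
```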
When a plurality of specific regions are identified, different alignment weights need to be applied to them. Taking the image alignment flow of fig. 15 as an example, the alignment weights are used as follows. When the feature point pairs are screened, the ratio of the number of pairs retained from each specific region is made equal to the ratio of the alignment weights. For example, in fig. 10, if the alignment weights of the specific regions 501 and 502 are 1 and 0.33 respectively and 4 feature point pairs are to be retained, 3 pairs are taken from the specific region 501 and only 1 from the specific region 502. Thus, when the simultaneous equations are solved for the alignment homography transformation matrix, the specific region 501 plays the dominant role and the specific region 502 an auxiliary one: the alignment quality of the region 501 is guaranteed while that of the region 502 is still taken into account. In addition, any stored group of images can be recalled from the storage module, a specific region reassigned, the offset ranges re-estimated, and the alignment performed again.
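One way to apportion the retained feature point pairs among the regions in proportion to the alignment weights is sketched below; the rounding rule is an assumption, chosen so that weights 1 and 0.33 with four pairs in total give the 3 : 1 split described above.

```python
def select_pairs_by_weight(pairs_per_region, weights, total=4):
    """Distribute the number of retained feature point pairs across the
    specific regions in proportion to their alignment weights."""
    wsum = sum(weights)
    counts = [max(1, round(total * w / wsum)) for w in weights]
    return [pairs[:n] for pairs, n in zip(pairs_per_region, counts)]
```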
Fig. 18 is a flowchart of another embodiment of the endoscopic image processing method according to the present invention. Compared with the method shown in fig. 1, a better alignment result can be achieved by performing step S1010, a preliminary image alignment, before the alignment proper, and step S1020, an image alignment adjustment, after the preliminary alignment. Fig. 19 is a schematic diagram of the preliminary alignment of a series of images in the manner of fig. 18. A group of images 1130 is preliminarily aligned within the corresponding offset ranges according to a specific region 1131 using similarity transformations, i.e., the 24 similarity transformation matrices from the images to be aligned 1102 to 1125 (only 1112, 1113, and 1114 are shown in fig. 19) to the target image 1101 are solved and the images 1102 to 1125 are preliminarily aligned to the target image 1101, where the images 1101 to 1125 correspond respectively to the 25 types of illumination light 201 to 225 shown in fig. 2. After the preliminary alignment, the corresponding regions 1132 to 1155 (only 1142, 1143, and 1144 are shown in fig. 19) of the specific region 1131 in the images to be aligned 1102 to 1125 are obtained.
Because human tissue (including diseased tissue) is insensitive to light of certain wavelengths, images taken under illumination to which it is insensitive may be dark or lack distinctive features, such as the image 1113 in fig. 19, which is dark and whose feature information is inconspicuous. The information in such images is nevertheless needed for calculating the spectral curves, and the accuracy of their alignment affects the accuracy of the subsequent spectral pathology analysis. Even with the constraint of the offset range, a large deviation may still occur in the preliminary alignment because the feature information is weak; for example, the corresponding region 1143 of the specific region 1131 in the image 1113 shows an obvious deviation. However, the image shift caused by the various factors during image capture is continuous and should not change too abruptly. An offset trajectory for the group of images 1130 can therefore be calculated from the 24 similarity transformation matrices of the preliminary alignment, and the alignment mapping of images with significant deviations adjusted on the basis of that trajectory.
Specifically, a similarity transformation can be decomposed into three basic transformations: a displacement transformation, a rotation transformation, and a scaling transformation. Figs. 20, 21, and 22 are schematic diagrams of the displacement, rotation, and scaling transformations contained in a similarity transformation, where 1201 is the image before transformation, 1202 the image after displacement, 1203 the image after rotation, and 1204 the image after scaling. Accordingly, the offset trajectory of the group of images 1130 can consist of a displacement trajectory, a rotation trajectory, and a scaling trajectory.
FIG. 23 is a flowchart of an embodiment of the image alignment adjustment step shown in FIG. 18. Specifically, in step S1310 the offset trajectories of the group of images 1130, i.e., the displacement, rotation, and scaling trajectories, are calculated from the 24 similarity transformation matrices from the images 1102 to 1125 to the target image 1101. Figs. 24, 25, and 26 are schematic diagrams of the displacement, rotation, and scaling trajectories after the preliminary alignment; the abscissa is the central wavelength of the 25 illumination lights, with the illumination lights 212, 213, and 214 corresponding to the images 1112, 1113, and 1114 in fig. 19 respectively; the ordinates are the displacement, rotation, and scaling amounts, 1401 denoting the displacement trajectory, 1402 the rotation trajectory, and 1403 the scaling trajectory. In step S1320, the variance of each position point on an offset trajectory, or the gradient between adjacent position points, is calculated; a position point 1410 whose variance or gradient exceeds a set threshold is determined to be an abnormal position point, and the image 1113 corresponding to the abnormal position point 1410 is determined to be an image with a significant deviation on that offset trajectory. In step S1330, a new displacement transformation matrix is interpolated from the displacement transformation matrices of the images 1112 and 1114 to the image 1101; a new rotation transformation matrix is interpolated from their rotation transformation matrices; and a new scaling transformation matrix is interpolated from their scaling transformation matrices. In step S1340, the new displacement, rotation, and scaling transformation matrices are composed into a new similarity transformation matrix, which is used as the new alignment transformation matrix from the image 1113 to the image 1101, completing the adjustment of the alignment mapping of the significantly deviated image 1113. If an abnormal position point is detected on only one or two of the offset trajectories, only the corresponding basic transformation matrices need to be adjusted before composing the new similarity transformation matrix; alternatively, only one offset trajectory may be calculated and only that one basic transformation dimension detected and adjusted.
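For a single decomposed trajectory (displacement, rotation, or scaling amount per image), the abnormal-point detection and interpolation of steps S1320-S1330 can be sketched as follows; the gradient-based spike test and linear interpolation are illustrative choices, and the interpolated parameters would then be recomposed into a new similarity transformation matrix as in step S1340.

```python
import numpy as np

def adjust_trajectory(values, threshold):
    """Sketch for one offset trajectory: a point that jumps away from BOTH
    neighbours by more than the threshold is treated as an abnormal position
    point and replaced by linear interpolation between the remaining points."""
    v = np.asarray(values, dtype=float)
    d_prev = np.abs(np.diff(v, prepend=v[0]))   # jump from the previous point
    d_next = np.abs(np.diff(v, append=v[-1]))   # jump to the next point
    bad = (d_prev > threshold) & (d_next > threshold)
    good_idx = np.flatnonzero(~bad)
    v[bad] = np.interp(np.flatnonzero(bad), good_idx, v[good_idx])
    return v, bad  # adjusted trajectory and the abnormal-point mask
```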
In addition, affine or homography transformations can be used for the preliminary alignment of the series of images according to the specific region within the offset range, and besides the displacement, rotation, and scaling trajectories, other offset trajectories, such as a shearing or perspective trajectory, can be calculated to detect and adjust the corresponding basic transformation dimensions.
The aligned group of images is arranged, by means of a software program, in the order of the central wavelengths of the corresponding illumination lights to generate a spectral data cube for the specific region. In addition, the user can select any pixel point through the input device, or the gray-level centroid pixel of the specific region can be selected automatically, and a spectrum curve of that pixel is generated with the central wavelength as the abscissa and the gray value as the ordinate. The spectrum curves of all pixel points in the specific region can also be generated directly; or the specific region and its corresponding regions in the other images can each be gray-level averaged first, and the mean spectrum curve of the specific region then generated. When a plurality of specific regions are designated, as in fig. 10, a spectral data cube or spectrum curve can be generated for each specific region separately.
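Generating a spectrum curve from the aligned images reduces to reading gray values across the stack in wavelength order. A minimal sketch, assuming single-channel aligned images and a list of central wavelengths, is:

```python
import numpy as np

def spectral_curve(aligned_images, wavelengths, x, y):
    """(wavelength, gray value) pairs of pixel (x, y) across the aligned
    series, ordered by the central wavelength of the illumination light."""
    order = np.argsort(wavelengths)
    return [(wavelengths[i], float(aligned_images[i][y, x])) for i in order]

def mean_spectral_curve(aligned_images, wavelengths, region_mask):
    """Mean-value spectrum curve of a specific region given as a boolean mask."""
    order = np.argsort(wavelengths)
    return [(wavelengths[i], float(aligned_images[i][region_mask].mean())) for i in order]
```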
All or part of the aligned group of images is fused by means of a software program to generate an image. In one embodiment, the gray-level variances of the specific region and of the corresponding regions in the other images are calculated, all the images are sorted by variance from large to small, and the top 50% are automatically selected for fusion, i.e., the images with relatively large gray-level differences are fused. In another embodiment, feature points are re-screened from the specific region and the corresponding regions in the other images using the same threshold, the images are sorted from large to small by the number of feature points they contain, and the top 30% are automatically selected for fusion, i.e., the images rich in feature information are fused. In addition, the user can select any number of images through the input device, assign pseudo-colors to them, and fuse them into a color image; or all the images can each be treated as one color component, white-balanced, and fused to generate a white-light image.
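The first embodiment, selection by gray-level variance followed by fusion, could be sketched as follows; simple averaging is used here only as a placeholder fusion rule, since the embodiment does not fix one.

```python
import numpy as np

def select_and_fuse(aligned_images, region_mask, keep_ratio=0.5):
    """Rank images by the gray-level variance inside the specific region,
    keep the top keep_ratio fraction, and fuse the kept images by averaging."""
    variances = [float(np.var(img[region_mask])) for img in aligned_images]
    order = np.argsort(variances)[::-1]
    keep = order[:max(1, int(len(order) * keep_ratio))]
    stack = np.stack([aligned_images[i].astype(np.float32) for i in keep])
    return stack.mean(axis=0).astype(aligned_images[0].dtype)
```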
In addition, an embodiment of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor. The processor calls the program instructions in the memory to execute the following method: identifying a specific region in a target image, the target image being an image arbitrarily selected from a series of images of an object under examination taken under illumination of an illumination unit; calculating the offset range of the specific region in the other images of the series; and aligning the series of images according to the specific region within the offset range.
The electronic device provided by the embodiment of the present invention can execute the specific steps of the endoscope image processing method and achieve the same technical effect; a detailed description is not repeated here.
In the above embodiments, the memory is an independent module that includes the above-mentioned storage module; the memory may be implemented with a non-volatile memory such as an Electrically Erasable Programmable Read-Only Memory (EEPROM) or a Flash memory, or with a hard disk, an optical disc, or a magnetic tape. In another embodiment, the memory is used only to store the program instructions of the endoscopic image processing method, the above-mentioned storage module being a separate storage unit independent of the memory; when the endoscopic image processing method is executed and data need to be called, they are called from the storage module.
FIG. 27 is a schematic configuration diagram of an endoscope system according to an embodiment of the present invention. The endoscope system of fig. 27 includes an illumination unit 1501, an endoscope body 1502, and an electronic apparatus 1506. The illumination unit 1501 generates a plurality of types of illumination light, which are guided by the endoscope body 1502 and irradiated in turn onto the object 1520 to be inspected. The endoscope body 1502 includes a camera module 1504, which captures a series of images 1530 of the object 1520 corresponding to the plurality of types of illumination light; the camera module 1504 has an optical imaging lens 1505 located at the tip of the endoscope body 1502. A processor in the electronic apparatus executes the endoscopic image processing method: designating a specific region in any image of the series of images 1530; estimating the offset range that the specific region can produce in the other images of the series 1530; aligning the series of images 1530 according to the specific region within the offset range; generating a spectral data cube or spectrum curve for the specific region from the aligned series of images; and fusing all or part of the aligned series of images to generate an image. A memory stores the images, the specific region, the offset ranges, the spectral data, and the like. The electronic apparatus may be provided with a display device, or an external display device may be used, to display the images, the specific region, the spectral data, and the like; the display device can be a Sony medical display LMD-2451MC or another display matched to the image resolution. To facilitate manual operation by the user, the electronic apparatus can also be provided with an input device, and other optional input accessories such as a touch panel, keyboard, voice input device, camera, or scanner can be chosen. The object 1520 in fig. 27 is only an example; the actual object may be any other organ or site to which the endoscope is applied, such as the stomach or intestine.
The electronic apparatus may be implemented based on an FPGA (Field Programmable Gate Array). A System on Chip (SoC), an Application-Specific Integrated Circuit (ASIC), or an embedded processor may also be used, or a computer may be used directly, and one or more of the above schemes may also be combined.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.