CN111161852A - Endoscope image processing method, electronic equipment and endoscope system - Google Patents

Endoscope image processing method, electronic equipment and endoscope system

Info

Publication number
CN111161852A
Authority
CN
China
Prior art keywords
images
image
series
alignment
offset range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911399032.2A
Other languages
Chinese (zh)
Other versions
CN111161852B (en)
Inventor
李宗州
谢天宇
王希光
付野
王晨曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shuangyiqi Electronics Co ltd
Peking University
Original Assignee
Beijing Shuangyiqi Electronics Co ltd
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shuangyiqi Electronics Co ltd, Peking University
Priority to CN201911399032.2A
Publication of CN111161852A
Application granted
Publication of CN111161852B
Status: Active
Anticipated expiration


Abstract

Translated from Chinese


The invention relates to the field of medical instruments, in particular to an endoscope image processing method, an electronic device and an endoscope system. The endoscopic image processing method includes: identifying a specific region in a target image, where the target image is an image arbitrarily selected from a series of images of an object under examination captured under illumination by an illumination unit; calculating the offset range that the specific region can produce in the other images of the series; and aligning the series of images according to the specific region within that offset range. The invention greatly reduces the computation required to align a series of spectral images captured by the endoscope and improves the alignment of a specific region; after alignment, an accurate spectral data cube or spectral curve can be generated, or images with distinct features can be generated by fusion, improving the accuracy of spectral pathology analysis and diagnosis.


Description

Endoscope image processing method, electronic equipment and endoscope system
Technical Field
The present invention relates to the field of medical devices, and in particular, to an endoscope image processing method, an electronic device, and an endoscope system.
Background
An endoscope system is an important medical instrument for diagnosing and treating early diseases of the human body cavities, and comprises an endoscope body and an illumination unit. In use, a soft or rigid tubular endoscope body is inserted into the human body and images are taken under illumination by the illumination unit, so that the tissue morphology and pathological changes of the organs can be observed in the images for diagnosis. Since human tissues (including pathological tissues) are sensitive to light of certain specific wavelengths and insensitive to light of others, in order to bring out characteristic information submerged under white-light illumination, light of a plurality of different wavelengths is irradiated in turn onto the object under examination and a series of images is captured, yielding a spectral map or a spectral data cube of the tissue (including pathological tissue); by studying the characteristics of the spectral curves and performing spectral pathology analysis, the dependence of diagnosis on sampling and pathological examination is reduced.
A current method for analyzing and processing such a series of images is to arbitrarily select several of them, perform image recognition, alignment and pseudo-color assignment, and finally form a color image. However, when a series of images of the object under examination is captured continuously, respiration, heartbeat, gastrointestinal peristalsis and the like can move or deform the object during the interval between two illumination-imaging cycles, and the user's manipulation of the endoscope at its far end can make it shake or rotate, so that there is an obvious offset between the images of the continuously captured series; in addition, the endoscope has a large angle of view with barrel distortion, which further increases the displacement between images. At present the selected images are aligned directly over the whole image area, which is computationally expensive and makes the alignment of the region of interest hard to guarantee; an accurate spectral curve at a given position therefore cannot be obtained, so the accuracy of subsequent spectral pathology analysis is greatly reduced, or the analysis becomes entirely erroneous.
Disclosure of Invention
In view of the above drawbacks of the background art, an object of the present invention is to provide an endoscope image processing method, an electronic device and an endoscope system that solve the problems of existing endoscope systems, namely poor image processing results and the inability to provide accurate data and good images for subsequent study.
In order to solve the above-described technical problem, the present invention provides an endoscopic image processing method including:
identifying a specific region in a target image, wherein the target image is an image arbitrarily selected from a series of images of an object under examination taken under illumination of an illumination unit; calculating the offset range which can be generated by the specific area in other images in the series of images; aligning the series of images to the specific region within the offset range.
In order to solve the above technical problem, the present invention further provides an electronic device, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the endoscopic image processing method as described above when executing the program.
In order to solve the above technical problem, the present invention further provides an endoscope system, which includes an endoscope body, and further includes the electronic device as described above, wherein the processor is in communication connection with the camera module in the endoscope body.
The technical scheme of the invention has the following beneficial effects:
According to the endoscope image processing method of the invention, a specific region is identified in an image arbitrarily selected from a series of captured images, the offset range that the specific region can produce in the other images of the series is calculated, and the series of images is aligned according to the specific region within that offset range. Because alignment is restricted to the calculated offset range, the amount of computation is greatly reduced and the alignment of the specific region is improved. The aligned images can be used to generate an accurate spectral data cube or spectral curve, providing accurate data for diagnosis and research; after alignment, images rich in characteristic information can also be fused, and the fused image has distinct features that aid rapid and accurate diagnosis.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of one embodiment of a method of endoscopic image processing according to the present invention;
FIG. 2 is a schematic spectral diagram of a combination of illumination light from the illumination unit of the embodiment shown in FIG. 1;
FIG. 3 is a schematic spectral diagram of another combination of illumination light from the illumination unit of the embodiment shown in FIG. 1;
FIG. 4 is a view of the series of images of FIG. 1 corresponding to the various illumination lights shown in FIG. 2;
FIG. 5 is a schematic view of an embodiment of identifying a particular region in a target image;
FIG. 6 is a schematic diagram of a target image in which the manually designated region has only one pixel;
FIG. 7 is a schematic view of another embodiment of identifying a particular region in a target image;
FIG. 8 is a flow diagram of yet another embodiment of identifying a particular region in a target image;
FIG. 9 is a diagram illustrating processing results corresponding to the flow illustrated in FIG. 8;
FIG. 10 is a schematic diagram of identifying a plurality of specific regions in a target image;
FIG. 11 is a schematic diagram of one embodiment of calculating a range of offsets for the particular region shown in FIG. 5;
FIG. 12 is a schematic diagram of another embodiment of calculating the offset range for the particular region shown in FIG. 5;
FIG. 13 illustrates the pixel points to be calibrated selected within a real-time image;
FIG. 14 is a schematic diagram of the calibration process of the offset range circle of a pixel point to be calibrated for the time Δt;
FIG. 15 is a flow chart of one embodiment of the alignment step shown in FIG. 1;
FIG. 16 is a schematic illustration of alignment according to the method shown in FIG. 15;
FIG. 17 is a flow chart of another embodiment of the alignment step shown in FIG. 1;
FIG. 18 is a flow chart of another embodiment of an endoscopic image processing method of the present invention;
FIG. 19 is a schematic illustration of a preliminary alignment of a series of images in the manner shown in FIG. 18;
FIG. 20 is a schematic diagram of the displacement transform contained in a similarity transform;
FIG. 21 is a schematic diagram of the rotation transform contained in a similarity transform;
FIG. 22 is a schematic diagram of the scaling transform contained in a similarity transform;
FIG. 23 is a flowchart of one embodiment of the image alignment adjustment step of FIG. 18;
FIG. 24 is a schematic illustration of displacement trajectories after a preliminary alignment of the set of images shown in FIG. 19;
FIG. 25 is a schematic illustration of the rotational trajectories of the set of images shown in FIG. 19 after initial alignment;
FIG. 26 is a schematic illustration of the zoom trajectory after initial alignment of the set of images shown in FIG. 19;
FIG. 27 is a schematic configuration diagram of an endoscope system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the embodiments of the present invention, it should be noted that the terms "first" and "second" are used for the sake of clarity in describing the numbering of the components of the product and do not represent any substantial difference, unless explicitly stated or limited otherwise. The directions of "up", "down", "left" and "right" are all based on the directions shown in the attached drawings. Specific meanings of the above terms in the embodiments of the present invention can be understood by those of ordinary skill in the art according to specific situations.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Fig. 1 is a flowchart of an embodiment of an endoscopic image processing method according to the present invention, the endoscopic image processing method shown in fig. 1 including the steps of:
in step S110, a specific region is identified in a target image, which is an image arbitrarily selected from a series of images of an object under inspection taken under illumination by an illumination unit.
Step S120, calculating the offset range that the specific area can generate in the other images in the series of images.
Step S130, aligning the series of images according to the specific area within the offset range.
The illumination unit can adopt a xenon lamp as the light source, or a broad-spectrum LED, a tungsten lamp or another broad-spectrum source. The light source is filtered into the various illumination lights in sequence by a filter set, a filter wheel and a filter control mechanism. Specifically, narrow-band filters with different central wavelengths, broadband filters with different wavelength ranges, or no filter at all are selected as required to form the filter set; the filters are mounted on the filter wheel in a certain order, for example by central wavelength from small to large; the transmittances of the filters may differ so as to adjust the intensity of each illumination light; and the filter control mechanism rotates the filter wheel to switch filters, so that the required illumination lights are produced in turn. FIG. 2 is a spectral diagram of one combination of illumination lights produced by the illumination unit of the embodiment shown in FIG. 1; it comprises 25 narrow-band lights 201-225, each with a single central wavelength, continuously covering the visible range of 400-760 nm. The filters are designed with different transmittances, adjusting the illumination intensities so that the brightness of the series of images captured by the camera unit in the endoscope body is equalized as far as possible.
In addition, the light source of the illumination unit can be implemented with a plurality of LEDs corresponding to the plurality of illumination lights; in this embodiment the filters can be omitted, and each LED in the illumination light path is lit in sequence under electronic control, or switched into the illumination light path in sequence mechanically. When LEDs are used as the light source they can also be mounted at the tip of the endoscope body, but the space there is narrow and the number of LEDs that can be mounted is limited.
It should be noted that the above-mentioned multiple types of illumination light are not limited to the 25 narrow-band lights shown in fig. 2; each type of illumination light may have more than one central wavelength and bandwidth, i.e., each type may be a narrow-band light having one or more central wavelengths, and/or any combination of wide-band lights having one or more wavelength ranges, especially illumination lights sensitive to a specific condition or able to highlight some aspect of the object under examination. In the capture of a series of images, each illumination light may appear multiple times at different intensities. Fig. 3 is a schematic spectral diagram of another combination of illumination lights, including a narrow-band light 201 with one center wavelength, a narrow-band light 226 with two center wavelengths, a broadband light 227 with one wavelength range, and a broadband light 228 with two wavelength ranges.
Fig. 4 is a diagram of the series of images in fig. 1 corresponding to the plurality of illumination lights shown in fig. 2. The illumination light 201 is guided by the endoscope body and irradiates the inspected object, and the camera module takes a corresponding image 301, completing one illumination imaging; the lighting unit switches to the next illumination light 202, and the camera module takes another image 302 corresponding to it, completing another illumination imaging; the illumination unit continues to switch to the next illumination light 203, the camera module takes a corresponding image 303, and the illumination imaging is completed again. In this way, the illumination unit switches the illumination lights in sequence and the camera module captures one corresponding image each time, until the illumination unit switches to the illumination light 225 and the camera module captures the corresponding image 325, completing the illumination and imaging of the series; this series forms a group of images 330. Let Δt be the time interval between two adjacent illumination imagings. It should be noted that the endoscope image processing method provided by the present invention is also applicable to a plurality of images captured under illumination of a single wavelength.
In the time order of image capture, if the target image is the first image taken, the offset range of the specific region in the second image is calculated first and the second image is aligned with the target image within that range, giving the accurate corresponding region of the specific region in the second image; the offset range of the third image relative to that corresponding region is then calculated and the third image is aligned to the target image within it, and so on until all the images are aligned to the target image. Alternatively, the offset range of every image other than the target image relative to the target image may be calculated first, after which the images are aligned to the target image one by one within their offset ranges. Of course, if the target image is the i-th image taken, the offset ranges of the images from the (i+1)-th onward are calculated and aligned by the same method, while the offset ranges of the first i-1 images are calculated by the same procedure run in reverse order.
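As a minimal sketch of this propagation order (shown for the case where the target is the first image), the helpers compute_offset_range() and align_one() below are hypothetical stand-ins for the offset-range calculation and alignment steps detailed later:

```python
# Hypothetical driver loop; compute_offset_range() and align_one() stand in
# for steps S120 and S130 described in the following sections.
def align_series(images, region, target_idx=0):
    target = images[target_idx]
    aligned = {target_idx: target}
    cur = region                      # specific region identified in the target
    for i in range(target_idx + 1, len(images)):      # frames after the target
        offset_range = compute_offset_range(cur)      # step S120
        aligned[i], cur = align_one(images[i], target, cur, offset_range)  # S130
    cur = region                      # restart from the target's region and
    for i in range(target_idx - 1, -1, -1):           # propagate backwards
        offset_range = compute_offset_range(cur)
        aligned[i], cur = align_one(images[i], target, cur, offset_range)
    return [aligned[i] for i in range(len(images))]
```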
For convenience of description, the following description is specifically made with reference to a series of images in fig. 4 and taking the first image as the target image, and details of the case of taking other images as the target object are not repeated.
FIG. 5 is a schematic diagram of an embodiment of identifying a particular region in a target image: a certain area is manually designated as the specific area. Specifically, an area 403 is defined in the first image 301 of a group of images 330 by means of a mouse 401, and the area 403 is set as the specific area; the area 403 contains one polyp 402 of interest. The image in which the specific area is designated serves as the target image for subsequent alignment. A specific region may equally be defined in any other image of the group of images 330. Besides the mouse 401, the specific area may be designated manually with any other input device, such as a touch panel, keyboard, voice input device, camera or scanner; the user may designate it by clicking, sketching, entering coordinates, entering a region, selecting a preset region, moving a selection box of a specific shape, and so on.
When the user designates the region through the input device, the region may be particularly small; for example, as shown in fig. 6, the region 409 manually designated in the target image has only one pixel. A single pixel is very susceptible to noise, and subsequent alignment cannot be performed on one pixel alone; in this case the area 409 is enlarged about its center to the 5×5-pixel area 410, which becomes the final specific area. Alternatively, it may be enlarged to 9×9, 13×13 or another area slightly larger than the original.
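A minimal sketch of this fallback, with the window half-width as an assumed parameter (half=2 gives the 5×5 case):

```python
# Expand a single selected pixel (x, y) to a small square window clamped to
# the image bounds; half=2 yields the 5x5 region of Fig. 6, half=4 gives 9x9.
def enlarge_point_region(x, y, img_shape, half=2):
    h, w = img_shape[:2]
    x0, x1 = max(0, x - half), min(w, x + half + 1)
    y0, y1 = max(0, y - half), min(h, y + half + 1)
    return x0, y0, x1, y1   # bounding box of the final specific region
```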
Alternatively, other ways of identifying a particular region in the target image may be used. FIG. 7 is a schematic diagram of another embodiment of identifying a particular region in a target image without the use of an input device. Specifically, a square area 404 with a side length of 100 pixels is preset in the center of the real-time image and indicated by a dashed box; before a group of images 330 of the object under examination corresponding to the multiple illumination lights is captured, the direction and position of the tip of the endoscope body are adjusted so that the square area is aligned with the region of interest, namely the polyp 402, and the square area 404 is filled with the polyp 402; the images are then taken, and the square area 404 in the first image 301 is taken as the specific area. The polyp 402 may also underfill the square area 404, as long as it falls within it. In addition, an area of any shape and size may be preset at any position of the real-time image, and that area of any image may be used as the specific area.
Fig. 8 is a flowchart of another embodiment of identifying a specific area in a target image, and fig. 9 illustrates the corresponding processing result; again no input device is needed. In step S410, edge detection is performed on the first image 301 of the captured group of images 330 with a Canny, Sobel, Laplacian, Roberts or Prewitt operator. In step S420, using a closed-contour extraction method based on the assumption that the edge direction is preserved at breakpoints, broken edges are connected into a closed contour, the closed contour is extracted, and the contour of the polyp 402 is obtained. In step S430, the extracted polyp contour is expanded by a certain proportion or number of pixels to obtain region 408, and region 408 is selected as the specific region.
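The sketch below approximates steps S410-S430 with standard OpenCV calls; a plain morphological closing stands in for the breakpoint-connecting contour method named in the text, and the 10-pixel expansion margin is an illustrative value:

```python
import cv2
import numpy as np

def auto_specific_region(image_bgr, margin_px=10):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                          # step S410
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel) # bridge broken edges
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # step S420
    largest = max(contours, key=cv2.contourArea)              # assume the polyp
    mask = np.zeros_like(gray)                                # is the largest blob
    cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    grow = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (2 * margin_px + 1, 2 * margin_px + 1))
    return cv2.dilate(mask, grow)                             # step S430: region 408
```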
In addition, two or more of the designation modes (user input, preset position, automatic recognition) can be combined to designate the region of interest as the specific region more quickly and accurately. In one embodiment, a plurality of areas is first selected by automatic recognition; the user then designates one of them through the input device, and that area is completed and selected as the specific region.
The specific areas 403, 404 and 408 illustrated in figs. 5, 7 and 9 are generally larger and need not be enlarged by the method illustrated in fig. 6.
The specific area is usually an area the user pays attention to, and in the embodiments of figs. 5, 6, 7 and 9 only one area is designated; the user may, however, need to designate another area as a reference for the attention area. For example, when the user wants to acquire the spectral curve of a lesion and compare it with that of a normal region, the alignment of the region of interest should be guaranteed first and the alignment of the reference region considered second (it may be worse than that of the region of interest). Fig. 10 is a schematic diagram of identifying a plurality of specific regions in a target image: the specific region 501 is the region of interest, containing a polyp 402; the specific region 502 is a reference region in which no lesion is found. To guarantee the alignment of the attention area 501 while taking account of the reference area 502, different alignment weights are applied to them for subsequent alignment. In one embodiment, the user inputs a degree of attention for each specific area through the input device according to a predetermined rule, and alignment weights are applied in the ratio of those degrees of attention. Since the reference region is often small, the area of each specific region can instead be calculated and weights applied in the ratio of the areas; this implements automatic application of the alignment weights. When calculating the offset range, an offset range calculation is performed for each specific area.
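A minimal sketch of the automatic area-ratio weighting, with regions given as binary masks; normalizing by the largest area (so the attention region gets weight 1.0) is an assumption consistent with the 1 : 0.33 example used later:

```python
import numpy as np

# Weight each specific region in proportion to its pixel area; normalizing by
# the largest area gives the attention region weight 1.0 (e.g. 1 : 0.33 when
# the reference region is a third of the attention region's size).
def area_weights(region_masks):
    areas = np.array([np.count_nonzero(m) for m in region_masks], dtype=float)
    return areas / areas.max()
```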
In the time interval Δt between two adjacent illumination imagings, a specific region in the target image may shift under the influence of organ peristalsis, shaking or rotation of the endoscope body, and the like, and the offset range it can produce must be calculated in the image to be aligned. Of the different methods for identifying specific regions, only the region shown in fig. 5 is treated below; the other methods are handled the same way and are not repeated.
FIG. 11 is a schematic diagram of one embodiment of calculating the offset range for the particular region shown in FIG. 5. First, the four pixel points 601 with the minimum x coordinate, maximum x coordinate, minimum y coordinate and maximum y coordinate are determined in the area 403 designated in the first image 301 of the group of images 330. The offset range circles 602 of these four pixel points 601 for the time Δt are read, and the four positions 603 to which the four pixel points 601 of the specific area 403 can deviate farthest in the x and y directions are determined. A rectangular area 604 with edges parallel to the x and y axes is then determined from the four positions 603; the rectangular area 604 is the calculated offset range that the specific area 403 can produce in the second image 302. It should be noted that the number of pixel points must not be less than three: three pixel points determine a triangular region, which serves as the offset range that the specific region 403 can produce in the second image 302; with five, six or more points a polygonal area can be determined and used as the offset range of the region 403 in the second image 302.
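A sketch of the rectangular construction, where radius_of() is a hypothetical lookup of the pre-calibrated offset-range-circle radius at a pixel:

```python
import numpy as np

# boundary_pts: (N, 2) array of the specific region's boundary pixels (x, y);
# radius_of(pt): calibrated offset-range-circle radius at that pixel (time Δt).
def rect_offset_range(boundary_pts, radius_of):
    pts = np.asarray(boundary_pts, dtype=float)
    xmin = pts[pts[:, 0].argmin()]; xmax = pts[pts[:, 0].argmax()]
    ymin = pts[pts[:, 1].argmin()]; ymax = pts[pts[:, 1].argmax()]
    x0 = xmin[0] - radius_of(xmin)   # farthest possible shift toward -x
    x1 = xmax[0] + radius_of(xmax)   # toward +x
    y0 = ymin[1] - radius_of(ymin)   # toward -y
    y1 = ymax[1] + radius_of(ymax)   # toward +y
    return x0, y0, x1, y1            # axis-aligned rectangle 604
```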
The offset range circle 602 of a pixel point for the time Δt is obtained by calibration in advance, stored in the storage module, and read from it as needed. Within the offset range 604, the image to be aligned 302 is aligned to the target image 301 by the specific region 403. After this alignment, the accurate corresponding region of the specific region 403 in the second image 302 is obtained; from that corresponding region, the offset range that the specific region 403 can produce in the third image 303 is estimated, and within that range the image 303 is aligned to the target image 301 by the specific region 403. Proceeding in this way, the images 302-325 are sequentially aligned to the target image 301 within their corresponding offset ranges according to the specific region 403.
FIG. 12 is a schematic diagram of another embodiment of calculating the offset range for the particular region shown in FIG. 5. The offset range circles 602 for the time Δt of each pixel 601 on the boundary of the specific region 403 are read from the storage module and overlaid; a new region 605 is determined from the outer boundary of the overlaid circles, and this is the calculated offset range that the specific region 403 can produce in the second image 302. A region 606 is likewise determined from the inner boundary of the overlaid circles and can serve as an additional constraint during alignment, namely a limit on the inward offset of the pixels 601 on the boundary of the specific region 403. Alternatively, a number of pixel points spaced along the boundary of the specific region 403 may be used: their offset range circles 602 for the time Δt are read, and the boundary circumscribing all these circles is taken as the calculated offset range in the second image 302. Here the number of pixel points must not be less than three, since three offset range circles 602 can uniquely determine an outer circle, or an outer boundary formed by proportionally enlarging the boundary of the specific region 403. Of course, the chosen pixel points may all lie on the boundary of the specific region 403, or part of them may be adjacent to the boundary while the rest lie on it.
During image capture, respiration, heartbeat, gastrointestinal peristalsis and the like can move or deform the object under examination; the user operates the endoscope at its far end, and although the endoscope body is kept as stable as possible, its tip still shakes or rotates slightly. Together with the barrel distortion of the endoscope, these factors cause a significant offset between the images of a continuously captured series. The offset is not unbounded, however, but has an estimable range; by estimating that range and aligning the images within it, the amount of alignment computation is greatly reduced and the accuracy of alignment improved.
Figs. 13 and 14 illustrate how the offset range circle 602 of a pixel point for the time Δt is calibrated. As shown in fig. 13, 15×15 pixel points 601 to be calibrated are selected, uniformly distributed over the whole image; fig. 14 is a schematic diagram of the calibration of the offset range circle of one such point for the time Δt. At a typical observation distance, the first pixel point 601 to be calibrated is aligned with a point-like part of the object under examination with obvious characteristics, such as a small polyp of the stomach, and two images are taken at a time interval not less than Δt. The central position point 701 of the small polyp is found manually in the first image and the central position point 702 in the second image; the coordinate offset of the point 702 relative to the point 701 is determined, completing the first set of calibration data for the first pixel point 601. In this way 100 sets of calibration data are acquired for the first pixel point 601, with the typical observation distance and the feature point varied randomly before each set. The 100 sets are plotted together, keeping the coordinate offsets of all points 702 relative to their points 701 while making all points 701 coincide; the circle centered at 701 that just contains all the points 702 is drawn, giving a circle 703 with radius r. Since there is error in locating the central position points 701 and 702, the radius r is also multiplied by a margin coefficient, for example 1.2, to obtain a circle 602 with radius R; this is the offset range circle 602 of the pixel point 601 for the time Δt. Each pixel point 601 to be calibrated is calibrated in turn in this way. For an uncalibrated pixel point, the radii of the offset range circles of the neighboring calibrated points are interpolated by inverse distance weighting, according to the distances to those points, giving the radius and hence the offset range circle 602 of that pixel point.
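Two small sketches of the numerical side of this calibration: turning the 100 offset samples into a circle radius with the margin coefficient, and inverse-distance-weighted interpolation for uncalibrated pixels (the choice of k nearest neighbors is an assumption; the text only says "adjacent calibrated pixel points"):

```python
import numpy as np

# offsets: (100, 2) array of (dx, dy) of each point 702 relative to point 701.
def calibrated_radius(offsets, margin=1.2):
    r = np.hypot(offsets[:, 0], offsets[:, 1]).max()  # circle 703, radius r
    return margin * r                                  # circle 602, radius R

# calib_pts: (M, 2) coordinates of calibrated pixels; calib_r: (M,) their radii.
def idw_radius(px, py, calib_pts, calib_r, k=4, eps=1e-6):
    d = np.hypot(calib_pts[:, 0] - px, calib_pts[:, 1] - py)
    nearest = np.argsort(d)[:k]              # k nearest calibrated pixels
    w = 1.0 / (d[nearest] + eps)             # inverse-distance weights
    return float(np.sum(w * calib_r[nearest]) / np.sum(w))
```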
The number of pixel points to be calibrated is not limited to 15×15 and can be increased or decreased according to the specific image resolution; the points to be calibrated can also be distributed over the whole image in other arrangements. The 100 sets of calibration data need not be acquired in one session: they can be accumulated during clinical examinations or extracted from clinical video and image data, and the number of sets is not limited to 100. The margin coefficient can be adjusted reasonably according to the data acquisition conditions and the number of data sets. Calibration can be performed separately for different types of endoscopes and different organs, such as gastroscopes, stomatoscopes and colonoscopes, to improve calibration accuracy.
It should be noted that, in a manner similar to the calibration of the Δt offset range circles, offset range circles for the time NΔt (N ≥ 2) can be calibrated; an NΔt circle is then used to determine the offset range of the specific region in an image N imaging intervals away, skipping the N-1 images in between. For example, offset range circles for Δt, 2Δt, 3Δt and so on may be calibrated at the same time. The captured series is divided into images with relatively rich features and images with relatively fuzzy features; the feature-rich images are processed first, each carrying its acquisition time, and for each image the offset range circle of the time interval separating it from the target image is selected to determine the offset range the specific area can produce in it, after which alignment is performed. This is repeated until all feature-rich images are aligned, and the feature-fuzzy images are then aligned in turn by the same method.
It should also be noted that in the above calibration the offset range of a pixel point need not be bounded by a circle to give an offset range circle; any other shape, such as a square, can be used, giving for example an offset range square.
In addition, for the method of identifying the specific region shown in fig. 7, since the specific region is preset, the offset ranges of the specific region in the other images of the series can be calculated in advance from the offset ranges calibrated for the times NΔt (N ≥ 1); after the series is captured, step S120 of fig. 1 can therefore be omitted and step S130 performed directly.
FIG. 15 is a flow chart of one embodiment of the alignment step shown in FIG. 1, and fig. 16 is a schematic diagram of aligning the image to the specific region shown in fig. 5, within the offset range shown in fig. 11, according to the method of fig. 15. The alignment step specifically comprises: step S810, searching for feature points with the Speeded Up Robust Features (SURF) algorithm in the specific region 403 of the target image 301 and in the offset range 604 of the image to be aligned 302, respectively, the feature point 810 in fig. 16 being one of many; step S820, matching feature point pairs with the Fast Library for Approximate Nearest Neighbors (FLANN), obtaining pairs such as 804-809; step S830, screening the feature point pairs with the Random Sample Consensus (RANSAC) algorithm, for example retaining the pairs 805-808; step S840, using the coordinates of the screened pairs, solving the simultaneous equations for the homography transformation matrix from the image 302 to the image 301 and taking it as the alignment transformation matrix, i.e., the alignment mapping, so that the image 302 is aligned to the target image 301 according to the specific region 403 within the offset range 604. After this alignment, the accurate corresponding region 821 of the specific region 403 in the second image 302 is obtained; from the region 821, the offset range that the specific region 403 can produce in the third image 303 is estimated, and the image 303 is aligned to the target image 301 by the specific region 403 within that range. Proceeding in this way, the images 302-325 are sequentially aligned to the target image 301 within their corresponding offset ranges according to the specific region 403.
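The sketch below maps steps S810-S840 onto OpenCV. SURF requires the opencv-contrib build (and is patented), so an ORB fallback is included; the 0.7 ratio test and the 5.0-pixel RANSAC threshold are conventional values, not taken from the patent. roi_target and roi_search are assumed to be the crops of the specific region 403 and the offset range 604:

```python
import cv2
import numpy as np

def align_within_range(roi_target, roi_search):
    try:
        det = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs contrib
    except AttributeError:
        det = cv2.ORB_create(1000)                # fallback detector
    kp1, des1 = det.detectAndCompute(roi_target, None)   # S810: feature points
    kp2, des2 = det.detectAndCompute(roi_search, None)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(np.float32(des1), np.float32(des2), k=2)  # S820
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < 0.7 * n.distance]     # Lowe ratio test
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    # S830 + S840: RANSAC screens the pairs while the homography is solved
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H    # maps the image to be aligned onto the target image
```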
The endoscope image processing method provided by the invention searches for feature points only within the specific area and the corresponding offset range and performs the subsequent processing only there; compared with processing the whole image, the amount of computation is greatly reduced, and the restriction to the offset range guarantees the alignment of the specific area. Alternatively, the alignment transformation unit 803 may use the coordinates of the screened feature point pairs to solve a similarity or affine transformation matrix from the simultaneous equations, and use that matrix as the alignment transformation matrix to align the image to be aligned with the target image.
In addition, remote-sensing image registration based on a Marr-wavelet-improved SIFT algorithm, or an infrared and visible image registration algorithm based on saliency and ORB, can be used to align the image to be aligned to the target image according to the specific region within the offset range. The former extracts features of the target image and the image to be aligned with the Marr wavelet under scale-space theory, performs a preliminary alignment of their feature points using the Euclidean distance, and refines the preliminary result by random sample consensus. The latter obtains a saliency map of the image with an optimized HC-GHS saliency detection algorithm, detects feature points on the saliency map with the ORB algorithm, screens out feature points with strong robustness using a Taylor series, matches them in groups according to their orientations, and finally matches the feature points using the Hamming distance. Alternatively, the specific area may be used as an alignment template swept across the corresponding offset range, with mutual information as the similarity measure; or a gray-level-based elastic alignment method, such as B-spline free-form deformation or the demons algorithm, may be used.
Fig. 17 is a flowchart of another embodiment of the alignment step shown in fig. 1. Compared with the embodiment of fig. 15, before the feature points are searched, step S800 is performed: the distortion coefficients stored in the storage module for the specific endoscope model are read, and distortion correction is applied to the specific region and the corresponding offset range to remove the barrel distortion introduced by the camera module of the endoscope; the distortion coefficients can be obtained by calibrating the endoscope in advance with Zhang's camera calibration method and a chessboard calibration board. After the image alignment is completed, step S850 is performed: using the distortion coefficients again, the alignment mapping is composed with the distortion correction, achieving the alignment of the image to be aligned to the target image.
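A sketch of this correct-align-restore flow, assuming the camera matrix K and distortion coefficients dist come from a prior Zhang chessboard calibration (cv2.calibrateCamera) of the particular endoscope model:

```python
import cv2
import numpy as np

# pts_* are matched feature coordinates, shape (N, 1, 2), float32.
def homography_with_distortion(pts_target, pts_search, K, dist):
    und_t = cv2.undistortPoints(pts_target, K, dist, P=K)  # S800: remove
    und_s = cv2.undistortPoints(pts_search, K, dist, P=K)  # barrel distortion
    H, _ = cv2.findHomography(und_s, und_t, cv2.RANSAC, 5.0)
    # S850: H is valid in undistorted coordinates; warp undistorted frames
    # with it, or fold the distortion model back in when mapping raw pixels.
    return H
```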
When a plurality of specific regions is identified, different alignment weights are applied to them. Taking the image alignment flow of fig. 15 as an example, the weights are used as follows: when screening feature point pairs, the ratio of the number of pairs retained from each specific region is kept equal to the ratio of the alignment weights. For example, in fig. 10 the alignment weights of the specific regions 501 and 502 are 1 and 0.33 respectively; if 4 pairs of feature points are to be retained, 3 pairs are taken from region 501 and only 1 pair from region 502. When the simultaneous equations are solved for the alignment homography, region 501 therefore dominates while region 502 plays an auxiliary role, guaranteeing the alignment of region 501 while taking account of region 502. In addition, any stored group of images can be recalled from the storage module, a specific area redesignated, the offset range re-estimated, and the alignment redone.
Fig. 18 is a flowchart of another embodiment of the endoscopic image processing method according to the present invention. Compared with the method shown in fig. 1, a better alignment effect can be achieved by performing step S1010, a preliminary image alignment, and then step S1020, an image alignment adjustment after the preliminary alignment. Fig. 19 is a schematic diagram of a preliminary alignment of a series of images in the manner shown in fig. 18. A group of images 1130 is preliminarily aligned within the corresponding offset ranges according to a specific area 1131 using similarity transformations: the 24 similarity transformation matrices from the images to be aligned 1102-1125 (only 1112, 1113 and 1114 are shown in fig. 19) to the target image 1101 are solved, and the images 1102-1125 are preliminarily aligned to the target image 1101; the images 1101-1125 correspond respectively to the 25 illumination lights 201-225 shown in fig. 2. After the preliminary alignment, the corresponding regions 1132-1155 (only 1142, 1143 and 1144 are shown in fig. 19) of the specific region 1131 in the images 1102-1125 are obtained.
Because human tissue (including diseased tissue) is insensitive to light of certain wavelengths, images taken under illumination the tissue is insensitive to may be dark or weak in features; image 1113 in fig. 19, for example, is dark with inconspicuous characteristic information. The information in such images is nevertheless necessary for calculating the spectral curves, and the accuracy of their alignment affects the accuracy of the subsequent spectral pathology analysis. Even with the offset range as a constraint, a large deviation may still occur during the preliminary alignment because the feature information is weak; for example, the specific region 1131 deviates obviously in its corresponding region 1143 in image 1113. The image offsets caused by the various factors during capture are, however, continuous and should not change abruptly. An offset trajectory of the group of images 1130 can therefore be calculated from the 24 similarity transformation matrices of the preliminary alignment, and the alignment mapping of images with significant deviations can be adjusted based on that trajectory.
Specifically, a similarity transformation can be decomposed into three basic transformations: displacement, rotation and scaling. Figs. 20, 21 and 22 are schematic diagrams of the displacement, rotation and scaling transformations contained in a similarity transformation, where 1201 is the image before transformation, 1202 the image after displacement, 1203 after rotation, and 1204 after scaling. Accordingly, the offset trajectory of the group of images 1130 consists of a displacement trajectory, a rotation trajectory and a scaling trajectory.
FIG. 23 is a flowchart of an embodiment of the image alignment adjustment step shown in FIG. 18. In step S1310, the offset trajectories of the group of images 1130 (the displacement, rotation and scaling trajectories) are calculated from the 24 similarity transformation matrices from the images 1102-1125 to the target image 1101. Figs. 24, 25 and 26 are schematic diagrams of the displacement, rotation and zoom trajectories after the preliminary alignment; the abscissa is the center wavelength of the 25 illumination lights, where the illumination lights 212, 213 and 214 correspond to the images 1112, 1113 and 1114 of fig. 19; the ordinate is the displacement, rotation and zoom amount respectively, with 1401 the displacement trajectory, 1402 the rotation trajectory and 1403 the zoom trajectory. In step S1320, the variance of the position points composing each offset trajectory, or the gradient between adjacent position points, is calculated; a position point 1410 whose variance or gradient exceeds a set threshold is determined to be abnormal, and the image 1113 corresponding to the abnormal position point 1410 is determined to deviate significantly on that offset trajectory. In step S1330, a new displacement transformation matrix is interpolated from the displacement transformation matrices of the images 1112 and 1114 to the image 1101; a new rotation transformation matrix is interpolated from their rotation transformation matrices; and a new scaling transformation matrix from their scaling transformation matrices. In step S1340, the new displacement, rotation and scaling matrices are composed into a new similarity transformation matrix serving as the new alignment transformation from the image 1113 to the image 1101, completing the adjustment of the alignment mapping of the deviating image 1113. If an abnormal point is detected on only one or two offset trajectories, only the corresponding basic transformation matrices need be adjusted before composing the new similarity matrix; alternatively, only one offset trajectory may be calculated, with only that basic transformation dimension detected and adjusted.
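A compact sketch of steps S1310-S1340: each 2×3 similarity matrix is decomposed into (tx, ty, angle, scale), outlier position points are found with a gradient threshold (the threshold values themselves are assumed inputs), replaced by linear interpolation of their neighbors, and the matrix recomposed:

```python
import numpy as np

def decompose(M):                       # 2x3 similarity matrix [sR | t]
    scale = np.hypot(M[0, 0], M[1, 0])
    angle = np.arctan2(M[1, 0], M[0, 0])
    return np.array([M[0, 2], M[1, 2], angle, scale])

def compose(tx, ty, angle, scale):
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    return np.array([[c, -s, tx], [s, c, ty]])

def adjust_trajectories(mats, grad_thresh):
    traj = np.array([decompose(M) for M in mats])       # S1310: 4 trajectories
    for dim in range(4):
        g = np.abs(np.gradient(traj[:, dim]))           # S1320: find outliers
        for i in np.where(g > grad_thresh[dim])[0]:
            if 0 < i < len(mats) - 1:                   # S1330: interpolate from
                traj[i, dim] = 0.5 * (traj[i-1, dim] + traj[i+1, dim])  # neighbors
    return [compose(*p) for p in traj]                  # S1340: recompose
```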
In addition, affine or homography transformations can be used for the preliminary alignment of the series according to the specific area within the offset range; besides the displacement, rotation and scaling trajectories, other offset trajectories, such as a shear or perspective trajectory, can then be calculated to detect and adjust the corresponding basic transformation dimensions.
The aligned group of images is arranged, by means of a software program, in the order of the center wavelengths of the corresponding illumination lights to generate a spectral data cube of the specific region. A user can select any pixel through the input device, or the gray-scale centroid pixel of the specific area can be selected automatically, and a spectral curve of that pixel is generated with the center wavelength as abscissa and the gray value as ordinate. The spectral curves of all pixels of the specific area can be generated directly; or the gray levels of the specific area and of its corresponding regions in the other images can first be averaged, and a mean spectral curve of the specific area generated. For the case where a plurality of specific regions is designated, as in fig. 10, a spectral data cube or spectral curve can be generated separately for each region.
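A minimal sketch of the cube and curve generation from the aligned, wavelength-ordered grayscale stack:

```python
import numpy as np

def spectral_cube(aligned_gray_images):
    return np.stack(aligned_gray_images, axis=-1)   # H x W x 25 data cube

def spectral_curve(cube, wavelengths, x, y):
    return wavelengths, cube[y, x, :]               # one pixel's spectrum

def mean_spectral_curve(cube, wavelengths, region_mask):
    vals = cube[region_mask > 0]                    # pixels of the specific area
    return wavelengths, vals.mean(axis=0)           # mean curve of the region
```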
All or part of the aligned group of images is fused by means of a software program to generate an image. In one embodiment, the gray-level variances of the specific area and of its corresponding regions in the other images are calculated, all the images are sorted by variance from large to small, and the top 50% are automatically selected for fusion; that is, the images with relatively large gray-level differences are fused. In another embodiment, feature points are re-screened from the specific area and its corresponding regions against the same threshold standard, the images are sorted by the number of feature points they contain from large to small, and the top 30% are automatically selected for fusion; that is, the images rich in feature information are fused. In addition, the user can select any number of images through the input device, assign pseudo-colors to them, and fuse them into a color image. Alternatively, each image is treated as one color component, white balance is applied, and a white-light image is generated by fusion.
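A sketch of the first selection rule (variance ranking); the keep_ratio parameter generalizes the "top 50%" of the text:

```python
import numpy as np

# All images are already aligned to the target frame, so one region mask
# applies to every image in the stack.
def select_for_fusion(aligned_gray_images, region_mask, keep_ratio=0.5):
    variances = [float(img[region_mask > 0].var()) for img in aligned_gray_images]
    order = np.argsort(variances)[::-1]            # largest variance first
    n_keep = max(1, int(round(len(order) * keep_ratio)))
    return [aligned_gray_images[i] for i in order[:n_keep]]
```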
In addition, an embodiment of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor. The processor is used for calling the program instruction in the memory to execute the following method: identifying a specific region in a target image, wherein the target image is an image arbitrarily selected from a series of images of an object under examination taken under illumination of an illumination unit; calculating the offset range of the specific area in other images in the series of images, and aligning the series of images according to the specific area in the offset range.
The electronic device provided by the embodiment of the invention can execute the specific steps in the endoscope image processing method and can achieve the same technical effect, and the specific description is not provided herein.
In the above embodiments, the memory is a single module that includes the storage module mentioned above; it can be implemented with non-volatile memory, such as electrically erasable programmable read-only memory (EEPROM) or Flash memory, or with a hard disk, optical disc or magnetic tape. In another embodiment, the memory stores only the program instructions of the endoscopic image processing method, the storage module mentioned above being a separate storage unit independent of the memory; when the method is executed and data are needed, they are called from the storage module.
FIG. 27 is a schematic configuration diagram of an endoscope system according to an embodiment of the present invention. The endoscope system of fig. 27 includes an illumination unit 1501, an endoscope body 1502 and an electronic apparatus 1506. The illumination unit 1501 generates a plurality of illumination lights, which are guided by the endoscope body 1502 and irradiated in turn onto the object 1520 under inspection. The endoscope body 1502 includes a camera module 1504 that captures a series of images 1530 of the object 1520 corresponding to the plurality of illumination lights; the camera module 1504 has an optical imaging lens 1505 located at the tip of the endoscope body 1502. A processor in the electronic apparatus executes the endoscopic image processing method: designating a specific region in any image of the series 1530; estimating the offset range that the specific region can produce in the other images of the series 1530; aligning the series 1530 according to the specific region within the offset range; generating a spectral data cube or spectral curve of the specific region from the aligned series; and fusing all or part of the aligned series to generate an image. A memory stores the images, the specific region, the offset ranges, the spectral data and the like. The electronic apparatus may contain, or be connected to, a display device for displaying the images, the specific region, the spectral data and so on; the display can be a Sony medical display LMD-2451MC or another display matching the image resolution. To facilitate manual operation by the user, the electronic apparatus can also be provided with an input device, with optional auxiliary input devices such as a touch panel, keyboard, voice input device, camera or scanner. The object 1520 in fig. 27 is only an example; the actual object may be any other organ or site to which the endoscope is applied, such as the stomach or the intestine.
The electronic device may be implemented with an FPGA (Field-Programmable Gate Array). A System on Chip (SoC), an application-specific integrated circuit (ASIC) or an embedded processor may also be used, a computer may be used directly, or one or more of the above schemes may be combined.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An endoscopic image processing method, comprising:
identifying a specific region in a target image, wherein the target image is an image arbitrarily selected from a series of images of an object under examination taken under illumination of an illumination unit; calculating the offset range which can be generated by the specific area in other images in the series of images; aligning the series of images to the specific region within the offset range.
2. The endoscopic image processing method according to claim 1, wherein identifying a specific region in the target image comprises:
manually designating a certain area in the target image as a specific area; or when shooting the target image, adjusting the endoscope body of the endoscope to enable the target object to fall into a preset area, and taking the preset area as a specific area; or automatically recognizing the area where the target object is located in the target image as the specific area.
3. The endoscopic image processing method according to claim 1 or 2, wherein the specific region comprises a plurality of specific regions, different alignment weights are applied to the plurality of specific regions, and the series of images are aligned within the offset range based on the alignment weights.
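Where several regions carry different alignment weights, one simple reading is a weighted combination of per-region shift estimates. A minimal sketch under that assumption (the claim does not fix the weighting scheme, and all names are hypothetical):

```python
import numpy as np

def combine_region_shifts(region_shifts, alignment_weights):
    shifts = np.asarray(region_shifts, dtype=float)   # (K, 2): per-region (dx, dy)
    w = np.asarray(alignment_weights, dtype=float)    # (K,): higher = trusted more
    return (w[:, None] * shifts).sum(axis=0) / w.sum()

# e.g. a lesion region weighted above two background regions:
shift = combine_region_shifts([(3.0, -1.0), (2.0, 0.0), (5.0, 1.0)],
                              [0.6, 0.2, 0.2])
```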
4. The endoscopic image processing method according to claim 1, wherein calculating the offset range that the specific region can produce within the other images in the series of images comprises:
calculating the offset range of the specific region within the other images based on pixel offset ranges that pre-calibrated pixel points can produce.
5. The endoscopic image processing method according to claim 4, wherein calculating the offset range that the specific region can produce within the other images based on the pixel offset ranges that the pre-calibrated pixel points can produce comprises:
obtaining the pixel offset ranges of at least three pre-calibrated pixel points; and
taking the closed area formed by the intersection of the outermost tangent lines of the pixel offset ranges as the offset range, or taking the outer boundary of the pixel offset ranges as the offset range;
wherein the at least three pixel points include three pixel points located on the boundary of the specific region.
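As an illustration of the tangent-line construction: if each calibrated pixel's offset range is modeled as a circle, the axis-aligned outermost tangent lines intersect in the common bounding box of the circles. A minimal sketch under that assumption, with hypothetical values:

```python
def region_offset_range(pixel_offset_circles):
    # Each entry is (cx, cy, r): a pixel's calibrated offset range as a
    # circle. The box bounded by the outermost axis-aligned tangents is
    # then used as the search window for alignment.
    x_min = min(cx - r for cx, cy, r in pixel_offset_circles)
    x_max = max(cx + r for cx, cy, r in pixel_offset_circles)
    y_min = min(cy - r for cx, cy, r in pixel_offset_circles)
    y_max = max(cy + r for cx, cy, r in pixel_offset_circles)
    return x_min, y_min, x_max, y_max

# e.g. three pre-calibrated pixels on the boundary of the specific region:
window = region_offset_range([(100, 80, 12), (220, 80, 15), (160, 180, 10)])
```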
6. The endoscopic image processing method according to claim 1, wherein aligning the series of images according to the specific region within the offset range comprises:
searching for feature points within the specific region of the target image and within the offset range of the image to be aligned, matching feature point pairs, and calculating an alignment mapping relation from the image to be aligned to the target image based on the feature point pairs.
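A hedged sketch of this step, using ORB features and a RANSAC-fitted homography as stand-ins; the claim does not prescribe a particular detector or mapping model. region_mask and offset_mask are assumed to be 8-bit masks that are nonzero inside the specific region and the offset range, respectively:

```python
import cv2
import numpy as np

def alignment_mapping(target, to_align, region_mask, offset_mask):
    orb = cv2.ORB_create(nfeatures=500)
    kp_t, des_t = orb.detectAndCompute(target, region_mask)    # specific region only
    kp_a, des_a = orb.detectAndCompute(to_align, offset_mask)  # offset range only
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_a), key=lambda m: m.distance)[:50]
    dst = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    src = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects mismatched pairs; H maps the image to be aligned
    # onto the target image.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```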
7. The endoscopic image processing method according to claim 6, further comprising, before searching for feature points: performing distortion correction on the specific region of the target image and on the offset range of the image to be aligned based on a distortion coefficient; and, after the alignment mapping relation is obtained, combining the alignment mapping relation with the distortion correction based on the distortion coefficient, thereby achieving alignment of the image to be aligned to the target image.
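One way to read this claim, sketched with OpenCV's standard lens-distortion model: camera_matrix and dist_coeffs below are hypothetical values standing in for a prior calibration of the endoscope's optical imaging lens, alignment_mapping is the sketch under claim 6, and target, to_align, region_mask, and offset_mask are assumed to exist:

```python
import cv2
import numpy as np

# Hypothetical calibration results; real values would come from
# calibrating the endoscope's lens in advance.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])  # barrel distortion

# Distortion-correct both images before searching for feature points.
und_target = cv2.undistort(target, camera_matrix, dist_coeffs)
und_to_align = cv2.undistort(to_align, camera_matrix, dist_coeffs)

# Estimate the alignment mapping in the corrected domain, then apply it
# there; together with the undistortion this aligns the image to the target.
H = alignment_mapping(und_target, und_to_align, region_mask, offset_mask)
h, w = und_target.shape[:2]
aligned = cv2.warpPerspective(und_to_align, H, (w, h))
```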
8. The endoscopic image processing method according to claim 1, further comprising, before aligning the series of images according to the specific region within the offset range: calculating an offset trajectory of the series of images, and adjusting the alignment mapping relations of some of the images in the series based on the offset trajectory.
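For intuition, if each image's alignment reduces to a translation, the offset trajectory can be median-smoothed and images whose estimated shift strays from it snapped back onto the trajectory. A minimal sketch under that assumption; window and threshold are illustrative values:

```python
import numpy as np

def adjust_by_trajectory(shifts, window=5, threshold=5.0):
    shifts = np.asarray(shifts, dtype=float)          # (N, 2): per-image (dx, dy)
    pad = window // 2
    padded = np.pad(shifts, ((pad, pad), (0, 0)), mode="edge")
    smoothed = np.array([np.median(padded[i:i + window], axis=0)
                         for i in range(len(shifts))])
    # Images far off the smoothed offset trajectory are treated as
    # misaligned and adjusted back onto it; the rest are left alone.
    outliers = np.linalg.norm(shifts - smoothed, axis=1) > threshold
    shifts[outliers] = smoothed[outliers]
    return shifts
```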
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the endoscopic image processing method according to any one of claims 1 to 8.
10. An endoscope system comprising an endoscope body and the electronic device of claim 9, wherein the processor is communicatively coupled to a camera module within the endoscope body.
CN201911399032.2A | 2019-12-30 | 2019-12-30 | Endoscope image processing method, electronic equipment and endoscope system | Active | CN111161852B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911399032.2A | CN111161852B (en) | 2019-12-30 | 2019-12-30 | Endoscope image processing method, electronic equipment and endoscope system

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201911399032.2A | CN111161852B (en) | 2019-12-30 | 2019-12-30 | Endoscope image processing method, electronic equipment and endoscope system

Publications (2)

Publication Number | Publication Date
CN111161852A | 2020-05-15
CN111161852B | 2023-08-15

Family

ID=70559586

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201911399032.2A | Active | CN111161852B (en) | 2019-12-30 | 2019-12-30 | Endoscope image processing method, electronic equipment and endoscope system

Country Status (1)

Country | Link
CN (1) | CN111161852B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5432543A (en) * | 1992-03-05 | 1995-07-11 | Olympus Optical Co., Ltd. | Endoscopic image processing device for estimating three-dimensional shape of object based on detection of same point on a plurality of different images
JP2001136540A (en) * | 1999-11-05 | 2001-05-18 | Olympus Optical Co Ltd | Image processor
US6842196B1 (en) * | 2000-04-04 | 2005-01-11 | Smith & Nephew, Inc. | Method and system for automatic correction of motion artifacts
JP2010041418A (en) * | 2008-08-05 | 2010-02-18 | Olympus Corp | Image processor, image processing program, image processing method, and electronic apparatus
US20120092472A1 (en) * | 2010-10-15 | 2012-04-19 | Olympus Corporation | Image processing device, method of controlling image processing device, and endoscope apparatus
CN104411229A (en) * | 2012-06-28 | 2015-03-11 | Olympus Corporation | Image processing device, image processing method, and image processing program
CN103035004A (en) * | 2012-12-10 | 2013-04-10 | Zhejiang University | Circular target centralized positioning method under large visual field
CN105931237A (en) * | 2016-04-19 | 2016-09-07 | Beijing Institute of Technology | Image calibration method and system
US20190080439A1 (en) * | 2017-09-14 | 2019-03-14 | Canon U.S.A., Inc. | Distortion measurement and correction for spectrally encoded endoscopy
US20190298154A1 (en) * | 2018-03-29 | 2019-10-03 | Canon USA Inc. | Calibration Tool for Rotating Endoscope

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
钱渠 (Qian Qu): "Research on image unwrapping of industrial pipelines based on endoscopic video" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114119735A (en) * | 2020-08-28 | 2022-03-01 | Tencent Technology (Shenzhen) Co., Ltd. | Method, device and equipment for determining target area in image
CN113068015A (en) * | 2021-03-24 | 2021-07-02 | 南京锐普创科科技有限公司 | Endoscope image distortion correction system based on optical fiber probe
CN113344987A (en) * | 2021-07-07 | 2021-09-03 | North China Electric Power University (Baoding) | Infrared and visible light image registration method and system for power equipment under complex background

Also Published As

Publication number | Publication date
CN111161852B (en) | 2023-08-15

Similar Documents

Publication | Title
JP6150583B2 (en) | Image processing apparatus, endoscope apparatus, program, and operation method of image processing apparatus
CN111275041B (en) | Endoscope image display method and device, computer equipment and storage medium
JP6588462B2 (en) | Wide-field retinal image acquisition system and method
US20220125280A1 (en) | Apparatuses and methods involving multi-modal imaging of a sample
JP6952214B2 (en) | Endoscope processor, information processing device, endoscope system, program and information processing method
CN111161852A (en) | Endoscope image processing method, electronic equipment and endoscope system
CN101420897A (en) | Endoscope insertion direction detecting device and endoscope insertion direction detecting method
CN113543694A (en) | Medical image processing device, processor device, endoscope system, medical image processing method, and program
JP2014144034A (en) | Image processing apparatus, endoscope device, image processing method and image processing program
JP6150554B2 (en) | Image processing apparatus, endoscope apparatus, operation method of image processing apparatus, and image processing program
CN115209782A (en) | Endoscope system and lumen scanning method based on endoscope system
JP7084546B2 (en) | Endoscope system
WO2014208287A1 (en) | Detection device, learning device, detection method, learning method, and program
CN113016006A (en) | Apparatus and method for wide field hyperspectral imaging
JP6150555B2 (en) | Endoscope apparatus, operation method of endoscope apparatus, and image processing program
JP2010220794A (en) | Endoscopic image rotation apparatus and method, and program
CN115311239B (en) | Virtual ruler construction method, system, and measurement method for video image measurement
JP7455716B2 (en) | Endoscope processor and endoscope system
WO2019203006A1 (en) | Endoscope device, endoscope processor device, and endoscope image display method
CN107529962B (en) | Image processing apparatus, image processing method, and recording medium
US20240000299A1 (en) | Image processing apparatus, image processing method, and program
JP7142754B1 (en) | How to detect images of esophageal cancer using convolutional neural networks
JP2023129258A (en) | Method for detecting objects in hyperspectral imaging
Obukhova et al. | Modern methods and algorithms in digital processing of endoscopic images
CN116977411B (en) | Endoscope moving speed estimation method and device, electronic equipment and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
