Technical Field
The present invention belongs to the technical field of medical image processing, and in particular relates to a method for intelligent identification of bleeding points in capsule gastroscopy images based on Adaboost machine learning.
Background
Diseases of the digestive tract tend to remain latent and hard to detect in their early stages and become difficult to cure once advanced. A patient who misses the optimal window for early diagnosis and timely treatment is likely to suffer from such a disease for a long time. Compared with treating the disease after it has developed, improving detection methods for digestive tract diseases is therefore of great significance for reducing the burden these diseases impose.
However, the detection methods currently used in hospitals are mostly direct examinations with traditional insertion-type gastroscopes and proctoscopes, whose serious drawback is the severe discomfort they cause the patient. The wireless capsule endoscope was developed to reduce this suffering. Typically, a wireless capsule endoscope transmits images of the gastrointestinal tract wall at a rate of at least 2 frames per second and remains in the body for about 8 hours, so a single examination produces tens of thousands of images. Among this huge number of images, doctors are interested only in those images, or regions within images, showing pathological features such as bleeding or ulcers, and these useful images account for a very small proportion. Screening them manually would be an extremely laborious and tedious task. Computer-aided diagnosis with intelligent identification of bleeding points is therefore of far-reaching significance.
Summary of the Invention
The purpose of the invention is, for capsule gastroscopy images containing bleeding points, to use the Adaboost machine learning algorithm to locate and mark the bleeding points and to filter out non-bleeding regions, so that doctors can carry out diagnosis more quickly and conveniently.
The technical solution adopted by the present invention is as follows:
First, the input image is converted from RGB to HSI color space, and the mean of each of the three HSI channels is extracted per image to form a three-dimensional image-level feature vector, which Adaboost uses to train an image classifier. Second, because capsule gastroscopy images contain a metal frame as well as over-bright and over-dark regions, the present invention filters out the pixels in these regions by threshold segmentation and extracts the H, S, I, A, and M five-channel color data of the remaining pixels to construct five-dimensional feature vectors, which Adaboost uses to train a pixel classifier. Finally, the present invention post-processes the images preliminarily recognized by the pixel classifier, filtering out misjudged pixels to improve the recognition result.
The technical solution adopted by the present invention to solve this technical problem is as follows:
Step (1): Input the capsule gastroscopy bleeding image set Dt, then classify Dt according to the overall color depth of each image into a normal set DtA and a dark-colored set DtB.
Step (2): Input the images in Dt in turn and convert each from RGB to HSI color space. In the HSI color space, compute the mean of each of the three channels of every image in Dt to construct its feature vector; for example, the image-level feature vector of the i-th image is:
Fimg(i) = {mean(Hi), mean(Si), mean(Ii)}    (1)
Step (3): Assemble the image-level feature vectors of all images in Dt obtained in step (2) into a single matrix, yielding an N×3 feature vector matrix Timg, where N is the number of images in the training set Dt.
Step (4): Based on the category assigned to each image in step (1), build a corresponding N×1 label matrix Timg_label; for example, if the i-th image has normal color, Timg_label(i) = 1, and if its color is dark, Timg_label(i) = −1.
Step (5): Construct a valid-data window by setting the horizontal and vertical coordinate bounds of the valid image region, and crop every image of the set input in step (1) to remove the invalid corner regions. The specific window coordinates are set according to the gastroscope camera parameters and the usage environment.
Step (6): Based on the I-channel value of each pixel, filter out the over-bright and over-dark regions of every image obtained in step (5) by thresholding:
T1 ≤ I ≤ T2    (2)
where T1 and T2 are the low and high thresholds, respectively. Pixels outside this range are filtered out; they are directly labeled as non-bleeding regions and do not take part in subsequent training.
Step (7): Extract pixel feature vectors from the set DtA obtained in step (1). First, apply the preprocessing of steps (5) and (6) to every image in DtA; then, in the HSI, LAB, and CMYK color spaces respectively, extract the H, S, I, A, and M five-channel color data of each pixel of the preprocessed image to form that pixel's pixel-level feature vector:
Fpixel_tA(i) = {Hi, Si, Ii, Ai, Mi}    (3)
where the H, S, and I channel data can be computed from RGB by the standard conversion (with R, G, B normalized to [0, 1]):
I = (R + G + B)/3,  S = 1 − 3·min(R, G, B)/(R + G + B),
H = θ if B ≤ G, otherwise H = 2π − θ, with θ = arccos{[(R − G) + (R − B)] / [2·((R − G)² + (R − B)(G − B))^(1/2)]}    (4)
The A channel (from the CIELAB color space) is computed as follows:
A = 500(f(X/0.950456) − f(Y))    (5)
where X and Y are the CIE XYZ tristimulus values computed from RGB, 0.950456 is the X component of the D65 reference white, and f is the standard CIELAB mapping:
f(t) = t^(1/3) if t > 0.008856, otherwise f(t) = 7.787t + 16/116    (6)
The M channel data can be computed from RGB (normalized to [0, 1]) as in the CMYK color space:
M = (1 − G − K)/(1 − K)    (7)
where
K = 1 − max(R, G, B)    (8)
Step (8): Similarly to step (7), extract pixel feature vectors from the set DtB obtained in step (1) to obtain pixel-level feature vectors:
Fpixel_tB(i) = {Hi, Si, Ii, Ai, Mi}    (9)
Step (9): Determine the category of every pixel in steps (7) and (8) from the mask image corresponding to each image in the dataset, and build two label matrices Tpixel_label_A and Tpixel_label_B.
Step (10): Train the Adaboost image classifier with the feature vector matrix Timg from step (3) and the label matrix Timg_label from step (4), and obtain the best-performing classifier by parameter adjustment. During training, the parameters that must be tuned manually are the number of Adaboost training rounds K, the regularization term v, and the maximum number of split points S of the CART decision tree. S determines the strength of the weak classifier produced in each round, while K and v determine the performance of the final boosted strong classifier. The larger S is, the smaller K can be, but an excessive S causes the strong classifier to overfit; the smaller v is, the weaker the classifier's overfitting, but the larger K must be. The three parameters therefore have to be set jointly, and a satisfactory setting is obtained by traversing their combinations within an appropriate range, as sketched below.
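Purely as an illustration of this joint traversal, a minimal Matlab sketch follows (Matlab is the platform used in the embodiment below, where v corresponds to the 'LearnRate' shrinkage parameter); the candidate grids for K, v, and S are hypothetical stand-ins for the "appropriate range" of the invention, X and y are assumed training features and labels, and 5-fold cross-validated loss is assumed as the selection criterion.

```matlab
% Illustrative joint traversal of K (rounds), v (learning rate), S (max splits),
% scored by 5-fold cross-validated misclassification loss.
bestLoss = inf;  best = [NaN NaN NaN];
for S = [3 6 10]                         % hypothetical candidate split counts
    for K = [15 50 100]                  % hypothetical candidate round counts
        for v = [0.05 0.1 0.5]           % hypothetical candidate learning rates
            t = templateTree('MaxNumSplits', S);
            cvModel = fitensemble(X, y, 'AdaBoostM1', K, t, ...
                                  'LearnRate', v, 'KFold', 5);
            L = kfoldLoss(cvModel);      % mean cross-validated error
            if L < bestLoss
                bestLoss = L;  best = [K v S];
            end
        end
    end
end
```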
Step (11): Train the Adaboost pixel classifier with the pixel-level feature vector matrices Fpixel_tA and Fpixel_tB from steps (7) and (8) and the label matrices Tpixel_label_A and Tpixel_label_B from step (9), obtaining the best-performing classifier by parameter adjustment. The training method of the Adaboost pixel classifier is the same as in step (10).
Step (12): Classify each image with the image classifier trained in step (10), then identify the bleeding pixels in the gastroscopy image with the pixel classifier obtained in step (11), and remove bleeding pixels belonging to connected regions whose area is too small.
Step (13): Based on the detection result of step (12), take the centroid coordinates of each bleeding connected region as the center of a circle whose area is determined by the connected-region area, and display the image inside that circular region to facilitate the doctor's diagnosis.
Beneficial effects of the present invention:
The bleeding point recognition model established by the present invention on the basis of the Adaboost machine learning algorithm can accurately identify the vast majority of bleeding points in capsule gastroscopy images and filter out non-bleeding regions; even for the few images of poor imaging quality, it can still identify the key bleeding points. This model is therefore of considerable practical value in actual diagnosis.
Brief Description of the Drawings
Fig. 1 is a structural block diagram of the intelligent recognition of bleeding points in capsule gastroscopy images based on Adaboost machine learning according to the present invention.
Fig. 2 is a schematic diagram of a bleeding point image from the capsule gastroscopy bleeding point test set together with its position calibration.
Fig. 3 is a schematic diagram of the image display effect of bleeding point recognition.
Detailed Description
The method of the present invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, a method for intelligent identification of bleeding points in capsule gastroscopy images based on Adaboost machine learning is implemented by the following steps:
Step (1): Input the collected set Dt of 1893 capsule gastroscopy images containing bleeding points together with the corresponding mask images, then manually classify Dt according to the overall color depth of each image into a normal set DtA and a dark-colored set DtB.
Step (2): In the Matlab environment, input the images in Dt in turn and convert each from RGB to HSI color space; then, in the HSI color space, compute the mean of each of the three channels of every image in Dt to construct its feature vector, e.g. the image-level feature vector of the i-th image:
Fimg(i) = {mean(Hi), mean(Si), mean(Ii)}    (1)
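A minimal Matlab sketch of this step is given below. Matlab has no built-in RGB-to-HSI conversion, so the standard formulas (cf. formula (4) above) are implemented directly; the helper name extractImageFeature is hypothetical.

```matlab
function F = extractImageFeature(rgbImage)
% Image-level feature of formula (1): the per-channel HSI means.
rgb = im2double(rgbImage);                   % normalize to [0,1]
R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);

I = (R + G + B) / 3;                         % intensity
S = 1 - 3 .* min(min(R, G), B) ./ (R + G + B + eps);   % saturation
theta = acos(0.5*((R - G) + (R - B)) ./ ...
        (sqrt((R - G).^2 + (R - B).*(G - B)) + eps));
H = theta;
H(B > G) = 2*pi - theta(B > G);              % hue in [0, 2*pi]
H = H / (2*pi);                              % normalize hue to [0,1]

F = [mean(H(:)), mean(S(:)), mean(I(:))];    % 1x3 image-level feature vector
end
```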
Step (3): Assemble the image-level feature vectors of all images in Dt obtained in step (2) into a single matrix, yielding an N×3 feature vector matrix Timg, where N is the number of images in the training set Dt.
Step (4): Based on the category assigned to each image by the manual classification of step (1), build a corresponding N×1 label matrix Timg_label; for example, if the i-th image has normal color, Timg_label(i) = 1, and if its color is dark, Timg_label(i) = −1.
Step (5): Construct a valid-data window by setting the horizontal and vertical coordinate bounds of the valid image region, and crop every image of the set input in step (1) to remove the invalid corner regions. The specific window coordinates are set according to the gastroscope camera parameters and the usage environment. In this embodiment, the pixel coordinate range set for cutting away the invalid image regions is given by formula (2), where x and y are the pixel coordinate values.
Step (6): Based on the I-channel value of each pixel, filter out the over-bright and over-dark regions of every image obtained in step (5) by thresholding:
T1 ≤ I ≤ T2    (3)
where T1 and T2 are the low and high thresholds, respectively; in this embodiment, T1 = 0.235 and T2 = 0.863. Pixels outside this range are filtered out; they are directly labeled as non-bleeding regions and do not take part in subsequent training.
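A minimal Matlab sketch of the preprocessing of steps (5) and (6) follows; the crop bounds x1, x2, y1, y2 are hypothetical placeholders for the camera-specific window of formula (2), and 'frame.jpg' is an assumed input file name.

```matlab
% Steps (5)-(6): crop the invalid corners, then mask over-bright/over-dark pixels.
T1 = 0.235;  T2 = 0.863;               % intensity thresholds of this embodiment

rgb = im2double(imread('frame.jpg'));  % hypothetical input frame
cropped = rgb(y1:y2, x1:x2, :);        % step (5): keep only the valid window

I = mean(cropped, 3);                  % I channel: (R+G+B)/3
validMask = (I >= T1) & (I <= T2);     % step (6): pixels kept for training
% Pixels with validMask == false are labeled non-bleeding and excluded.
```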
Step (7): Extract pixel feature vectors from the set DtA obtained in step (1). First, apply the preprocessing of steps (5) and (6) to every image in DtA; then, in the HSI, LAB, and CMYK color spaces respectively, extract the H, S, I, A, and M five-channel color data of each pixel of the preprocessed image to form that pixel's pixel-level feature vector:
Fpixel_tA(i) = {Hi, Si, Ii, Ai, Mi}    (4)
where the H, S, and I channel data can be computed from RGB by the standard conversion (with R, G, B normalized to [0, 1]):
I = (R + G + B)/3,  S = 1 − 3·min(R, G, B)/(R + G + B),
H = θ if B ≤ G, otherwise H = 2π − θ, with θ = arccos{[(R − G) + (R − B)] / [2·((R − G)² + (R − B)(G − B))^(1/2)]}    (5)
The A channel (from the CIELAB color space) is computed as follows:
A = 500(f(X/0.950456) − f(Y))    (6)
where X and Y are the CIE XYZ tristimulus values computed from RGB, 0.950456 is the X component of the D65 reference white, and f is the standard CIELAB mapping:
f(t) = t^(1/3) if t > 0.008856, otherwise f(t) = 7.787t + 16/116    (7)
The M channel data can be computed from RGB (normalized to [0, 1]) as in the CMYK color space:
M = (1 − G − K)/(1 − K)    (8)
where
K = 1 − max(R, G, B)    (9)
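A Matlab sketch of the five-channel pixel feature extraction is given below; it reuses the HSI conversion from the sketch after formula (1) and assumes the Image Processing Toolbox functions rgb2lab and applycform/makecform for the A and M channels (channel scaling may need adjusting to match the training data).

```matlab
% Step (7): build the {H, S, I, A, M} feature matrix, one row per valid pixel.
rgb = im2double(cropped);                    % preprocessed image from steps (5)-(6)
R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);

I = (R + G + B) / 3;
S = 1 - 3 .* min(min(R, G), B) ./ (R + G + B + eps);
theta = acos(0.5*((R - G) + (R - B)) ./ ...
        (sqrt((R - G).^2 + (R - B).*(G - B)) + eps));
H = theta;  H(B > G) = 2*pi - theta(B > G);  H = H / (2*pi);

lab  = rgb2lab(rgb);   A = lab(:,:,2);       % CIELAB A channel
cmyk = applycform(rgb, makecform('srgb2cmyk'));
M = cmyk(:,:,2);                             % CMYK M channel

Fpixel = [H(validMask), S(validMask), I(validMask), ...
          A(validMask), M(validMask)];       % one 5-D vector per kept pixel
```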
Step (8): Similarly to step (7), extract pixel feature vectors from the set DtB obtained in step (1) to obtain pixel-level feature vectors:
Fpixel_tB(i) = {Hi, Si, Ii, Ai, Mi}    (10)
Step (9): Determine the category of every pixel in steps (7) and (8) from the mask image corresponding to each image in the dataset, and build two label matrices Tpixel_label_A and Tpixel_label_B.
Step (10): Call the classificationLearner machine learning toolbox bundled with Matlab 2015b, and train the Adaboost image classifier with the feature vector matrix Timg from step (3) and the label matrix Timg_label from step (4). In this embodiment, the maximum number of splits controls the strength of the weak learner obtained in each Adaboost training round, the number of learners controls the efficiency of Adaboost training and the performance of the resulting strong classifier, and the learning rate controls the strong classifier's resistance to overfitting. After repeated training and parameter tuning, the image classifier parameters are set to a maximum of 3 split nodes, 15 weak learners, and a learning rate of 0.1.
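The same model can also be trained programmatically instead of through the classificationLearner app; a sketch using fitensemble (available in Matlab 2015b) with the image-classifier settings just stated:

```matlab
% Train the Adaboost image classifier of this embodiment:
% max 3 split nodes, 15 weak learners, learning rate 0.1.
weakLearner = templateTree('MaxNumSplits', 3);
imgModel = fitensemble(Timg, Timg_label, 'AdaBoostM1', 15, weakLearner, ...
                       'LearnRate', 0.1);

% Predict the class (+1 normal color / -1 dark color) of a new feature vector:
label = predict(imgModel, Fimg_new);         % Fimg_new: 1x3 vector per formula (1)
```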
Step (11): Train the Adaboost pixel classifier with the pixel-level feature vector matrices Fpixel_tA and Fpixel_tB from steps (7) and (8) and the label matrices Tpixel_label_A and Tpixel_label_B from step (9), obtaining the best-performing classifier by parameter adjustment; the training method is the same as in step (10). In this embodiment, the pixel classifier for the normal color set DtA uses a maximum of 6 split nodes, 15 weak classifiers, and a learning rate of 0.1, while the pixel classifier for the dark color set DtB uses a maximum of 10 split nodes, 100 weak classifiers, and a learning rate of 0.1.
Step (12): Classify each image with the image classifier trained in step (10), then identify the bleeding pixels in the gastroscopy image with the pixel classifier obtained in step (11), and remove bleeding pixels belonging to connected regions whose area is too small. In this embodiment, the specific procedure is as follows: the pixel classifier obtained in step (11) identifies the pixels of the gastroscopy image one by one, yielding a label for each pixel; from these labels and the pixel coordinates, a binary image is produced in which pixels labeled bleeding are shown white and pixels labeled non-bleeding are shown black. The number, area, and centroid of the white connected regions are then computed on the Matlab platform. By setting a reasonable area threshold, white regions whose area is too small are corrected to black (i.e. their bleeding labels are corrected to non-bleeding), completing the correction.
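A minimal Matlab sketch of this correction, assuming bw is the binary prediction image (white = bleeding) and areaThresh is the hypothetical area threshold:

```matlab
% Step (12) post-processing: relabel small white connected regions as non-bleeding.
cc = bwconncomp(bw);                           % connected components (8-connected)
stats = regionprops(cc, 'Area', 'Centroid');   % per-region area and centroid
for k = 1:cc.NumObjects
    if stats(k).Area < areaThresh
        bw(cc.PixelIdxList{k}) = 0;            % area too small: set region to black
    end
end
```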
Step (13): Based on the detection result of step (12), take the centroid coordinates of each bleeding connected region as the circle center, with the circle area determined by the connected-region area, and display the image inside that circular region to facilitate the doctor's diagnosis. In this embodiment, the white regions of the binary image obtained in step (12) are the identified bleeding point regions. For each such region, displaying the corresponding part of the actual image inside a circle centered at the region's centroid and covering twice the region's area gives a good display of the bleeding point.
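One possible sketch of this display step: a circle of twice the region area has radius sqrt(2·Area/π), and viscircles (Image Processing Toolbox) can overlay it on the original frame; rgbOriginal is an assumed variable holding that frame.

```matlab
% Step (13): mark each surviving bleeding region with a circle of twice its area.
stats = regionprops(bwconncomp(bw), 'Area', 'Centroid');
imshow(rgbOriginal); hold on;
for k = 1:numel(stats)
    r = sqrt(2 * stats(k).Area / pi);          % circle area = 2 * region area
    viscircles(stats(k).Centroid, r, 'Color', 'g');
end
hold off;
```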
To evaluate the performance of the proposed method, this embodiment is tested on a capsule gastroscopy bleeding point test set built jointly by our institute and a partner company. The test set contains 1863 bleeding point images, each paired with a bleeding point position calibration map, as shown in Fig. 2.
In the test, the trained model processes the input test images, and the model's image-level recognition performance is evaluated by how well the automatically recognized images identify the bleeding points and filter out the non-bleeding regions. The test results of this example are shown in Fig. 3. It can be seen that, with the proposed method, the recognized and optimally displayed bleeding points agree well with the actual bleeding point positions, which can assist the doctor in making a correct diagnosis.
Furthermore, to evaluate the performance of the proposed bleeding point detection algorithm, the following analysis indicators are adopted in the tests of this embodiment:
① Accuracy:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
where TP (True Positive) is the number of actual lesions diagnosed as lesions, FP (False Positive) is the number of normal tissues misjudged as lesions, TN (True Negative) is the number of actual normal tissues diagnosed as normal, and FN (False Negative) is the number of actual lesions misjudged as normal tissue.
② Specificity:
Specificity = TN / (TN + FP)
Specificity mainly reflects the ability to distinguish non-lesions; the larger its value, the better.
Table 1 lists the performance of bleeding point detection on each indicator. It can be seen that the proposed method achieves high bleeding point detection accuracy and specificity, which can greatly help improve the efficiency of the doctor's diagnosis.
Table 1. Test results on the test image set