CN114022470A - Segmentation method of nematode experimental image - Google Patents


Info

Publication number
CN114022470A
Authority
CN
China
Prior art keywords
nematode
image
network
experimental
segmentation
Prior art date: 2021-11-16
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111352070.XA
Other languages
Chinese (zh)
Inventor
陈维洋
李守聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2021-11-16
Filing date: 2021-11-16
Publication date: 2022-02-08
Application filed by Qilu University of Technology
Priority to CN202111352070.XA
Publication of CN114022470A
Legal status: Pending

Abstract

Translated from Chinese

The invention discloses a method for segmenting nematode experimental images, belonging to the technical field of biological image processing. The technical problem to be solved by the invention is how to improve the accuracy of image segmentation. The technical scheme is as follows: S1, download the BBBC010 nematode experimental image dataset and divide it into a training set and a test set; S2, use the bottom-hat transform to remove uneven illumination from the nematode experimental images; S3, train the improved generative adversarial network model on the training set and perform segmentation, segmenting the nematodes in the experimental images to obtain images in which the non-nematode and nematode parts are labeled 0 and 1 respectively; S4, obtain the ground-truth values of the corresponding nematode experimental images in the test set, evaluate the segmentation results according to the segmentation performance evaluation parameters, and save the evaluation results to Excel; S5, swap the nematode experimental images of the test set and the training set and repeat steps S2, S3 and S4; S6, compute the mean accuracy to obtain the average segmentation accuracy for nematode experimental images.

Description

Translated from Chinese

A segmentation method for nematode experimental images

Technical Field

The invention relates to the technical field of biological image processing, and in particular to a method for segmenting nematode experimental images.

Background Art

In today's society, images are an indispensable way for humans to obtain information. Image segmentation divides an image into several meaningful parts according to certain rules, such that each part satisfies the requirements of some characteristic while merging any two adjacent parts would destroy that consistency. With the development of computer technology, image segmentation is widely used in many fields, such as autonomous driving, medical image analysis, and biological image analysis. Image segmentation is an important prerequisite for a series of subsequent operations, and its accuracy directly determines the results of those operations.

In recent years, image segmentation algorithms have emerged one after another, for example: threshold-based methods, such as the Otsu method; region-based methods; edge-detection-based methods; methods based on wavelet analysis and the wavelet transform; methods based on genetic algorithms; methods based on active contour models; and deep-learning-based methods.

A common image segmentation dataset is the BBBC010 dataset, which contains 100 different nematode experimental images. These 100 nematode images come from 384-well positive- and negative-control plates. The images are controls selected from a screen that uses nematodes to search for new anti-infective drugs. The animals were exposed to the pathogen Enterococcus faecalis and were either left untreated or treated with ampicillin, an antibiotic known to act against this pathogen. Untreated (negative control) nematodes mostly show a "dead" hermaphrodite phenotype: the worms are rod-shaped with a slightly uneven texture. Treated (ampicillin, positive control) nematodes mostly show a "live" phenotype: the worms are curved in shape with a smooth texture. In the BBBC010 dataset, the nematode experimental images are affected by uneven illumination, which strongly degrades segmentation accuracy, and the segmentation result in turn has a major impact on the subsequent analysis of the nematode images; how to improve the accuracy of image segmentation is therefore a technical problem that urgently needs to be solved.

Summary of the Invention

The technical task of the present invention is to provide a method for segmenting nematode experimental images that solves the problem of how to improve the accuracy of image segmentation.

The technical task of the present invention is achieved in the following manner by a method for segmenting nematode experimental images, the method being as follows:

S1. Download the BBBC010 nematode experimental image dataset and divide it into a training set and a test set;

S2. Use the bottom-hat transform to remove uneven illumination from the nematode experimental images;

S3. Train the improved generative adversarial network model on the training set and perform segmentation, segmenting the nematodes in the nematode experimental images to obtain images in which the non-nematode and nematode parts are labeled 0 and 1 respectively;

S4. Obtain the ground-truth values of the corresponding nematode experimental images in the test set, evaluate the segmentation results according to the segmentation performance evaluation parameters, and save the evaluation results to Excel;

S5. Swap the nematode experimental images of the test set and the training set, and repeat steps S2, S3 and S4;

S6. Compute the mean accuracy to obtain the average segmentation accuracy for nematode experimental images.

Preferably, the improved generative adversarial network model includes a generator network and a discriminator network; the generator network includes 8 convolutional layers and 8 deconvolutional layers.

More preferably, the generator network uses a U-Net architecture neural network to segment the nematode images; the U-Net architecture includes a contracting path and an expanding path.

More preferably, the contracting path is as follows:

(1) Extract features of the input nematode experimental images at different scales through convolution and pooling operations. During feature extraction, the convolution kernel size of the entire improved generative adversarial network model is set to 3×3 to minimize the complexity of the neural network while ensuring the prediction and segmentation accuracy of the network;

(2) Each level convolves the nematode experimental image with two 3×3 convolutional layers, and during training of the improved generative adversarial network model a batchnorm layer is used to maintain the input of each layer of the neural network;

(3) A pooling layer halves the size of the nematode experimental image, and 4 downsampling operations are performed.

More preferably, the expanding path is as follows:

① Restore, through an upsampling operation from the bottom layer, the nematode experimental image reduced by the contracting path;

② Fuse the restored nematode experimental image with the nematode experimental image of the corresponding layer of the contracting path through a concat operation, and use a dropout optimization operation to improve image accuracy, randomly deactivating half of the nodes each time;

③ After the fused nematode experimental image has been convolved twice with 3×3 convolutions, repeat steps ① and ② until the nematode experimental image is restored to the input size;

④ Obtain the final output segmentation prediction map through a 1×1 convolutional layer.
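
Taken together, the contracting and expanding paths describe a U-Net-style generator. The sketch below (PyTorch) is one possible reading of that structure: the two 3×3 convolutions per level, the batchnorm layers, the four pooling-based downsamplings, the concat skip connections, the dropout that randomly deactivates half the nodes, and the final 1×1 convolution come from the text above, while the channel widths, class names, and the sigmoid output are illustrative assumptions, not patent specifics.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions per level, each followed by batchnorm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=64):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        # Contracting path: feature extraction at several scales; each
        # pooling step halves the image size (four downsamplings in total).
        self.downs = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.downs.append(double_conv(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(chs[-1], chs[-1] * 2)
        # Expanding path: upsample, concat with the matching contracting-path
        # feature map, apply dropout, then two more 3x3 convolutions.
        self.ups = nn.ModuleList()
        self.up_convs = nn.ModuleList()
        prev = chs[-1] * 2
        for c in reversed(chs):
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.up_convs.append(double_conv(2 * c, c))
            prev = c
        self.dropout = nn.Dropout2d(0.5)  # randomly deactivates half the nodes
        self.head = nn.Conv2d(chs[0], out_ch, 1)  # final 1x1 convolution

    def forward(self, x):
        skips = []
        for down in self.downs:
            x = down(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottom(x)
        for up, conv, skip in zip(self.ups, self.up_convs, reversed(skips)):
            x = up(x)
            x = self.dropout(torch.cat([skip, x], dim=1))
            x = conv(x)
        return torch.sigmoid(self.head(x))  # per-pixel nematode probability
```

For an input whose height and width are divisible by 16, e.g. `UNetGenerator()(torch.randn(1, 1, 256, 256))`, this sketch returns a probability map of the same spatial size.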

More preferably, the discriminator network adopts a neural network composed of five convolutional layers, and the discriminator network is used to perform binary (real/fake) classification of the input data.

More preferably, the binary real/fake discrimination of the input data by the discriminator network is as follows:

(1) The input layer combines the BBBC010 mask image with the segmented image generated by the generator network through a concat operation and feeds the result into the convolutional network;

(2) The result is convolved by three 4×4 convolutional layers with a stride of 2;

(3) The result is convolved by two 4×4 convolutional layers with a stride of 1;

(4) At the end of the discriminator network, a sigmoid activation function is used to output a recognition result of 0 or 1.
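
A minimal PyTorch sketch of this discriminator follows. The five 4×4 convolutional layers, their strides, the concat input, and the final sigmoid output are taken from steps (1) to (4) above; the channel widths, LeakyReLU activations, and paddings are assumptions.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    # Five convolutional layers: three 4x4/stride-2, two 4x4/stride-1,
    # ending in a sigmoid that scores the input as real (1) or fake (0).
    def __init__(self, in_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, stride=1, padding=1),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, 1, 4, stride=1, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, a, b):
        # The input layer concatenates its two inputs (e.g. the mask image
        # and the generated segmentation) along the channel dimension.
        return self.net(torch.cat([a, b], dim=1))
```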

Preferably, in step S3 the improved generative adversarial network model is trained with the training set as follows:

S301. Generate a predicted nematode experimental picture through the generator network;

S302. Send the nematode experimental segmentation picture predicted by the generator network and the real nematode experimental label picture to the discriminator network for discrimination:

① If the generated nematode experimental segmentation image is the same as the real nematode segmentation image, the discriminator outputs a number close to 1;

② If the generated nematode experimental segmentation image differs from the real nematode segmentation image, a number close to 0 is output;

S303. The generator network continues to generate predicted nematode experimental segmentation images according to the output of the discriminator network;

S304. When the discriminator network cannot distinguish whether the nematode experimental image sent to it was output by the generator network or is a real nematode segmentation image, the network has reached a Nash equilibrium and training of the segmentation network is complete.

Preferably, during training of the improved generative adversarial network model with the training set, the ReLU activation function is used together with the Adam optimization algorithm with a learning rate of 0.0002; the outputs of each layer are batch-normalized to reduce the dependency between layers and improve the independence of the network layers.
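
Steps S301 to S304, together with the optimizer settings above, could be wired up roughly as in the following sketch. Only the Adam optimizer and its 0.0002 learning rate come from the text; the epoch count, the data loader yielding (image, ground-truth mask) pairs, the weight `lam` on the L1 term, and the `generator`/`discriminator` objects (e.g. the hypothetical `UNetGenerator` and `Discriminator` sketched earlier) are assumptions.

```python
import torch
import torch.nn as nn

def train(generator, discriminator, loader, epochs=200, lam=100.0):
    bce, l1 = nn.BCELoss(), nn.L1Loss()
    # Adam with learning rate 0.0002, as stated above (betas are assumed).
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    for _ in range(epochs):
        for image, true_mask in loader:
            fake_mask = generator(image)  # S301: predicted segmentation

            # S302: the discriminator scores real label images and predictions.
            d_real = discriminator(image, true_mask)
            d_fake = discriminator(image, fake_mask.detach())
            loss_d = bce(d_real, torch.ones_like(d_real)) + \
                     bce(d_fake, torch.zeros_like(d_fake))
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()

            # S303: the generator updates against the discriminator's output;
            # an L1 term pulls its predictions toward the ground-truth mask.
            d_fake = discriminator(image, fake_mask)
            loss_g = bce(d_fake, torch.ones_like(d_fake)) \
                     + lam * l1(fake_mask, true_mask)
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()
    # S304: training is considered done once the discriminator can no longer
    # tell generated segmentations from real ones (its scores hover near 0.5).
```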

Preferably, the loss function of the improved generative adversarial network model is:

$$G^{*} = \arg\min_{G}\max_{D}\;\mathcal{L}_{cGAN}(G, D) + \lambda\,\mathcal{L}_{L1}(G)$$

where

$$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,\,y \sim P_{data}}\left[\log D(x, y)\right] + \mathbb{E}_{x \sim P_{data},\,z \sim P_{z}(z)}\left[\log\left(1 - D(x, G(x, z))\right)\right]$$

$$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,\,y,\,z}\left[\lVert y - G(x, z)\rVert_{1}\right]$$

The improved generative adversarial network model must learn the mapping from an image x and random noise z to an output image y, G: {x, z} → y. P_data denotes the real data distribution, z is drawn from the prior distribution P_z(z), and E denotes the expected value. The loss function of the improved model combines the cross-entropy loss with a traditional loss, the L1 distance. G denotes the generator network, which tries to minimize the loss function; D denotes the discriminator network, which tries to maximize it.
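
Read as code, the objective maps term by term onto batch estimates of the expectations; the sketch below shows that correspondence. Minimizing binary cross-entropy against a target of 1 corresponds to maximizing log D(·). The value of the weight `lam` is an assumption, as a λ value is not stated in the text.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real, d_fake):
    # D maximizes E[log D(x, y)] + E[log(1 - D(x, G(x, z)))].
    return F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
           F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))

def generator_loss(d_fake, fake_mask, true_mask, lam=100.0):
    # G minimizes the cross-entropy (GAN) term plus lam * ||y - G(x, z)||_1.
    return F.binary_cross_entropy(d_fake, torch.ones_like(d_fake)) + \
           lam * F.l1_loss(fake_mask, true_mask)
```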

The segmentation method for nematode experimental images of the present invention has the following advantages:

(1) The invention effectively solves the problem of low segmentation accuracy for unevenly illuminated nematode experimental images and provides more accurate segmented images for the subsequent analysis of the nematode experimental images;

(2) The invention uses the bottom-hat transform to remove the influence of uneven illumination on the segmentation accuracy of the nematode experimental images in the BBBC010 dataset. Compared with CellProfiler's removal of uneven illumination from the BBBC010 nematode images, the bottom-hat transform has a clear advantage when removing uneven illumination from multiple nematode images, as shown in Figure 5. When a complete nematode experimental image is combined with part of another nematode experimental image, the bottom-hat transform still removes the uneven illumination, while CellProfiler's convex-hull method does not work; when the part taken from the other nematode experimental image is enlarged and the complete left image remains unchanged, the bottom-hat transform can still remove the uneven illumination, but CellProfiler's method fails; and when four unevenly illuminated nematode images are combined and the uneven illumination is removed at the same time, the result of the bottom-hat transform remains good while the result of CellProfiler is poor. The bottom-hat transform step therefore solves the problem of uneven illumination in the nematode experimental images of the BBBC010 dataset.

Description of Drawings

The present invention is further described below with reference to the accompanying drawings.

Figure 1 is a flow chart of the segmentation method for nematode experimental images;

Figure 2 shows the nematode experimental images of the selected test group;

Figure 3 shows the binarized segmentation images of the test group's nematode experimental images obtained by the improved generative adversarial network model;

Figure 4 shows the "ground truth" images corresponding to the test group's nematode experimental images;

Figure 5 shows three cases in which the bottom-hat transform can handle uneven illumination but CellProfiler cannot; the three groups of images on the left are CellProfiler's results, and the three groups on the right are the results of applying the bottom-hat transform to the same unevenly illuminated nematode images;

Figure 6 shows the average accuracy of CellProfiler in processing the BBBC010 dataset, together with the average segmentation accuracy of Otsu, U-Net, the improved generative adversarial network without the bottom-hat transform, and the improved generative adversarial network with the bottom-hat transform.

Detailed Description

The method for segmenting nematode experimental images of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

Example 1:

As shown in Figure 1, the segmentation method for nematode experimental images of the present invention is as follows:

S1. Download the BBBC010 nematode experimental image dataset and divide it into a training set and a test set;

S2. Use the bottom-hat transform to remove uneven illumination from the nematode experimental images;

S3. Train the improved generative adversarial network model on the training set and perform segmentation, segmenting the nematodes in the nematode experimental images to obtain images in which the non-nematode and nematode parts are labeled 0 and 1 respectively;

S4. Obtain the ground-truth values of the corresponding nematode experimental images in the test set, evaluate the segmentation results according to the segmentation performance evaluation parameters, and save the evaluation results to Excel;

S5. Swap the nematode experimental images of the test set and the training set, and repeat steps S2, S3 and S4;

S6. Compute the mean accuracy to obtain the average segmentation accuracy for nematode experimental images.

The improved generative adversarial network model in this embodiment includes a generator network and a discriminator network; the generator network includes 8 convolutional layers and 8 deconvolutional layers. The generator network uses a U-Net architecture neural network to segment the nematode images, and the U-Net architecture includes a contracting path and an expanding path.

The contracting path in this embodiment is as follows:

(1) Extract features of the input nematode experimental images at different scales through convolution and pooling operations. During feature extraction, the convolution kernel size of the entire improved generative adversarial network model is set to 3×3 to minimize the complexity of the neural network while ensuring the prediction and segmentation accuracy of the network;

(2) Each level convolves the nematode experimental image with two 3×3 convolutional layers, and during training of the improved generative adversarial network model a batchnorm layer is used to maintain the input of each layer of the neural network;

(3) A pooling layer halves the size of the nematode experimental image, and 4 downsampling operations are performed.

The expanding path in this embodiment is as follows:

① Restore, through an upsampling operation from the bottom layer, the nematode experimental image reduced by the contracting path;

② Fuse the restored nematode experimental image with the nematode experimental image of the corresponding layer of the contracting path through a concat operation, and use a dropout optimization operation to improve image accuracy, randomly deactivating half of the nodes each time;

③ After the fused nematode experimental image has been convolved twice with 3×3 convolutions, repeat steps ① and ② until the nematode experimental image is restored to the input size;

④ Obtain the final output segmentation prediction map through a 1×1 convolutional layer.

The discriminator network in this embodiment adopts a neural network composed of five convolutional layers and is used to perform binary (real/fake) classification of the input data; the binary discrimination is carried out as follows:

(1) The input layer combines the BBBC010 mask image with the segmented image generated by the generator network through a concat operation and feeds the result into the convolutional network;

(2) The result is convolved by three 4×4 convolutional layers with a stride of 2;

(3) The result is convolved by two 4×4 convolutional layers with a stride of 1;

(4) At the end of the discriminator network, a sigmoid activation function is used to output a recognition result of 0 or 1.

In this embodiment, the improved generative adversarial network model is trained with the training set in step S3 as follows:

S301. Generate a predicted nematode experimental picture through the generator network;

S302. Send the nematode experimental segmentation picture predicted by the generator network and the real nematode experimental label picture to the discriminator network for discrimination:

① If the generated nematode experimental segmentation image is the same as the real nematode segmentation image, the discriminator outputs a number close to 1;

② If the generated nematode experimental segmentation image differs from the real nematode segmentation image, a number close to 0 is output;

S303. The generator network continues to generate predicted nematode experimental segmentation images according to the output of the discriminator network;

S304. When the discriminator network cannot distinguish whether the nematode experimental image sent to it was output by the generator network or is a real nematode segmentation image, the network has reached a Nash equilibrium and training of the segmentation network is complete.

In this embodiment, during training of the improved generative adversarial network model with the training set, the ReLU activation function is used together with the Adam optimization algorithm with a learning rate of 0.0002; the outputs of each layer are batch-normalized to reduce the dependency between layers and improve the independence of the network layers.

Preferably, the loss function of the improved generative adversarial network model is:

$$G^{*} = \arg\min_{G}\max_{D}\;\mathcal{L}_{cGAN}(G, D) + \lambda\,\mathcal{L}_{L1}(G)$$

where

$$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,\,y \sim P_{data}}\left[\log D(x, y)\right] + \mathbb{E}_{x \sim P_{data},\,z \sim P_{z}(z)}\left[\log\left(1 - D(x, G(x, z))\right)\right]$$

$$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,\,y,\,z}\left[\lVert y - G(x, z)\rVert_{1}\right]$$

The improved generative adversarial network model must learn the mapping from an image x and random noise z to an output image y, G: {x, z} → y. P_data denotes the real data distribution, z is drawn from the prior distribution P_z(z), and E denotes the expected value. The loss function of the improved model combines the cross-entropy loss with a traditional loss, the L1 distance. G denotes the generator network, which tries to minimize the loss function; D denotes the discriminator network, which tries to maximize it.

Example 2:

The segmentation method for nematode experimental images of the present invention proceeds as follows:

S1. Download the BBBC010 nematode experimental dataset, comprising 100 unevenly illuminated nematode experimental images; first take the first 50 nematode experimental images as the training set and the last 50 as the test set, as shown in Figure 2;

S2. Apply the bottom-hat transform (a mathematical morphology method) to the images in the training set to remove the uneven illumination of the nematode experimental images, as shown in Figures 3 and 4. A comparison experiment in which CellProfiler processes the unevenly illuminated nematode experimental images was also designed: in the three cases shown in Figure 5, namely an unevenly illuminated nematode experimental image combined with part of another nematode experimental image, and an unevenly illuminated nematode experimental image combined with three other nematode experimental images, CellProfiler cannot handle the uneven illumination but the bottom-hat transform can.
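
The bottom-hat step might be implemented with standard morphology routines, e.g. in OpenCV as in the sketch below; the structuring-element shape and size are assumptions and would need tuning so the closing estimates the background rather than the worms.

```python
import cv2
import numpy as np

def remove_uneven_illumination(path, kernel_size=51):
    # Bottom-hat (black-hat) transform: morphological closing minus the
    # original image, which suppresses a slowly varying illumination field.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    corrected = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)
    # Stretch the result back to the full 8-bit range.
    return cv2.normalize(corrected, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
```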

S3. Train the improved generative adversarial network model on the training set: the generator network first generates an image, which enters the discriminator network of the generative adversarial network to be judged as either a real image or a generated image, until the discriminator network can no longer tell whether it is a real image or an image produced by the generator network. The test set is then evaluated, yielding images in which the non-nematode and nematode parts are labeled 0 and 1 respectively, which achieves the goal of segmenting the BBBC010 nematodes and produces the binary images of the segmented nematode experimental images.
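
Once training has converged, producing the 0/1-labeled binary image from the trained generator might look like the following sketch (the 0.5 threshold and tensor shapes are assumptions):

```python
import torch

@torch.no_grad()
def segment(generator, image):
    # Hypothetical inference for step S3: threshold the generator's per-pixel
    # output to obtain the binary image (0 = non-nematode, 1 = nematode).
    generator.eval()
    prob = generator(image.unsqueeze(0))           # add a batch dimension
    return (prob.squeeze(0) > 0.5).to(torch.uint8)
```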

S4. Read in the "ground truth" corresponding to the test set data, i.e., the nematode images in which the nematode part has already been segmented, and compute segmentation performance evaluation parameters such as accuracy; then swap the images of the training set and the test set and repeat the same steps; finally compute the mean accuracy, obtaining an average image segmentation accuracy of 99.22%. The average segmentation accuracy was compared against CellProfiler, Otsu, U-Net, the improved generative adversarial network without the bottom-hat transform, and the improved generative adversarial network with the bottom-hat transform. CellProfiler removes uneven illumination with a convex-hull method; some unevenly illuminated images that CellProfiler cannot handle can be processed by the bottom-hat transform, as shown in Figure 5. Otsu performs nematode image segmentation with a threshold-based method. The average accuracies of all the image segmentation methods on the BBBC010 dataset are shown in Figure 6: the first column, CellProfiler, averages 97.85%; the second column, Otsu, averages 97.87%; the third column, U-Net, averages 98.14%; the fourth column, the improved generative adversarial network without bottom-hat preprocessing of the input images, averages 99.04%; and the fifth column, the improved generative adversarial network with bottom-hat preprocessing, averages 99.22%. The present invention, i.e., the improved generative adversarial network with bottom-hat-transformed input images, performs best among the five image segmentation methods for BBBC010.
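
The per-image accuracy and the Excel export in step S4 might look like the following sketch, assuming accuracy is computed as the fraction of pixels on which the binary prediction matches the ground truth (the file name and the use of pandas are assumptions):

```python
import pandas as pd

def pixel_accuracy(pred, truth):
    # Both arguments are binary NumPy masks: 1 = nematode, 0 = non-nematode.
    return float((pred == truth).mean())

def evaluate(preds, truths, out_path="segmentation_results.xlsx"):
    rows = [{"image": i, "accuracy": pixel_accuracy(p, t)}
            for i, (p, t) in enumerate(zip(preds, truths))]
    df = pd.DataFrame(rows)
    df.to_excel(out_path, index=False)  # save the evaluation results to Excel
    return df["accuracy"].mean()        # average accuracy over the split
```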

Here the bottom-hat transform is used to process the images; the purpose of this step is to remove the influence of uneven illumination on the segmentation result. An improved generative adversarial network model is then designed to process the BBBC010 images.

Generative adversarial networks (GANs) are a deep learning model and one of the most promising approaches of recent years to unsupervised learning on complex distributions. The core idea of a GAN derives from the Nash equilibrium of game theory; it comprises two main modules, a generator and a discriminator, and the adversarial game between them produces remarkably good output. Here, the generator produces the segmentation result for a nematode experimental image, and this result passes through the discriminator, a binary classifier that judges whether its input is real data or a generated sample. If the discriminator's input comes from real data it is labeled 1; if it is data produced by the generator it is labeled 0. The goal of the discriminator, as a binary classifier, is to classify the source of the data; the goal of the generator is to produce data as close as possible to the real data, until the discriminator can no longer judge and outputs a probability of 0.5 for both real and fake data. Finally, the trained generator is used to segment the nematode experimental images, giving an average segmentation accuracy of 99.22%.

The invention sets up comparison experiments among CellProfiler, Otsu, U-Net, the improved generative adversarial network whose input images are processed by the bottom-hat transform, and the improved generative adversarial network whose input images are not. The final results show that the improved generative adversarial network with bottom-hat-transformed input images achieves the best average segmentation accuracy on the BBBC010 dataset, 99.22%.

The present invention applies the bottom-hat transform to unevenly illuminated model-organism images, effectively removing the influence of uneven illumination on segmentation accuracy. At the same time, an improved generative adversarial network is designed that achieves an average accuracy of 99.22% on the BBBC010 dataset.

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for segmenting nematode experimental images, characterized by comprising the following steps:
S1. downloading the BBBC010 nematode experimental image dataset and dividing the dataset into a training set and a test set;
S2. removing uneven illumination from the nematode experimental images using the bottom-hat transform;
S3. training the improved generative adversarial network model with the training set and performing segmentation, segmenting the nematodes in the nematode experimental images to obtain images in which the non-nematode and nematode parts are labeled 0 and 1 respectively;
S4. acquiring the ground-truth values of the corresponding nematode experimental images in the test set, evaluating the segmentation results according to the segmentation performance evaluation parameters, and storing the evaluation results in Excel;
S5. swapping the nematode experimental images of the test set and the training set, and repeating steps S2, S3 and S4;
S6. computing the mean accuracy to obtain the average segmentation accuracy for nematode experimental images.
2. The method for segmenting nematode experimental images according to claim 1, wherein the improved generative adversarial network model comprises a generator network and a discriminator network, the generator network comprising 8 convolutional layers and 8 deconvolutional layers.
3. The method for segmenting nematode experimental images according to claim 2, wherein said generator network segments nematode images using a U-Net architecture neural network, which includes a contracting path and an expanding path.
4. The method for segmenting nematode experimental images according to claim 3, wherein said contracting path is as follows:
(1) extracting features of the input nematode experimental images at different scales through convolution and pooling operations, wherein, during feature extraction, the convolution kernel size of the entire improved generative adversarial network model is set to 3×3;
(2) convolving the nematode experimental image with two 3×3 convolutional layers at each level, and using a batchnorm layer to maintain the input of each layer of the neural network during training of the improved generative adversarial network model;
(3) halving the size of the nematode experimental image through a pooling layer, and performing 4 downsampling operations.
5. The method for segmenting nematode experimental images according to claim 3 or 4, wherein the expanding path is as follows:
① restoring, through an upsampling operation from the bottom layer, the nematode experimental image reduced by the contracting path;
② fusing the restored nematode experimental image with the nematode experimental image of the corresponding layer of the contracting path through a concat operation, and using a dropout optimization operation to improve image accuracy, randomly deactivating half of the nodes each time;
③ after the fused nematode experimental image has been convolved twice with 3×3 convolutions, repeating steps ① and ② until the nematode experimental image is restored to the input size;
④ obtaining the final output segmentation prediction map through a 1×1 convolutional layer.
6. The method for segmenting nematode experimental images according to claim 3, wherein the discriminator network adopts a neural network composed of five convolutional layers, and the discriminator network is used to perform binary (real/fake) classification of the input data.
7. The method for segmenting nematode experimental images according to claim 6, wherein the binary real/fake discrimination of the input data by the discriminator network is as follows:
(1) the input layer combines the BBBC010 mask image and the segmented image generated by the generator network through a concat operation and inputs the combined image into the convolutional network;
(2) the result is convolved by three 4×4 convolutional layers with a stride of 2;
(3) the result is convolved by two 4×4 convolutional layers with a stride of 1;
(4) at the end of the discriminator network, a sigmoid activation function is used to output a recognition result of 0 or 1.
8. The method for segmenting nematode experimental images according to claim 1, wherein in step S3 the improved generative adversarial network model is trained with the training set as follows:
S301. generating a predicted nematode experimental picture through the generator network;
S302. sending the nematode experimental segmentation picture predicted by the generator network and the real nematode experimental label picture to the discriminator network for discrimination:
① if the generated nematode experimental segmentation image is the same as the real nematode segmentation image, the discriminator outputs a number close to 1;
② if the generated nematode experimental segmentation image differs from the real nematode segmentation image, a number close to 0 is output;
S303. the generator network continuing to generate predicted nematode experimental segmentation images according to the output of the discriminator network;
S304. when the discriminator network cannot distinguish whether the nematode experimental image sent to it was output by the generator network or is a real nematode segmentation image, the network has reached a Nash equilibrium and training of the segmentation network is complete.
9. The method for segmenting nematode experimental images according to claim 1, wherein during training of the improved generative adversarial network model with the training set, the ReLU activation function is used together with the Adam optimization algorithm with a learning rate of 0.0002, and the outputs of each layer are batch-normalized.
10. The method for segmenting nematode experimental images according to claim 1, wherein the loss function of the improved generative adversarial network model is:

$$G^{*} = \arg\min_{G}\max_{D}\;\mathcal{L}_{cGAN}(G, D) + \lambda\,\mathcal{L}_{L1}(G)$$

wherein

$$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,\,y \sim P_{data}}\left[\log D(x, y)\right] + \mathbb{E}_{x \sim P_{data},\,z \sim P_{z}(z)}\left[\log\left(1 - D(x, G(x, z))\right)\right]$$

$$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,\,y,\,z}\left[\lVert y - G(x, z)\rVert_{1}\right]$$

the improved generative adversarial network model is required to learn the mapping from a picture x and random noise z to an output image y, G: {x, z} → y; P_data represents the true data distribution; z is taken from the prior distribution P_z(z); E represents the calculated expected value; G represents the generator network, which attempts to minimize the loss function; and D represents the discriminator network, which attempts to maximize the loss function.
Priority Applications (1)

Application number: CN202111352070.XA
Priority date: 2021-11-16
Filing date: 2021-11-16
Title: Segmentation method of nematode experimental image

Publications (1)

Publication number: CN114022470A
Publication date: 2022-02-08

Family ID: 80064441

Family Applications (1)

CN202111352070.XA (filed 2021-11-16): Segmentation method of nematode experimental image, pending

Country Status (1)

CN: CN114022470A

Cited By (1)

* Cited by examiner, † Cited by third party

CN115830316A* (priority 2022-11-23, published 2023-03-21): Nematode image segmentation method and system based on deep learning and iterative feature fusion

Patent Citations (4)

CN109166126A* (priority 2018-08-13, published 2019-01-08, 苏州比格威医疗科技有限公司): Method for segmenting lacquer cracks on ICGA images based on a conditional generative adversarial network

CN111524144A* (priority 2020-04-10, published 2020-08-11, 南通大学): Intelligent diagnosis method for pulmonary nodules based on GAN and Unet network

CN112102323A* (priority 2020-09-17, published 2020-12-18, 陕西师范大学): Adherent nucleus segmentation method based on a generative adversarial network and a Caps-Unet network

CN113344936A* (priority 2021-07-02, published 2021-09-03, 吉林农业大学): Soil nematode image segmentation and width measurement method based on deep learning

Non-Patent Citations (2)

Swati P. Pawar et al., "LungSeg-Net: Lung field segmentation using generative adversarial network," Biomedical Signal Processing and Control, 4 November 2020, page 3.*

田雅宁, "基于机器学习的人体动作识别" (Human action recognition based on machine learning), CNKI Outstanding Master's Theses Full-text Database, 15 March 2018, page 25.*


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information
    Country or region after: China
    Address after: 250353 University Road, Changqing District, Ji'nan, Shandong Province, No. 3501
    Applicant after: Qilu University of Technology (Shandong Academy of Sciences)
    Address before: 250353 University Road, Changqing District, Ji'nan, Shandong Province, No. 3501
    Applicant before: Qilu University of Technology
    Country or region before: China
