技术领域Technical field
本发明涉及图像处理技术,尤其涉及乳腺病理图像有丝分裂检测方法、系统、存储介质、设备。The invention relates to image processing technology, and in particular to mitosis detection methods, systems, storage media and equipment for breast pathology images.
背景技术Background technique
有丝分裂计数是判别乳腺癌恶性程度的重要指标,可以辅助诊断、治疗及预后。在临床实践上,乳腺癌切片中有丝分裂细胞的检测主要是通过人工进行的,病理学家通常在高功率场(通常为40倍放大的)的显微镜下观察病理组织切片以识别感兴趣区域(ROI),随后分析细胞的整体组织结构和局部信息,再根据自身经验做出判断。一方面,人工检测的过程非常复杂繁琐且耗时,且要求病理学家需要极强的专业能力。另一方面,人工检测是基于病理学家的个人经验和主观判断的,不同的病理学家对同一个病理切片常常会得到不同的结果,分级诊断结果的一致性不高。为此,人工检测流程的局限性产生了有丝分裂细胞计数自动化的需要,对检测效率的提高和计数结果的可靠性至关重要。Mitotic count is an important indicator of the malignancy of breast cancer and can assist diagnosis, treatment and prognosis. In clinical practice, the detection of mitotic cells in breast cancer sections is mainly performed manually: pathologists observe pathological tissue sections under a microscope at high-power field (usually 40x magnification) to identify regions of interest (ROI), then analyze the overall tissue structure and local information of the cells, and finally make a judgment based on their own experience. On the one hand, the manual detection process is complex, tedious and time-consuming, and demands very strong professional skills from the pathologist. On the other hand, manual detection relies on the personal experience and subjective judgment of pathologists; different pathologists often reach different conclusions for the same pathological section, so the consistency of grading results is low. The limitations of the manual workflow therefore create a need for automated mitotic cell counting, which is crucial for improving detection efficiency and the reliability of counting results.
随着深度学习发展,计算机视觉领域取得了长足的进步。有丝分裂检测基于深度学习框架提出了多种模型,主要可以分为像素分类、语义分割和物体检测。用于这些模型训练过程使用的有丝分裂数据集主要为质心注释。质心标注(即弱标记)只提供每个有丝分裂细胞的质心坐标,这种更容易标记。通常卷积神经网络R-CNN会被用于处理这种质心注释的数据集,但由于缺乏质心标注的包围框标签,处理弱标注的数据集效率较低。深度检测模型通常需要精确的有丝分裂边界框标注。然而,弱监督深度分割网络生成的边界框通常是粗略估计,基准真实值(ground truth)监督不可靠,影响了质心标注数据集的检测性能。目前,现有的有丝分裂检测方法仍不能有效解决有丝细胞类内形态变化大的问题,由于有丝分裂细胞在分裂的四个阶段(前期、中期、后期和末期)中染色体的形态变化多样,这将导致检测结果存在较高的假阴性和假阳性,从而影响整个模型的检测性能和泛化性。With the development of deep learning, the field of computer vision has made great progress. A variety of mitosis detection models have been proposed on deep learning frameworks; they can be broadly divided into pixel classification, semantic segmentation and object detection. The mitosis datasets used to train these models are mainly centroid-annotated. Centroid annotation (i.e. weak labeling) only provides the centroid coordinates of each mitotic cell and is therefore easier to produce. Convolutional networks such as R-CNN are usually applied to such centroid-annotated datasets, but because centroid annotations lack bounding-box labels, weakly annotated datasets are handled inefficiently. Deep detection models normally require accurate mitotic bounding-box annotations; however, the bounding boxes generated by weakly supervised deep segmentation networks are usually rough estimates, so the ground-truth supervision is unreliable, which degrades detection performance on centroid-annotated datasets. In addition, existing mitosis detection methods still cannot effectively handle the large intra-class morphological variation of mitotic cells: chromosome morphology varies greatly across the four phases of division (prophase, metaphase, anaphase and telophase), which leads to high false-negative and false-positive rates and affects the detection performance and generalization of the whole model.
发明内容Contents of the invention
为了解决上述问题,本发明提出任务引导的径向基网络对乳腺病理图像有丝分裂检测方法,包括以下步骤:In order to solve the above problems, the present invention proposes a task-guided radial basis network mitosis detection method for breast pathology images, which includes the following steps:
S1、获取乳腺病理图像,从乳腺病理图像中采集图像块样本,图像块样本包含正样本和负样本,正样本包含图像块样本的类别信息和中心偏移量信息,负样本仅包含图像块样本的类别信息,并从图像块样本中选取有丝分裂样本;S1. Obtain breast pathology images and collect image patch samples from them. The patch samples include positive samples and negative samples; a positive sample carries the patch's category information and center offset information, while a negative sample carries only category information. Mitosis samples are then selected from the patch samples;
S2、构建有丝分裂检测的训练模型,包括:候选模块和验证模块,候选模块包括特征提取层、侧输出层、融合层;S2. Construct a training model for mitosis detection, including: candidate module and verification module. The candidate module includes feature extraction layer, side output layer, and fusion layer;
特征提取层输出不同尺度的特征图,侧输出层利用深度监督与注意力机制对每个特征提取层输出的不同尺度的特征图进行监督学习并输出,融合层对不同尺度的侧输出层的输出进行加权融合,得到加权融合后的特征;The feature extraction layer outputs feature maps of different scales; the side output layers apply deep supervision and an attention mechanism to perform supervised learning on the feature maps of each scale and produce their outputs; the fusion layer performs weighted fusion of the side outputs of different scales to obtain the weighted fused features;
验证模块包括特征提取模块、径向基网络、中心更新模块;The verification module includes feature extraction module, radial basis network, and center update module;
特征提取模块基于加权融合后的特征对病理图像有丝分裂细胞进行初步分类,径向基网络使用嵌入径向基函数的卷积网络,径向基网络根据有丝样本特征,对病理图像有丝分裂细胞进行进一步类别判断,得到病理图像有丝分裂检测结果,中心更新模块用于迭代更新径向基网络的径向基函数中心;The feature extraction module performs a preliminary classification of mitotic cells in the pathology image based on the weighted fused features; the radial basis network is a convolutional network with embedded radial basis functions and makes a further category judgment on the mitotic cells according to the features of the mitosis samples, yielding the mitosis detection result of the pathology image; the center update module iteratively updates the radial basis function centers of the radial basis network;
S3、使用所述图像块样本对所述候选模块进行训练,得到训练后的候选模块;S3. Use the image block samples to train the candidate module and obtain the trained candidate module;
将乳腺病理图像输入训练后的候选模块,得到锚点阵列,其中正锚点作为有丝分裂细胞候选点,以有丝分裂细胞候选点为中心从乳腺病理图像中提取候选块,根据每个有丝分裂细胞候选点坐标与手工标注位置坐标的距离确定每个候选块的标签;Input the breast pathology image into the trained candidate module to obtain an anchor array, in which positive anchors serve as mitotic cell candidate points; candidate patches are extracted from the breast pathology image centered on the candidate points, and the label of each candidate patch is determined by the distance between the coordinates of its candidate point and the manually annotated position coordinates;
S4、构建一个特征提取器,所述特征提取器和验证模块的特征提取模块相同,将提取的候选块输入验证模块的特征提取模块进行分类预训练,得到特征提取模块的初始权值;径向基网络基于候选块分类任务引导径向基函数的中心初始化,并通过中心更新模块迭代更新确定最优径向基函数中心,将有丝分裂样本输入所述特征提取器,并用所述初始权值更新特征提取器权重,所述特征提取器输出有丝分裂样本的特征表达,对所述有丝分裂样本的特征表达运行K-means聚类算法初始化嵌入径向基函数的卷积网络的径向基函数中心,使用初始化后的嵌入径向基函数的卷积网络对特征提取模块的输出进行再分类,采用迭代聚类更新确定最优径向基函数中心;S4. Construct a feature extractor identical to the feature extraction module of the verification module. The extracted candidate patches are fed into the feature extraction module of the verification module for classification pre-training to obtain the initial weights of the feature extraction module. The radial basis network uses the candidate-patch classification task to guide the initialization of the radial basis function centers, and the optimal centers are determined through iterative updates by the center update module: the mitosis samples are input into the feature extractor, whose weights are updated with the initial weights; the feature extractor outputs the feature representations of the mitosis samples; a K-means clustering algorithm is run on these feature representations to initialize the radial basis function centers of the convolutional network with embedded radial basis functions; the initialized network then re-classifies the output of the feature extraction module, and iterative clustering updates determine the optimal radial basis function centers;
S5、连接训练好的候选模块和验证模块的径向基网络,构成有丝分裂检测模型,将待检测的整幅乳腺病理组织图像输入所述有丝分裂检测模型,得到有丝分裂细胞的检测结果。S5. Connect the radial basis network of the trained candidate module and the verification module to form a mitosis detection model. Input the entire breast pathological tissue image to be detected into the mitosis detection model to obtain the detection results of mitotic cells.
进一步地,正样本采样以乳腺病理图像中人工标注的位置为参照点进行随机偏移,之后以偏移后的位置为中心采样相同尺寸且参照点位于内部的图像块,并保存该偏移量用于位置回归任务;负样本由乳腺病理图像中随机采样和正样本相同尺寸的图像块产生,负样本中心和参照点距离大于图像块采样尺寸。Further, a positive sample is obtained by applying a random offset to a manually annotated position in the breast pathology image (the annotation serving as the reference point), then sampling an image patch of the fixed size centered on the offset position such that the reference point lies inside the patch; the offset is recorded for the position regression task. Negative samples are image patches of the same size randomly sampled from the breast pathology image, with the distance between the patch center and any reference point greater than the patch sampling size.
进一步地,特征提取层以卷积神经网络为主干网络,所述主干网络不采用全连接层,主干网络的每层卷积后采用批量归一化层和激活函数Relu,连续的卷积层之间插入最大池化层;Further, the feature extraction layer uses a convolutional neural network as the backbone. The backbone has no fully connected layers; each convolution of the backbone is followed by a batch normalization layer and a ReLU activation function, and max-pooling layers are inserted between consecutive convolution layers;
侧输出层的输入连接特征提取层的每个池化层和最后一个卷积层,每个侧输出层的输入经过一个注意力模块重新校准通道特征,得到校准后的不同通道的特征,使用一个核大小为1×1的卷积层对校准后的不同通道的特征进行组合,得到组合特征,使用完全卷积层做为组合特征的分类器,通过softmax层得到侧输出层的输出;融合层对不同尺度的侧输出层的输出进行加权融合,得到加权融合后的特征。The inputs of the side output layers are connected to each pooling layer and the last convolution layer of the feature extraction layer. The input of each side output layer passes through an attention module that recalibrates the channel features; the recalibrated channel features are combined by a convolution layer with a 1×1 kernel to obtain combined features; a fully convolutional layer acts as the classifier for the combined features, and a softmax layer produces the side output. The fusion layer performs weighted fusion of the side outputs at different scales to obtain the weighted fused features.
进一步地,候选模块训练时的总损失函数由侧输出层的损失函数和融合层的损失函数两部分组成;Further, the total loss function during candidate module training consists of two parts: the loss function of the side output layer and the loss function of the fusion layer;
侧输出层的损失函数包括位置损失、类别的损失;The loss function of the side output layer includes position loss and category loss;
融合层的损失函数包括位置损失、类别的损失;The loss function of the fusion layer includes position loss and category loss;
候选模块训练时的总损失函数的表达式如下:The expression of the total loss function during candidate module training is as follows:
L(W) = Σ_{s=1..S} [ ℓ_cls^(s)(l, p̂^(s)) + γ·ℓ_loc^(s)((x,y), (x̂,ŷ)) ] + ℓ_cls^(fuse)(l, p̂^(fuse)) + γ·ℓ_loc^(fuse)((x,y), (x̂,ŷ))
其中,L(W)为候选模块训练时的总损失函数,W表示在候选模块中要学习的所有参数,ℓ_cls^(s)表示侧输出层的第s个输出的类别损失,l表示图像块样本的二值标签真值,负样本用0标识,正样本用1标识,p̂^(s)为每个侧输出层Softmax分类器得到的[0,1]范围内的预测有丝分裂细胞核概率;ℓ_cls^(fuse)表示融合层的类别损失,γ是一个平衡分类和回归任务之间成本的超参数,ℓ_loc^(s)为侧输出层的位置损失,(x,y)为图像块采样过程中所记录的细胞核中心偏移量的真值,(x̂,ŷ)为候选模块估计的候选块的偏移量,ℓ_loc^(fuse)为融合层的位置损失。Here, L(W) is the total loss function for training the candidate module, W denotes all parameters to be learned in the candidate module, ℓ_cls^(s) denotes the category loss of the s-th side output, l is the binary ground-truth label of a patch sample (0 for negative, 1 for positive), p̂^(s) is the predicted mitotic nucleus probability in [0,1] given by the Softmax classifier of each side output layer; ℓ_cls^(fuse) is the category loss of the fusion layer, γ is a hyperparameter balancing the cost between classification and regression tasks, ℓ_loc^(s) is the position loss of the side output layer, (x,y) is the ground-truth nucleus center offset recorded during patch sampling, (x̂,ŷ) is the candidate-patch offset estimated by the candidate module, and ℓ_loc^(fuse) is the position loss of the fusion layer.
进一步地,侧输出层的每个输出的类别损失由交叉熵函数定义,表示如下:Further, the category loss of each output of the side output layer is defined by the cross-entropy function, expressed as follows:
ℓ_cls^(s) = −(1/(N₊+N₋)) Σ_{i=1..N₊+N₋} [ l_i·log p̂_i^(s) + (1−l_i)·log(1−p̂_i^(s)) ]
其中,ℓ_cls^(s)表示侧输出层的第s个输出的类别损失,N₊表示正样本图像块的数量;N₋表示负样本图像块的数量,l_i表示第i个输入图像块样本的二值标签真值。Here, ℓ_cls^(s) denotes the category loss of the s-th side output, N₊ is the number of positive sample patches, N₋ is the number of negative sample patches, and l_i is the binary ground-truth label of the i-th input patch sample.
进一步地,融合层的位置损失使用欧几里得L2范数来定义,表示如下:Furthermore, the position loss of the fusion layer is defined using the Euclidean L2 norm, which is expressed as follows:
ℓ_loc^(fuse) = (1/N₊) Σ_i 1(l_i=1)·‖(x_i, y_i) − (x̂_i, ŷ_i)‖₂²
其中1(·)为指示函数,1(l_i=1)表示在l_i=1时取值为1,l_i表示第i个输入图像块样本的二值标签真值。Here 1(·) is the indicator function, 1(l_i=1) equals 1 when l_i=1, and l_i is the binary ground-truth label of the i-th input patch sample.
进一步地,径向基网络的训练使用二元交叉熵损失函数对候选块进行类别判断学习:Further, the training of the radial basis network uses the binary cross-entropy loss function to perform category judgment learning on the candidate blocks:
L_rbf = −(1/N) Σ_{i=1..N} [ l_i·log p̂_i + (1−l_i)·log(1−p̂_i) ]
其中,L_rbf为径向基网络的类别损失,N表示样本的个数,l_i表示第i个候选块的二值标签真值,非有丝分裂细胞用0标识,有丝分裂细胞用1标识,p̂_i为不同径向基函数的加权组合得到的第i个候选块有丝分裂细胞核预测概率。Here, L_rbf is the category loss of the radial basis network, N is the number of samples, l_i is the binary ground-truth label of the i-th candidate patch (0 for non-mitotic cells, 1 for mitotic cells), and p̂_i is the predicted mitotic nucleus probability of the i-th candidate patch obtained from the weighted combination of the radial basis functions.
本发明还提出任务引导的径向基网络对乳腺病理图像有丝分裂检测系统,包括:The present invention also proposes a task-guided radial basis network mitosis detection system for breast pathology images, including:
数据获取模块,用于获取乳腺病理图像,从乳腺病理图像中采集图像块样本,图像块样本包含正样本和负样本,正样本包含图像块样本的类别信息和中心偏移量信息,负样本仅包含图像块样本的类别信息,并从图像块样本中选取有丝分裂样本;The data acquisition module is used to obtain breast pathology images and collect image block samples from breast pathology images. The image block samples contain positive samples and negative samples. The positive samples contain the category information and center offset information of the image block samples. The negative samples only Contains category information of image block samples, and selects mitotic samples from the image block samples;
模型构建模块,用于构建有丝分裂检测的训练模型,包括:候选模块和验证模块,候选模块包括特征提取层、侧输出层、融合层;The model building module is used to build a training model for mitosis detection, including: a candidate module and a verification module. The candidate module includes a feature extraction layer, a side output layer, and a fusion layer;
特征提取层输出不同尺度的特征图,侧输出层利用深度监督与注意力机制对每个特征提取层输出的不同尺度的特征图进行监督学习并输出,融合层对不同尺度的侧输出层的输出进行加权融合,得到加权融合后的特征;The feature extraction layer outputs feature maps of different scales. The side output layer uses deep supervision and attention mechanisms to supervised learning and output the feature maps of different scales output by each feature extraction layer. The fusion layer outputs the side output layers of different scales. Perform weighted fusion to obtain the features after weighted fusion;
验证模块包括特征提取模块、径向基网络、中心更新模块;The verification module includes feature extraction module, radial basis network, and center update module;
特征提取模块基于加权融合后的特征对病理图像有丝分裂细胞进行初步分类,径向基网络使用嵌入径向基函数的卷积网络,对特征提取模块初步分类结果进行进一步类别判断,得到病理图像有丝分裂检测结果,中心更新模块用于迭代更新径向基网络的径向基函数中心;The feature extraction module performs a preliminary classification of mitotic cells in pathological images based on the weighted fusion features. The radial basis network uses a convolutional network embedded with a radial basis function to further classify the preliminary classification results of the feature extraction module to obtain mitosis detection in pathological images. As a result, the center update module is used to iteratively update the radial basis function center of the radial basis network;
模型第一阶段训练模块,用于使用所述图像块样本对所述候选模块进行训练,得到训练后的候选模块;将乳腺病理图像输入训练后的候选模块,得到锚点阵列,其中正锚点作为有丝分裂细胞候选点,以有丝分裂细胞候选点为中心从乳腺病理图像中提取候选块,根据每个有丝分裂细胞候选点坐标与手工标注位置坐标的距离确定每个候选块的标签;The first stage training module of the model is used to train the candidate module using the image block samples to obtain the trained candidate module; input the breast pathology image into the trained candidate module to obtain the anchor point array, in which the positive anchor point As mitotic cell candidate points, candidate blocks are extracted from the breast pathology image with the mitotic cell candidate point as the center, and the label of each candidate block is determined based on the distance between the coordinates of each mitotic cell candidate point and the manually marked position coordinates;
模型第二阶段训练模块,用于根据下述方法训练验证模块:构建一个特征提取器,所述特征提取器和验证模块的特征提取模块相同,将提取的候选块输入验证模块的特征提取模块进行分类预训练,得到特征提取模块的初始权值;径向基网络基于候选块分类任务引导径向基函数的中心初始化,并通过中心更新模块迭代更新确定最优径向基函数中心;将有丝分裂样本输入所述特征提取器,并用所述初始权值更新特征提取器权重,所述特征提取器输出有丝分裂样本的特征表达,对所述有丝分裂样本的特征表达运行K-means聚类算法初始化嵌入径向基函数的卷积网络的径向基函数中心,使用初始化后的嵌入径向基函数的卷积网络对特征提取模块的输出进行再分类,采用迭代聚类更新确定最优径向基函数中心;The second-stage training module of the model is used to train the verification module according to the following method: construct a feature extractor, which is the same as the feature extraction module of the verification module, and input the extracted candidate blocks into the feature extraction module of the verification module. Classification pre-training to obtain the initial weight of the feature extraction module; the radial basis network guides the center initialization of the radial basis function based on the candidate block classification task, and determines the optimal radial basis function center through iterative updates of the center update module; mitotic samples are Input the feature extractor and update the feature extractor weight with the initial weight. The feature extractor outputs the feature expression of the mitotic sample. The K-means clustering algorithm is run on the feature expression of the mitotic sample to initialize the embedding radial direction. The radial basis function center of the convolutional network of the basis function uses the initialized convolutional network embedded with the radial basis function to reclassify the output of the feature extraction module, and uses iterative clustering update to determine the optimal radial basis function center;
检测模块,用于连接训练好的候选模块和验证模块的径向基网络,构成有丝分裂检测模型,将待检测的整幅乳腺病理组织图像输入所述有丝分裂检测模型,得到有丝分裂细胞的检测结果。The detection module is used to connect the radial basis network of the trained candidate module and the verification module to form a mitosis detection model. The entire breast pathological tissue image to be detected is input into the mitosis detection model to obtain the detection results of mitotic cells.
本发明还提出一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述的乳腺病理图像有丝分裂检测方法的步骤。The present invention also proposes a computer-readable storage medium, which stores a computer program. When the computer program is executed by a processor, the steps of the mitosis detection method for breast pathology images are implemented.
本发明还提出一种电子设备,包括处理器和存储器,所述处理器与所述存储器相互连接,其中,所述存储器用于存储计算机程序,所述计算机程序包括计算机可读指令,所述处理器被配置用于调用所述计算机可读指令,执行上述的乳腺病理图像有丝分裂检测。The present invention also proposes an electronic device, including a processor and a memory connected to each other, wherein the memory is used to store a computer program comprising computer-readable instructions, and the processor is configured to invoke the computer-readable instructions to perform the above mitosis detection on breast pathology images.
本发明提供的技术方案带来的有益效果是:The beneficial effects brought by the technical solution provided by the present invention are:
本发明构建了任务引导的深度径向基网络,包括候选模块和验证模块,使用正负样本训练候选模块,利用训练好的候选模块和整张乳腺病理图像获取候选点从而得到候选块,将位置表达添加到了图像块信息中,在候选块检测网络中融入了深监督机制以获取高质量的有丝分裂细胞候选块和更加精确的位置定位,验证模块使用特征提取和径向基函数表达整合为统一框架,充分发挥了径向基网络良好的逼近能力和泛化能力的优势;将径向基函数中心定义结合了有丝细胞鉴别任务,利用不同的径向基函数中心更好地处理有丝分裂细胞形态结构的多变性。The present invention constructs a task-guided deep radial basis network consisting of a candidate module and a verification module. The candidate module is trained with positive and negative samples, and the trained candidate module together with the whole breast pathology image is used to obtain candidate points and hence candidate patches. Position information is added to the patch information, and a deep supervision mechanism is incorporated into the candidate detection network to obtain high-quality mitotic cell candidate patches and more precise localization. The verification module integrates feature extraction and radial basis function representation into a unified framework, exploiting the good approximation and generalization abilities of radial basis networks; the definition of the radial basis function centers is tied to the mitosis identification task, and different centers are used to better handle the variability of the morphological structure of mitotic cells.
附图说明Description of drawings
图1是本发明实施例的乳腺病理图像有丝分裂检测方法的流程图;Figure 1 is a flow chart of a mitosis detection method in breast pathology images according to an embodiment of the present invention;
图2是本发明实施例中的候选模块结构图;Figure 2 is a candidate module structure diagram in the embodiment of the present invention;
图3是本发明实施例中的候选模块的训练过程图;Figure 3 is a training process diagram of the candidate module in the embodiment of the present invention;
图4是本发明实施例中的验证模块的训练过程图;Figure 4 is a training process diagram of the verification module in the embodiment of the present invention;
图5是本发明实施例中验证模块的特征提取模块的结构图;Figure 5 is a structural diagram of the feature extraction module of the verification module in the embodiment of the present invention;
图6是本发明实施例一示例性实施例中的一种电子设备的框图。FIG. 6 is a block diagram of an electronic device in an exemplary embodiment of the present invention.
具体实施方式Detailed ways
为使本发明的目的、技术方案和优点更加清楚,下面将结合附图对本发明实施方式作进一步地描述。In order to make the purpose, technical solutions and advantages of the present invention clearer, the embodiments of the present invention will be further described below in conjunction with the accompanying drawings.
本发明实施例的乳腺病理图像有丝分裂检测方法的流程图如图1,具体包括以下步骤:The flow chart of the mitosis detection method in breast pathology images according to the embodiment of the present invention is shown in Figure 1, which specifically includes the following steps:
S1、获取乳腺病理图像,从乳腺病理图像中采集图像块样本,图像块样本包含正样本和负样本,正样本包含图像块样本的类别信息和中心偏移量信息,负样本仅包含图像块样本的类别信息,并从图像块样本中选取有丝分裂样本。S1. Obtain the breast pathology image and collect image block samples from the breast pathology image. The image block samples include positive samples and negative samples. The positive samples include the category information and center offset information of the image block samples, and the negative samples only include image block samples. category information, and select mitotic samples from the image patch samples.
其中,正样本是以乳腺病理图像中人工标注的位置为参照点进行随机偏移,以偏移后的位置为中心采样相同尺寸且参照点位于内部的图像块,并保存每个正样本的偏移量用于位置回归任务;A positive sample is obtained by applying a random offset to a manually annotated position in the breast pathology image (the annotation serving as the reference point) and sampling a patch of the fixed size, containing the reference point, centered on the offset position; the offset of each positive sample is recorded for the position regression task;
负样本由乳腺病理图像中随机采样和正样本相同尺寸的图像块产生,负样本中心和参照点距离大于图像块采样尺寸。The negative sample is generated by randomly sampling image blocks of the same size as the positive sample in the breast pathology image, and the distance between the center of the negative sample and the reference point is greater than the image block sampling size.
进一步的实施例中,将采样得到的正负样本图像块大小设置为60×60像素,正负样本比为1:2,正样本随机偏移量选取0~10像素。In a further embodiment, the size of the sampled positive and negative sample image blocks is set to 60×60 pixels, the ratio of positive and negative samples is 1:2, and the random offset of the positive samples is selected from 0 to 10 pixels.
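The sampling rule above (60×60 patches, a 1:2 positive-to-negative ratio, random offsets of 0–10 pixels) can be illustrated with a small sketch. The NumPy code below is only a minimal illustration under those parameters; the function name, the distance test used for negatives and the clipping at image borders are assumptions rather than part of the original description.

```python
import numpy as np

def sample_patches(image, centers, patch=60, max_offset=10, neg_ratio=2, rng=None):
    """Sample positive/negative patches around annotated mitosis centroids.

    `image` is an HxWx3 array, `centers` a list of (row, col) manual annotations.
    Positive patches are jittered by up to `max_offset` pixels and keep the centroid
    offset for the regression branch; negatives are random patches whose centre lies
    farther than one patch size from every annotation.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    half = patch // 2
    pos, neg = [], []
    for (r, c) in centers:
        dy, dx = rng.integers(-max_offset, max_offset + 1, size=2)
        cy = int(np.clip(r + dy, half, h - half))
        cx = int(np.clip(c + dx, half, w - half))
        crop = image[cy - half:cy + half, cx - half:cx + half]
        pos.append((crop, 1, (r - cy, c - cx)))           # label 1 + centre offset
    while len(neg) < neg_ratio * len(pos):
        cy, cx = int(rng.integers(half, h - half)), int(rng.integers(half, w - half))
        if all(max(abs(cy - r), abs(cx - c)) > patch for (r, c) in centers):
            neg.append((image[cy - half:cy + half, cx - half:cx + half], 0, None))
    return pos, neg
```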
S2、构建有丝分裂检测的训练模型,包括:候选模块和验证模块,候选模块包括特征提取层、侧输出层、融合层,候选模块结构图参考图2。S2. Construct a training model for mitosis detection, including: candidate module and verification module. The candidate module includes a feature extraction layer, a side output layer, and a fusion layer. For the structure diagram of the candidate module, refer to Figure 2.
特征提取层输出不同尺度的特征图,侧输出层利用深度监督与注意力机制对每个特征提取层输出的不同尺度的特征图进行监督学习并输出,融合层对不同尺度的侧输出层的输出进行加权融合,得到加权融合后的特征。The feature extraction layer outputs feature maps of different scales. The side output layer uses deep supervision and attention mechanisms to supervised learning and output the feature maps of different scales output by each feature extraction layer. The fusion layer outputs the side output layers of different scales. Perform weighted fusion to obtain the weighted fusion features.
特征提取层以卷积神经网络为主干网络,所述主干网络不采用全连接层,并在连续的卷积层之间插入最大池化层。The feature extraction layer uses a convolutional neural network as the backbone network. The backbone network does not use a fully connected layer, and a maximum pooling layer is inserted between consecutive convolutional layers.
进一步的实施例参考图2,特征提取层包括9个卷积,3个2×2池化,第1卷积和第2卷积的卷积核为3×3,输出通道为64,第1卷积和第2卷积相连,其后加上第1池化得到侧输出层的第1输入;侧输出层的第1输入经过第3卷积和第4卷积,并经过第2池化得到侧输出层的第2输入,第3卷积和第4卷积的卷积核为3×3,输出通道为128;侧输出层的第2输入经过第5卷积、第6卷积、第7卷积,并经过第3池化,再经过第8卷积、第9卷积,得到侧输出层的第3输入,其中第5卷积、第6卷积、第7卷积的卷积核为3×3,输出通道为256,第8卷积的卷积核为3×3,输出通道为512,第9卷积的卷积核为1×1,输出通道为1。In a further embodiment, referring to Figure 2, the feature extraction layer contains 9 convolutions and 3 2×2 poolings. The 1st and 2nd convolutions use 3×3 kernels with 64 output channels; the 1st convolution is followed by the 2nd convolution and then the 1st pooling, giving the 1st input of the side output layer. The 1st side input passes through the 3rd and 4th convolutions and the 2nd pooling to give the 2nd side input; the 3rd and 4th convolutions use 3×3 kernels with 128 output channels. The 2nd side input passes through the 5th, 6th and 7th convolutions, the 3rd pooling, and then the 8th and 9th convolutions to give the 3rd side input, where the 5th, 6th and 7th convolutions use 3×3 kernels with 256 output channels, the 8th convolution uses a 3×3 kernel with 512 output channels, and the 9th convolution uses a 1×1 kernel with 1 output channel.
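A minimal PyTorch sketch of this backbone may help visualize the three side taps; the channel widths and layer order follow the text, while the 3-channel RGB input and same-padding for the 3×3 convolutions are assumptions.

```python
import torch
import torch.nn as nn

def conv_bn_relu(cin, cout, k=3):
    # every backbone convolution is followed by BatchNorm + ReLU, as stated in the claims
    return nn.Sequential(nn.Conv2d(cin, cout, k, padding=k // 2),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class Backbone(nn.Module):
    """Feature-extraction layer of the candidate module: 9 convolutions, 3 max-poolings,
    no fully connected layer; taps after pool1, pool2 and conv9 feed the side outputs."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(conv_bn_relu(3, 64), conv_bn_relu(64, 64), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(conv_bn_relu(64, 128), conv_bn_relu(128, 128), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(conv_bn_relu(128, 256), conv_bn_relu(256, 256),
                                    conv_bn_relu(256, 256), nn.MaxPool2d(2),
                                    conv_bn_relu(256, 512), conv_bn_relu(512, 1, k=1))
    def forward(self, x):
        s1 = self.stage1(x)   # side input 1: 64 channels
        s2 = self.stage2(s1)  # side input 2: 128 channels
        s3 = self.stage3(s2)  # side input 3: 1 channel after the final 1x1 convolution
        return s1, s2, s3
```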
侧输出层的输入连接特征提取层的前两个池化层和最后一个卷积层,对每个侧输出层实施监督学习,每个侧输出层的输入经过一个注意力模块(CBAM)重新校准通道特征,得到校准后的不同通道的特征,使用一个核大小为1×1的卷积层对校准后的不同通道的特征进行组合,得到组合特征,使用完全卷积层做为组合特征的分类器,通过softmax层得到侧输出层的输出;融合层对不同尺度的侧输出层的输出进行加权融合。The inputs of the side output layers are connected to the first two pooling layers and the last convolution layer of the feature extraction layer, and supervised learning is applied to each side output layer. The input of each side output layer passes through an attention module (CBAM) that recalibrates the channel features; the recalibrated channel features are combined by a convolution layer with a 1×1 kernel to obtain combined features; a fully convolutional layer acts as the classifier of the combined features, and a softmax layer produces the side output. The fusion layer performs weighted fusion of the side outputs at different scales.
进一步的实施例中,参考图2,侧输出层的第1输入经过第1注意力模块(CBAM)后,通过第1输入的输出通道为32的第1卷积,再通过核大小为18×18、通道数为3、步长为4的第1全卷积,最后通过第1个softmax层得到分类和回归结果,输出侧输出层的第1输出;侧输出层的第2输入经过第2注意力模块(CBAM)后,通过第2输入的输出通道为32的第2卷积,再通过核大小为7×7、通道数为3、步长为2的第2全卷积,最后通过第2个softmax层得到分类和回归结果,输出侧输出层的第2输出;侧输出层的第3输入经过第3注意力模块(CBAM)后,通过第3输入的输出通道为32的第3卷积,再通过核大小为1×1,输出通道为3的第3全卷积,最后通过第3个softmax层得到分类和回归结果,输出侧输出层的第3输出。In a further embodiment, referring to Figure 2, after the first input of the side output layer passes through the first attention module (CBAM), it passes through the first convolution with an output channel of 32, and then passes through the kernel size of 18× 18. The first full convolution with a channel number of 3 and a step size of 4 finally obtains the classification and regression results through the first softmax layer, and outputs the first output of the side output layer; the second input of the side output layer passes through the second After the attention module (CBAM), pass the second convolution with an output channel of 32 as the second input, then pass the second full convolution with a kernel size of 7×7, a channel number of 3, and a stride of 2, and finally pass The second softmax layer obtains the classification and regression results, and outputs the second output of the side output layer; after the third input of the side output layer passes through the third attention module (CBAM), the output channel of the third input is 32. Convolution, and then through the third full convolution with a kernel size of 1×1 and an output channel of 3, and finally through the third softmax layer to obtain the classification and regression results, and the third output of the output side output layer.
融合层首先将输出侧输出层的3个输出进行Concat操作,再经过1个核大小为1×1,输出通道为3的卷积、1个softmax层得到最后的输出结果。The fusion layer first performs the Concat operation on the three outputs of the output side output layer, and then passes through a convolution with a kernel size of 1×1 and an output channel of 3, and a softmax layer to obtain the final output result.
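The side-output branches and the fusion layer described above can be sketched compactly as below. The kernel sizes and strides of the three heads follow the text; the interpretation of the 3 output channels as one class map plus two offset maps, the 1×1 reduction convolution, and the spatial alignment of the three side outputs before concatenation are assumptions, and a real CBAM implementation would replace the nn.Identity placeholders.

```python
import torch
import torch.nn as nn

class SideBranch(nn.Module):
    """One side-output branch: attention recalibration -> 32-channel conv ->
    fully convolutional head producing 3 maps (assumed: 1 class score + 2 offsets)."""
    def __init__(self, cin, head_kernel, head_stride, attention):
        super().__init__()
        self.attention = attention                       # plug in an existing CBAM module here
        self.reduce = nn.Conv2d(cin, 32, kernel_size=1)
        self.head = nn.Conv2d(32, 3, kernel_size=head_kernel, stride=head_stride)
    def forward(self, x):
        return self.head(self.reduce(self.attention(x)))

class FusionLayer(nn.Module):
    """Concatenate the three side outputs (assumed spatially aligned) and mix them with
    a 1x1 convolution; the softmax over the class channel is applied in the loss."""
    def __init__(self):
        super().__init__()
        self.mix = nn.Conv2d(9, 3, kernel_size=1)
    def forward(self, s1, s2, s3):
        return self.mix(torch.cat([s1, s2, s3], dim=1))

# the three branches with the kernel/stride values given in the text
branches = [SideBranch(64, 18, 4, nn.Identity()),
            SideBranch(128, 7, 2, nn.Identity()),
            SideBranch(1, 1, 1, nn.Identity())]
```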
验证模块包括特征提取模块、径向基网络、中心更新模块;The verification module includes feature extraction module, radial basis network, and center update module;
特征提取模块基于加权融合后的特征对病理图像有丝分裂细胞进行初步分类,径向基网络使用嵌入径向基函数的卷积网络,径向基网络根据有丝样本特征,对病理图像有丝分裂细胞进行进一步类别判断,得到病理图像有丝分裂检测结果,中心更新模块用于迭代更新径向基网络的径向基函数中心,通过不断地迭代更新径向基网络的基函数中心,来逐步减小网络产生的误差,进一步优化模型,提升有丝分裂检测的性能。The feature extraction module performs a preliminary classification of mitotic cells in the pathology image based on the weighted fused features; the radial basis network is a convolutional network with embedded radial basis functions and makes a further category judgment on the mitotic cells according to the features of the mitosis samples, yielding the mitosis detection result of the pathology image; the center update module iteratively updates the radial basis function centers of the radial basis network, gradually reducing the error produced by the network, further optimizing the model and improving mitosis detection performance.
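A minimal sketch of the radial-basis verification head may clarify how embedded radial basis functions produce the final category judgment. The Gaussian kernel form, the learned per-centre widths and the sigmoid combination layer are assumptions consistent with the description, not the exact network of the invention; the `set_centers` method is where the centre update module would write new centres.

```python
import math
import torch
import torch.nn as nn

class RBFHead(nn.Module):
    """Radial-basis verification head: Gaussian kernels around K centres in feature
    space, combined by a learned linear layer into a mitosis probability."""
    def __init__(self, feat_dim, num_centers, sigma=1.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_centers, feat_dim))
        self.log_sigma = nn.Parameter(torch.full((num_centers,), math.log(sigma)))
        self.combine = nn.Linear(num_centers, 1)
    def forward(self, feats):                            # feats: (B, feat_dim)
        d2 = torch.cdist(feats, self.centers) ** 2       # squared distance to each centre
        phi = torch.exp(-d2 / (2 * torch.exp(self.log_sigma) ** 2))
        return torch.sigmoid(self.combine(phi))          # (B, 1) mitosis probability
    @torch.no_grad()
    def set_centers(self, new_centers):
        # overwrite the centres, e.g. with K-means cluster centres (see S4)
        self.centers.copy_(torch.as_tensor(new_centers, dtype=self.centers.dtype,
                                           device=self.centers.device))
```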
S3、使用所述图像块样本对所述候选模块进行训练,得到训练后的候选模块;S3. Use the image block samples to train the candidate module and obtain the trained candidate module;
将乳腺病理图像输入训练后的候选模块,得到锚点阵列,其中正锚点作为有丝分裂细胞候选点,以有丝分裂细胞候选点为中心从乳腺病理图像中提取候选块,根据每个有丝分裂细胞候选点坐标与手工标注位置坐标的距离确定每个候选块的标签。对候选模块的训练过程参考图3。Input the breast pathology image into the trained candidate module to obtain an anchor array, in which positive anchors serve as mitotic cell candidate points; candidate patches are extracted from the breast pathology image centered on the candidate points, and the label of each candidate patch is determined by the distance between the coordinates of its candidate point and the manually annotated position coordinates. Refer to Figure 3 for the training process of the candidate module.
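The conversion from the candidate module's anchor grid to labelled candidate patches can be sketched as follows; the anchor stride, score threshold and matching distance are illustrative assumptions, not values given in the text.

```python
import numpy as np

def extract_candidates(prob_map, offset_map, image, annotations,
                       stride=8, patch=60, score_thr=0.5, match_dist=12):
    """Turn the candidate module's anchor grid into labelled candidate patches.

    Anchors whose mitosis probability exceeds `score_thr` become candidate points;
    a patch is cut around each point and labelled positive when the point lies
    within `match_dist` pixels of a manual annotation.
    """
    half = patch // 2
    h, w = image.shape[:2]
    candidates = []
    for r, c in zip(*np.where(prob_map > score_thr)):
        # map the anchor back to image coordinates and apply the predicted offset
        cy = int(np.clip(r * stride + stride // 2 + offset_map[0, r, c], half, h - half))
        cx = int(np.clip(c * stride + stride // 2 + offset_map[1, r, c], half, w - half))
        label = int(any((cy - ar) ** 2 + (cx - ac) ** 2 <= match_dist ** 2
                        for ar, ac in annotations))
        candidates.append((image[cy - half:cy + half, cx - half:cx + half], (cy, cx), label))
    return candidates
```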
候选模块训练时的总损失函数由侧输出层的损失函数和融合层的损失函数两部分组成;The total loss function during candidate module training consists of two parts: the loss function of the side output layer and the loss function of the fusion layer;
侧输出层的损失函数包括位置损失、类别的损失。The loss function of the side output layer includes position loss and category loss.
侧输出层的每个输出的类别损失由交叉熵函数定义,表示如下:The category loss for each output of the side output layer is defined by the cross-entropy function, expressed as follows:
ℓ_cls^(s) = −(1/(N₊+N₋)) Σ_{i=1..N₊+N₋} [ l_i·log p̂_i^(s) + (1−l_i)·log(1−p̂_i^(s)) ]
其中,ℓ_cls^(s)表示侧输出层的第s个输出的类别损失,N₊表示正样本图像块的数量;N₋表示负样本图像块的数量,l_i表示第i个输入图像块样本的二值标签真值。Here, ℓ_cls^(s) denotes the category loss of the s-th side output, N₊ is the number of positive sample patches, N₋ is the number of negative sample patches, and l_i is the binary ground-truth label of the i-th input patch sample.
融合层的损失函数包括位置损失、类别的损失。The loss function of the fusion layer includes position loss and category loss.
融合层的位置损失使用欧几里得L2范数来定义,表示如下:The position loss of the fusion layer is defined using the Euclidean L2 norm, which is expressed as follows:
ℓ_loc^(fuse) = (1/N₊) Σ_i 1(l_i=1)·‖(x_i, y_i) − (x̂_i, ŷ_i)‖₂²
其中1(·)为指示函数,1(l_i=1)表示在l_i=1时取值为1,l_i表示第i个输入图像块样本的二值标签真值。Here 1(·) is the indicator function, 1(l_i=1) equals 1 when l_i=1, and l_i is the binary ground-truth label of the i-th input patch sample.
候选模块训练时的总损失函数的表达式如下:The expression of the total loss function during candidate module training is as follows:
L(W) = Σ_{s=1..S} [ ℓ_cls^(s)(l, p̂^(s)) + γ·ℓ_loc^(s)((x,y), (x̂,ŷ)) ] + ℓ_cls^(fuse)(l, p̂^(fuse)) + γ·ℓ_loc^(fuse)((x,y), (x̂,ŷ))
其中,L(W)为候选模块训练时的总损失函数,W表示在候选模块中要学习的所有参数,ℓ_cls^(s)表示侧输出层的第s个输出的类别损失,l表示图像块样本的二值标签真值,负样本图像块用0标识,正样本用1标识,p̂^(s)为每个侧输出层Softmax分类器得到的[0,1]范围内的预测有丝分裂细胞核概率;ℓ_cls^(fuse)表示融合层的类别的损失,γ是一个平衡分类和回归任务之间成本的超参数,ℓ_loc^(s)为侧输出层的位置损失,(x,y)为图像块采样过程中所记录的细胞核中心偏移量的真值,(x̂,ŷ)为候选模块估计的候选块的偏移量,ℓ_loc^(fuse)为融合层的位置损失。Here, L(W) is the total loss function for training the candidate module, W denotes all parameters to be learned in the candidate module, ℓ_cls^(s) denotes the category loss of the s-th side output, l is the binary ground-truth label of a patch sample (0 for negative patches, 1 for positive), p̂^(s) is the predicted mitotic nucleus probability in [0,1] given by the Softmax classifier of each side output layer; ℓ_cls^(fuse) is the category loss of the fusion layer, γ is a hyperparameter balancing the cost between classification and regression tasks, ℓ_loc^(s) is the position loss of the side output layer, (x,y) is the ground-truth nucleus center offset recorded during patch sampling, (x̂,ŷ) is the candidate-patch offset estimated by the candidate module, and ℓ_loc^(fuse) is the position loss of the fusion layer.
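A small sketch of how such a total loss could be assembled is given below, assuming per-patch scalar predictions. The binary cross-entropy class terms and squared-error position terms mirror the formulas above; computing the position loss only on positive patches and weighting side and fusion terms equally are assumptions.

```python
import torch
import torch.nn.functional as F

def candidate_loss(side_cls, side_loc, fuse_cls, fuse_loc, labels, offsets, gamma=1.0):
    """Total candidate-module loss: a class term and a gamma-weighted position term
    for every side output plus the fusion output."""
    pos = labels > 0.5                                   # positive (mitotic) patches
    loss = 0.0
    for p_cls, p_loc in zip(side_cls, side_loc):         # one pair per side output
        loss = loss + F.binary_cross_entropy(p_cls, labels)
        if pos.any():
            loss = loss + gamma * F.mse_loss(p_loc[pos], offsets[pos])
    loss = loss + F.binary_cross_entropy(fuse_cls, labels)
    if pos.any():
        loss = loss + gamma * F.mse_loss(fuse_loc[pos], offsets[pos])
    return loss
```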
S4、构建一个特征提取器,特征提取器和验证模块的特征提取模块相同,验证模块的特征提取模块的结构图参考图5。特征提取模块的输入首先经过第一3×3卷积、第一BN(Batch Normalization)层、第一Maxpooling得到第一输出,输出1依次通过第二3×3卷积、第二BN层、第一Relu激活函数、第三3×3卷积、第三BN层得到第二输出,第一输出通过跳跃连接和第二输出进行相加得到第三输出;第三输出通过第二Relu激活函数得到第四输出,第四输出依次通过第四3×3卷积、第四BN层、第三Relu激活函数、第五3×3卷积、第五BN层、得到第五输出,第四输出经过第一1×1卷积后,通过跳跃连接和第五输出相加得到第六输出;第六输出经过第四Relu激活函数得到第七输出,第七输出依次经过第六3×3卷积、第六BN层、第五Relu激活函数、第七3×3卷积、第七BN层得到第八输出,第七输出经过第二1×1卷积后,通过跳跃连接和第八输出相加得到第九输出;第九输出经过第六Relu激活函数得到第十输出,第十输出依次经过第八3×3卷积、第八BN层、第七Relu激活函数、第九3×3卷积、第九BN层得到第十一输出,第十输出经过第三1×1卷积后,通过跳跃连接和第十一输出相加得到第十二输出;第十二输出依次通过第十3×3卷积、第十BN层、第八Relu激活函数、第二Maxpooling得到最终输出。S4. Construct a feature extractor. The feature extractor is the same as the feature extraction module of the verification module. Refer to Figure 5 for the structure diagram of the feature extraction module of the verification module. The input of the feature extraction module first passes through the first 3×3 convolution, the first BN (Batch Normalization) layer, and the first Maxpooling to obtain the first output. The output 1 sequentially passes through the second 3×3 convolution, the second BN layer, and the first Maxpooling. A Relu activation function, the third 3×3 convolution, and the third BN layer obtain the second output. The first output is added to the second output through skip connection to obtain the third output; the third output is obtained through the second Relu activation function. The fourth output, the fourth output passes through the fourth 3×3 convolution, the fourth BN layer, the third Relu activation function, the fifth 3×3 convolution, the fifth BN layer, and the fifth output is obtained. The fourth output passes through After the first 1×1 convolution, the sixth output is obtained by adding the fifth output through skip connection; the sixth output is passed through the fourth Relu activation function to obtain the seventh output, and the seventh output is sequentially passed through the sixth 3×3 convolution, The sixth BN layer, the fifth Relu activation function, the seventh 3×3 convolution, and the seventh BN layer obtain the eighth output. After the seventh output undergoes the second 1×1 convolution, it is added to the eighth output through a skip connection. The ninth output is obtained; the ninth output passes through the sixth Relu activation function to obtain the tenth output, and the tenth output passes through the eighth 3×3 convolution, the eighth BN layer, the seventh Relu activation function, and the ninth 3×3 convolution in sequence. , the ninth BN layer obtains the eleventh output. After the tenth output undergoes the third 1×1 convolution, it is added to the eleventh output through skip connection to obtain the twelfth output; the twelfth output passes through the tenth 3× 3 convolutions, the tenth BN layer, the eighth Relu activation function, and the second Maxpooling get the final output.
将提取的候选块输入验证模块的特征提取模块进行分类预训练,得到特征提取模块的初始权值;径向基网络基于候选块分类任务引导径向基函数的中心初始化,并基于分类任务通过中心更新模块迭代更新确定最优径向基函数中心,利用不同的径向基函数中心更好地处理有丝分裂细胞形态结构的多变性,进一步提高有丝细胞检测准确率,具体为:将有丝分裂样本输入所述特征提取器,并用所述初始权值更新特征提取器权重,所述特征提取器输出有丝分裂样本的特征表达,对所述有丝分裂样本的特征表达运行K-means聚类算法初始化嵌入径向基函数的卷积网络的径向基函数中心,使用初始化后的嵌入径向基函数的卷积网络对特征提取模块的输出进行再分类,将再分类后得到的径向基网络权值重新赋予聚类算法进行有丝聚类得到新的聚类中心,利用这一组新的聚类中心更新下一轮迭代中径向基网络的基函数中心,采用上述的迭代聚类更新确定最优径向基函数中心。对验证模块的训练过程参考图4。The extracted candidate patches are fed into the feature extraction module of the verification module for classification pre-training to obtain the initial weights of the feature extraction module. The radial basis network uses the candidate-patch classification task to guide the initialization of the radial basis function centers, and the optimal centers are determined by iterative updates of the center update module driven by the classification task; using different radial basis function centers better handles the variability of the morphological structure of mitotic cells and further improves detection accuracy. Specifically: the mitosis samples are input into the feature extractor, whose weights are updated with the initial weights; the feature extractor outputs the feature representations of the mitosis samples; a K-means clustering algorithm is run on these feature representations to initialize the radial basis function centers of the convolutional network with embedded radial basis functions; the initialized network re-classifies the output of the feature extraction module; the radial basis network weights obtained after re-classification are fed back to the clustering algorithm, which clusters the mitosis features again to obtain new cluster centers; this new set of cluster centers updates the basis function centers of the radial basis network in the next iteration, and the above iterative clustering update determines the optimal radial basis function centers. Refer to Figure 4 for the training process of the verification module.
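The task-guided centre initialization and the iterative clustering update can be sketched as an alternating loop over clustering and training; scikit-learn's KMeans, the number of rounds and the elided training step are assumptions, and `RBFHead.set_centers` refers to the head sketched earlier.

```python
import torch
from sklearn.cluster import KMeans

def init_and_refine_centers(feature_extractor, rbf_head, mitosis_patches, k, rounds=5):
    """Initialise the RBF centres by K-means on the features of the mitotic samples,
    then alternate re-training and re-clustering so each round's centres come from
    the refined feature space."""
    for _ in range(rounds):
        with torch.no_grad():
            feats = feature_extractor(mitosis_patches).cpu().numpy()   # (N, D) embeddings
        centers = KMeans(n_clusters=k, n_init=10).fit(feats).cluster_centers_
        rbf_head.set_centers(centers)                                  # see the RBFHead sketch
        # ... train feature_extractor + rbf_head on labelled candidate patches here,
        #     then loop so the next clustering runs on the updated features
    return rbf_head
```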
径向基网络的训练使用二元交叉熵损失函数对候选块进行类别判断学习:The training of the radial basis network uses the binary cross-entropy loss function to perform category judgment learning on candidate blocks:
L_rbf = −(1/N) Σ_{i=1..N} [ l_i·log p̂_i + (1−l_i)·log(1−p̂_i) ]
其中,L_rbf为径向基网络的类别损失,N表示样本的个数,l_i表示第i个候选块的二值标签真值,非有丝分裂细胞用0标识,有丝分裂细胞用1标识,p̂_i为不同径向基函数的加权组合得到的第i个候选块细胞核有丝分裂预测概率。Here, L_rbf is the category loss of the radial basis network, N is the number of samples, l_i is the binary ground-truth label of the i-th candidate patch (0 for non-mitotic cells, 1 for mitotic cells), and p̂_i is the predicted mitosis probability of the i-th candidate patch obtained from the weighted combination of the radial basis functions.
S5、连接训练好的候选模块和验证模块的径向基网络,构成有丝分裂检测模型,将待检测的整幅乳腺病理组织图像输入所述有丝分裂检测模型,得到有丝分裂细胞的检测结果。通过将图片输入有丝分裂检测模型的候选模块得到候选锚点,将得到的候选锚点输入有丝分裂检测模型的径向基网络进行进一步细化分类,得到最终乳腺病理组织图像细胞有丝分裂检测结果。S5. Connect the radial basis network of the trained candidate module and the verification module to form a mitosis detection model. Input the entire breast pathological tissue image to be detected into the mitosis detection model to obtain the detection results of mitotic cells. Candidate anchor points are obtained by inputting the image into the candidate module of the mitosis detection model, and the obtained candidate anchor points are input into the radial basis network of the mitosis detection model for further refinement and classification, and the final cell mitosis detection results of breast pathological tissue images are obtained.
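Putting the two trained parts together, whole-image inference could look like the sketch below; the output shapes of the candidate module, the anchor stride and both thresholds are assumptions, and the feature extractor is assumed to return a flat embedding per patch.

```python
import torch

@torch.no_grad()
def detect_mitosis(image_tensor, candidate_module, feature_extractor, rbf_head,
                   stride=8, patch=60, score_thr=0.5, verify_thr=0.5):
    """End-to-end inference: the candidate module scans the whole pathology image and
    proposes candidate points; a patch around each point is verified by the RBF network."""
    prob_map, offset_map = candidate_module(image_tensor)   # assumed (1,1,H',W') and (1,2,H',W')
    half = patch // 2
    detections = []
    ys, xs = torch.nonzero(prob_map[0, 0] > score_thr, as_tuple=True)
    for r, c in zip(ys.tolist(), xs.tolist()):
        cy = int(r * stride + stride // 2 + offset_map[0, 0, r, c])
        cx = int(c * stride + stride // 2 + offset_map[0, 1, r, c])
        if half <= cy < image_tensor.shape[-2] - half and half <= cx < image_tensor.shape[-1] - half:
            crop = image_tensor[..., cy - half:cy + half, cx - half:cx + half]
            score = rbf_head(feature_extractor(crop)).item()          # verification probability
            if score > verify_thr:
                detections.append((cy, cx, score))
    return detections
```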
本发明还提出任务引导的径向基网络对乳腺病理图像有丝分裂检测系统,包括:The present invention also proposes a task-guided radial basis network mitosis detection system for breast pathology images, including:
数据获取模块,用于获取乳腺病理图像,从乳腺病理图像中采集图像块样本,图像块样本包含正样本和负样本,正样本包含图像块样本的类别信息和中心偏移量信息,负样本仅包含图像块样本的类别信息,并从图像块样本中选取有丝分裂样本;The data acquisition module is used to obtain breast pathology images and collect image block samples from breast pathology images. The image block samples contain positive samples and negative samples. The positive samples contain the category information and center offset information of the image block samples. The negative samples only Contains category information of image block samples, and selects mitotic samples from the image block samples;
模型构建模块,用于构建有丝分裂检测的训练模型,包括:候选模块和验证模块,候选模块包括特征提取层、侧输出层、融合层;The model building module is used to build a training model for mitosis detection, including: a candidate module and a verification module. The candidate module includes a feature extraction layer, a side output layer, and a fusion layer;
特征提取层输出不同尺度的特征图,侧输出层利用深度监督与注意力机制对每个特征提取层输出的不同尺度的特征图进行监督学习并输出,融合层对不同尺度的侧输出层的输出进行加权融合,得到加权融合后的特征;The feature extraction layer outputs feature maps of different scales. The side output layer uses deep supervision and attention mechanisms to supervised learning and output the feature maps of different scales output by each feature extraction layer. The fusion layer outputs the side output layers of different scales. Perform weighted fusion to obtain the features after weighted fusion;
验证模块包括特征提取模块、径向基网络、中心更新模块;The verification module includes feature extraction module, radial basis network, and center update module;
特征提取模块基于加权融合后的特征对病理图像有丝分裂细胞进行初步分类,径向基网络使用嵌入径向基函数的卷积网络,对特征提取模块初步分类结果进行进一步类别判断,得到病理图像有丝分裂检测结果,中心更新模块用于迭代更新径向基网络的径向基函数中心;The feature extraction module performs a preliminary classification of mitotic cells in pathological images based on the weighted fusion features. The radial basis network uses a convolutional network embedded with a radial basis function to further classify the preliminary classification results of the feature extraction module to obtain mitosis detection in pathological images. As a result, the center update module is used to iteratively update the radial basis function center of the radial basis network;
模型第一阶段训练模块,用于使用所述图像块样本对所述候选模块进行训练,得到训练后的候选模块;将乳腺病理图像输入训练后的候选模块,得到锚点阵列,其中正锚点作为有丝分裂细胞候选点,以有丝分裂细胞候选点为中心从乳腺病理图像中提取候选块,根据每个有丝分裂细胞候选点坐标与手工标注位置坐标的距离确定每个候选块的标签;The first stage training module of the model is used to train the candidate module using the image block samples to obtain the trained candidate module; input the breast pathology image into the trained candidate module to obtain the anchor point array, in which the positive anchor point As mitotic cell candidate points, candidate blocks are extracted from the breast pathology image with the mitotic cell candidate point as the center, and the label of each candidate block is determined based on the distance between the coordinates of each mitotic cell candidate point and the manually marked position coordinates;
模型第二阶段训练模块,用于根据下述方法训练验证模块:构建一个特征提取器,所述特征提取器和验证模块的特征提取模块相同,将提取的候选块输入验证模块的特征提取模块进行分类预训练,得到特征提取模块的初始权值;径向基网络基于候选块分类任务引导径向基函数的中心初始化,并通过中心更新模块迭代更新确定最优径向基函数中心;将有丝分裂样本输入所述特征提取器,并用所述初始权值更新特征提取器权重,所述特征提取器输出有丝分裂样本的特征表达,对所述有丝分裂样本的特征表达运行K-means聚类算法初始化嵌入径向基函数的卷积网络的径向基函数中心,使用初始化后的嵌入径向基函数的卷积网络对特征提取模块的输出进行再分类,采用迭代聚类更新确定最优径向基函数中心;The second-stage training module of the model is used to train the verification module according to the following method: construct a feature extractor, which is the same as the feature extraction module of the verification module, and input the extracted candidate blocks into the feature extraction module of the verification module. Classification pre-training to obtain the initial weight of the feature extraction module; the radial basis network guides the center initialization of the radial basis function based on the candidate block classification task, and determines the optimal radial basis function center through iterative updates of the center update module; mitotic samples are Input the feature extractor and update the feature extractor weight with the initial weight. The feature extractor outputs the feature expression of the mitotic sample. The K-means clustering algorithm is run on the feature expression of the mitotic sample to initialize the embedding radial direction. The radial basis function center of the convolutional network of the basis function uses the initialized convolutional network embedded with the radial basis function to reclassify the output of the feature extraction module, and uses iterative clustering update to determine the optimal radial basis function center;
检测模块,用于连接训练好的候选模块和验证模块的径向基网络,构成有丝分裂检测模型,将待检测的整幅乳腺病理组织图像输入所述有丝分裂检测模型,得到有丝分裂细胞的检测结果。The detection module is used to connect the radial basis network of the trained candidate module and the verification module to form a mitosis detection model. The entire breast pathological tissue image to be detected is input into the mitosis detection model to obtain the detection results of mitotic cells.
在一示例性实施例中,包括一种计算机可读存储介质,计算机可读存储介质存储有计算机程序,计算机程序被处理器执行时实现上述的乳腺病理图像有丝分裂检测方法的步骤。In an exemplary embodiment, a computer-readable storage medium is included. The computer-readable storage medium stores a computer program. When the computer program is executed by a processor, the steps of the mitosis detection method for breast pathology images are implemented.
请参阅图6,在一示例性实施例中,还包括一种电子设备,包括至少一处理器、至少一存储器、以及至少一通信总线。Referring to FIG. 6 , in an exemplary embodiment, an electronic device is further included, including at least one processor, at least one memory, and at least one communication bus.
其中,存储器上存储有计算机程序,计算机程序包括计算机可读指令,处理器通过通信总线调用存储器中存储的计算机可读指令,执行上述的乳腺病理图像有丝分裂检测。Wherein, a computer program is stored in the memory, and the computer program includes computer-readable instructions. The processor calls the computer-readable instructions stored in the memory through the communication bus to perform the above-mentioned mitosis detection of breast pathology images.
在本实施例中,将本发明方法的性能评价指标在基于质心标注的ICPR 2014和AMIDA2013有丝分裂细胞公开数据集上分别与其他几种传统方法进行了对比,对比结果如表1、表2所示。In this example, the performance evaluation indicators of the method of the present invention are compared with several other traditional methods on the ICPR 2014 and AMIDA2013 mitotic cell public data sets based on centroid annotation. The comparison results are shown in Table 1 and Table 2 .
表1表示不同方法在ICPR 2014验证集上的结果,其中的方法:DeepMitosis(深度检测),MSSN(multi-scale and similarity learning convnets,多尺度和相似性学习卷积网络),SegMitos(mitosis segmentation model,有丝分裂分割模型),RCNN based(regional convolutional neural network based,基于区域卷积神经网络),Resnet-101(Residual Network,残差网络),BIA+PMS(Box-supervised Instance-Aware and Pseudo-Mask-supervised Semantic,框监督实例感知和伪掩码监督语义)。Table 1 shows the results of different methods on the ICPR 2014 verification set, among which the methods: DeepMitosis (depth detection), MSSN (multi-scale and similarity learning convnets, multi-scale and similarity learning convolutional network), SegMitos (mitosis segmentation model) , mitosis segmentation model), RCNN based (regional convolutional neural network based, based on regional convolutional neural network), Resnet-101 (Residual Network, residual network), BIA+PMS (Box-supervised Instance-Aware and Pseudo-Mask- supervised Semantic, box-supervised instance-aware and pseudo-mask supervised semantics).
表1不同方法在ICPR 2014验证集上的结果Table 1 Results of different methods on the ICPR 2014 validation set
表2表示不同方法在AMIDA2013验证集上的结果,其中的方法LightweightDNN(Lightweight Deep Neural Networks,轻量级深度神经网络),DeepResNet+HoughVoting(Deep Residual Network and Hough voting,深度残差网络与霍夫投票),SegMitos-random(mitosis segmentation model with the random concentric label,带有随机同心圆标签的有丝分裂分割模型),PartMitosis(Partially Supervised Deep Learningnetwork forMitosis Detection,基于有丝分裂检测的部分监督深度学习网络)。Table 2 shows the results of different methods on the AMIDA2013 verification set, including methods LightweightDNN (Lightweight Deep Neural Networks, lightweight deep neural network), DeepResNet+HoughVoting (Deep Residual Network and Hough voting, deep residual network and Hough voting) ), SegMitos-random (mitosis segmentation model with the random concentric label, mitosis segmentation model with random concentric labels), PartMitosis (Partially Supervised Deep Learning network for Mitosis Detection, partially supervised deep learning network based on mitosis detection).
表2不同方法在AMIDA2013验证集上的结果Table 2 Results of different methods on the AMIDA2013 validation set
对比结果表明,本发明采用的利用深监督机制的检测模型与径向基函数的深度卷积网络模型进行有丝分裂检测的方法在准确率、召回率、F分数这些性能评价指标上均明显优于其他各种方法。The comparison results show that the proposed method, which performs mitosis detection with a deeply supervised candidate detection model and a deep convolutional network with radial basis functions, is clearly superior to the other methods in accuracy, recall and F-score.
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者系统不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者系统所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者系统中还存在另外的相同要素。It should be noted that, as used herein, the terms "include", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or system that includes a list of elements not only includes those elements, but It also includes other elements not expressly listed or that are inherent to the process, method, article or system. Without further limitation, an element defined by the statement "comprises a..." does not exclude the presence of other identical elements in the process, method, article, or system that includes that element.
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本发明。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本发明的精神或范围的情况下,在其它实施例中实现。因此,本发明将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。The above description of the disclosed embodiments enables those skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be practiced in other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.