The invention belongs to the technical field of image segmentation, and in particular relates to an image segmentation method based on a loss function of complementary information.
Accurate, automatic computer segmentation of lesion regions in ultrasound images is crucial for computer-assisted clinical examination and treatment. The task can be formulated as a binary labeling problem on a single ultrasound image: a computer-assisted system automatically labels the lesion regions in the ultrasound data at the pixel level.
Recently, deep learning models, relying on deep network architectures and large numbers of trainable parameters, have shown excellent performance in image processing. Many researchers have proposed novel, high-performing multi-layer deep learning architectures, but the way these models are trained has seen little innovation.
The common training approach for multi-layer deep learning models is deep supervision: a loss function directly constrains the feature map of every layer of the model. However, the cross-entropy loss can only quantify the difference between each layer's feature map and the ground truth; it cannot constrain the relationships between feature maps. The model therefore cannot use the complementary information between different feature maps to optimize each feature map in a targeted way, and its segmentation accuracy remains unsatisfactory.
Summary of the Invention
The purpose of the embodiments of this specification is to provide an image segmentation method based on a loss function of complementary information.
To solve the above technical problems, the embodiments of the present application are implemented in the following ways:
The present application provides an image segmentation method based on a loss function of complementary information, the method comprising:
obtaining an image to be segmented;
inputting the image to be segmented into a multi-layer deep learning model to obtain a mask covering the lesion region;
wherein the multi-layer deep learning model is trained under the supervision of a loss function that includes a complementary-information-based false positive-negative loss function, which characterizes the complementary information between feature maps of different layers.
In one embodiment, the complementary-information-based false positive-negative loss function includes a false positive loss function and a false negative loss function;
wherein the false positive loss function suppresses the deep, low-spatial-resolution feature maps to reduce the chance that non-lesion regions are mistaken for lesion regions;
and the false negative loss function constrains the shallow, high-spatial-resolution feature maps to reduce the chance that lesion regions are misclassified as non-lesion regions.
In one embodiment, the complementary-information-based false positive-negative loss function is determined as a weighted sum of the false positive loss function and the false negative loss function.
In one embodiment, the false positive loss function $L_{FP}^{i}$ of the i-th layer feature map Fi is:

$L_{FP}^{i} = L_{Dice}(FP_g, \overline{GT})$

where GT denotes the true lesion region corresponding to the current image to be segmented, $\overline{GT}$ is the inverse of GT, FPg is the false positive segmentation mask obtained from the feature map Fg one layer deeper than the current layer's feature map, and $L_{Dice}$ denotes the Dice loss value between the false positive segmentation mask FPg and $\overline{GT}$.
In one embodiment, the false positive segmentation mask FPg is computed by applying a 1×1 convolutional layer on the feature map Fg followed by a sigmoid function:

FPg = Sigmoid(Conv(Fg\GT))

where Conv denotes the 1×1 convolutional layer on the feature map Fg, Sigmoid() denotes the sigmoid activation function, and Fg\GT denotes the set-difference operation between the feature map Fg and the ground truth lesion region.
In one embodiment, the false negative loss function is:

$L_{FN}^{i} = L_{Dice}(FN_i, R_g)$

where FNi is the false negative segmentation template corresponding to the i-th layer feature map Fi, Rg denotes the segmentation result of the feature map Fg, and $L_{Dice}$ denotes the Dice loss value between FNi and Rg.
In one embodiment, the false negative segmentation template FNi is computed from the true lesion region GT corresponding to the current image to be segmented and the current i-th layer feature map Fi:

FNi = GT - GT∩Sigmoid(Conv(Fi))

where Sigmoid() denotes the sigmoid activation function and Conv denotes a 1×1 convolutional layer.
In one embodiment, the loss function further includes a segmentation loss function.
In one embodiment, the segmentation loss function includes a Dice loss function and a CE loss function; the Dice loss function and the CE loss function are applied at every layer of the multi-layer deep learning model to constrain the difference between the current layer's feature map and the ground truth.
In one embodiment, the loss function is determined as a weighted sum of the complementary-information-based false positive-negative loss function and the segmentation loss function.
As can be seen from the technical solutions provided by the embodiments above, this solution makes full use of the complementary information between feature maps of different layers and optimizes each layer's feature map in a targeted manner to improve the segmentation results, achieving higher detection accuracy.
To illustrate the embodiments of this specification or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are clearly only some of the embodiments recorded in this specification; a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of the image segmentation method based on a loss function of complementary information provided by the present application;
FIG. 2 is a visualization of the feature map generation process in the complementary-information-based false positive-negative loss function provided by the present application;
FIG. 3 is a schematic diagram of the training method of the multi-layer deep learning model provided by the present application.
To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. The described embodiments are clearly only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of this specification.
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of explanation rather than limitation, to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the specific embodiments described herein without departing from the scope or spirit of the present application. Other embodiments derived from this description will likewise be apparent to skilled persons. The description and examples of this application are exemplary only.
As used herein, "comprise", "include", "have", "contain", and the like are open-ended terms, meaning including but not limited to.
In the related art, for a deep learning model trained with the deep supervision method, the encoder of a multi-layer model produces four feature maps of different spatial resolutions (F1, F2, F3, F4). The decoder of the multi-layer model then takes these four feature maps as input and successively generates three decoder feature maps (D1, D2, D3) through upsampling and convolution operations. Under deep supervision, each decoder feature map (D1, D2, D3) is measured against the ground truth through a loss function, yielding three corresponding loss values (L1, L2, L3). The model then uses these three loss values to update and optimize its network parameters.
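The deep-supervision scheme just described can be sketched numerically as follows. This is an illustrative stand-in only: the encoder and decoder are replaced by noisy soft masks, a plain Dice loss stands in for the per-layer loss, and none of the names come from the patent's reference implementation.

```python
import numpy as np

def dice_loss(pred, gt, eps=1e-6):
    # Dice loss: 1 - 2|A ∩ B| / (|A| + |B|), for soft masks in [0, 1]
    inter = (pred * gt).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

rng = np.random.default_rng(0)
gt = (rng.random((240, 240)) > 0.5).astype(float)   # stand-in ground-truth mask

# stand-ins for the decoder feature maps D1, D2, D3 (noisy copies of GT)
decoder_maps = [np.clip(gt + 0.1 * rng.standard_normal(gt.shape), 0.0, 1.0)
                for _ in range(3)]

# deep supervision: each decoder map is measured against the ground truth,
# producing the three loss values L1, L2, L3 that drive the parameter update
per_layer_losses = [dice_loss(d, gt) for d in decoder_maps]
total = sum(per_layer_losses)
```

Note that each per-layer loss depends only on that layer's map and the ground truth, which is exactly the limitation the next paragraph describes.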
However, the above deep supervision training method was proposed for natural images. Moreover, because it measures each layer's feature map (D1, D2, D3) against the ground truth separately, it cannot characterize the correlation between feature maps of different layers, so the deep learning model cannot mine, from the feature maps of other layers, the feature information that the current layer's feature map is missing.
Based on the above defects, this application combines the characteristics of lesion regions (also called diseased regions) in ultrasound images and proposes an image segmentation method based on a loss function of complementary information. The method specifically constrains the relationships between feature maps of different layers in a multi-layer deep learning model, so that during training the model optimizes and completes each layer's feature map in a targeted manner. This greatly improves lesion segmentation accuracy and opens up more possibilities for applying deep-learning-based computer-aided systems in clinical medical imaging.
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Referring to FIG. 1, a schematic flow chart of the image segmentation method based on a loss function of complementary information provided in an embodiment of the present application is shown.
As shown in FIG. 1, the image segmentation method based on a loss function of complementary information may include:
S110: Obtain an image to be segmented.
Specifically, the image to be segmented may be an ultrasound image, either an image from a stored data set or an image collected clinically.
S120: Input the image to be segmented into a multi-layer deep learning model to obtain a mask covering the lesion region;
wherein the multi-layer deep learning model is trained under the supervision of a loss function that includes a complementary-information-based false positive-negative loss function, which characterizes the complementary information between feature maps of different layers.
Specifically, the complementary-information-based false positive-negative loss function enables a multi-layer deep learning model (also referred to simply as the deep learning model, or as the ultrasound image segmentation network based on the complementary-information false positive-negative loss function) to further suppress non-lesion tissue noise in the shallow feature maps and to enhance more details of the target liver lesion tissue in the deep feature maps.
The complementary-information-based false positive-negative loss function used to train the multi-layer deep learning model characterizes the complementary information between feature maps of different layers and helps the model optimize and complete each layer's feature map in a targeted manner, improving both the speed and the accuracy of computer-aided automatic lesion detection in ultrasound images.
The complementary-information-based false positive-negative loss function includes a false positive loss function and a false negative loss function:
the false positive loss function suppresses the deep, low-spatial-resolution feature maps, minimizing the chance that non-lesion regions are mistaken for lesion regions;
the false negative loss function constrains the shallow, high-spatial-resolution feature maps, reducing the chance that lesion regions are misclassified as non-lesion regions.
FIG. 2 visualizes the generation of the feature maps used in the complementary-information-based false positive-negative loss function (or simply the false positive-negative loss function): FPg, Rg, and FNi. The yellow rectangle marks the tumor region segmented from the feature map Fg one layer deeper than the current layer; the blue rectangle marks the tumor region segmented from the current (i-th) layer's feature map Fi; and the green rectangle is the ground truth tumor region. Note that only the pixels in the pink regions are set to 1; all other pixels are set to 0.
This application proposes the complementary-information-based false positive-negative loss function LFPN to complete the tumor lesion regions missing from each layer's feature map Fi (1<i<4) of the multi-layer deep learning model and to suppress the non-tumor parts lying within the potential tumor region.
Specifically, we first obtain the false positive segmentation mask FPg from the feature map Fg one layer deeper than the current layer (see FIG. 2). FPg is computed by applying a 1×1 convolutional layer on Fg followed by a sigmoid function:

FPg = Sigmoid(Conv(Fg\GT))

where Conv denotes the 1×1 convolutional layer on the feature map Fg, Sigmoid() denotes the sigmoid activation function, and Fg\GT denotes the set-difference operation between the feature map Fg and the ground truth tumor region. We then use Dice, a classic metric in image segmentation, to express the difference between FPg and $\overline{GT}$; this quantity, denoted $L_{FP}^{i}$, guides the training of the deep learning model so as to eliminate the non-liver regions in FPg.
In this way, the non-liver regions in the feature map Fi are further suppressed. The false positive loss at the i-th layer is defined by:

$L_{FP}^{i} = L_{Dice}(FP_g, \overline{GT})$

where GT denotes the true tumor region corresponding to the current ultrasound image (i.e., the image to be segmented), and $\overline{GT}$ is the inverse of GT, i.e., pixels with value 1 in GT become 0 and pixels with value 0 become 1. $L_{Dice}$ denotes the Dice loss value between the mask FPg and $\overline{GT}$.
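As a minimal numerical sketch of this false positive term: here the learned 1×1 convolution Conv(F_g) is replaced by fixed random logits, and the set difference F_g\GT is approximated by zeroing the response inside GT after the sigmoid. All variable names are illustrative, not from the patent.

```python
import numpy as np

def dice_loss(a, b, eps=1e-6):
    # Dice loss: 1 - 2|A ∩ B| / (|A| + |B|)
    inter = (a * b).sum()
    return 1.0 - (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
gt = np.zeros((64, 64))
gt[20:40, 20:40] = 1.0                    # ground-truth lesion region GT

logits_g = rng.standard_normal(gt.shape)  # stand-in for Conv(F_g)

# F_g \ GT approximated by keeping only the response outside the lesion;
# FP_g is then the deeper layer's false positive mask
fp_g = sigmoid(logits_g) * (1.0 - gt)

gt_inv = 1.0 - gt                         # the inversion of GT
l_fp = dice_loss(fp_g, gt_inv)            # L_FP^i
```

The mask fp_g lives entirely outside the true lesion, so the Dice term only ever measures false positive responses.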
We further use GT and the current i-th layer feature map Fi to compute the false negative segmentation template FNi corresponding to Fi. Specifically, FNi is obtained by subtracting from GT the intersection of GT and Fi:

FNi = GT - GT∩Sigmoid(Conv(Fi))
where Conv denotes a 1×1 convolutional layer. In addition, we use Rg to denote the segmentation result of Fg, computed as:

Rg = Sigmoid(Conv(Fg))
Then we use Dice, the classic metric in image segmentation, to express the difference between FNi and Rg at the i-th layer, denoted $L_{FN}^{i}$. In this way, we can guide the deep learning model to better complete the tumor lesion regions missing from the i-th layer's feature map Fi. Mathematically, $L_{FN}^{i}$ is computed as:

$L_{FN}^{i} = L_{Dice}(FN_i, R_g)$

where $L_{Dice}$ denotes the Dice loss value between FNi and Rg.
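A sketch of the false negative term under the same simplifications: the learned convolutions are replaced by hand-set logits, and the soft intersection GT∩Sigmoid(Conv(F_i)) is taken as an element-wise product. The scenario is a layer F_i that finds only the top half of the lesion while the deeper F_g covers all of it; the names are illustrative.

```python
import numpy as np

def dice_loss(a, b, eps=1e-6):
    inter = (a * b).sum()
    return 1.0 - (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

gt = np.zeros((64, 64))
gt[16:48, 16:48] = 1.0                # ground-truth lesion GT

logits_i = np.full(gt.shape, -4.0)
logits_i[16:32, 16:48] = 4.0          # F_i responds only to the top half
logits_g = np.full(gt.shape, -4.0)
logits_g[16:48, 16:48] = 4.0          # deeper F_g covers the whole lesion

pred_i = sigmoid(logits_i)            # Sigmoid(Conv(F_i)) stand-in
r_g = sigmoid(logits_g)               # R_g = Sigmoid(Conv(F_g))

# FN_i = GT - GT ∩ pred_i : the lesion pixels the i-th layer missed
fn_i = gt - gt * pred_i
l_fn = dice_loss(fn_i, r_g)           # L_FN^i
```

Here fn_i highlights exactly the missed bottom half of the lesion, which the deeper layer's result r_g does contain.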
Finally, we take a weighted sum of the loss values $L_{FP}^{i}$ and $L_{FN}^{i}$ for each layer's feature map in the network to obtain the overall false positive-negative loss function LFPN:

$L_{FPN} = \sum_{i} \left( L_{FP}^{i} + \lambda_1 L_{FN}^{i} \right)$

where the weight λ1 can be set according to experimental results, for example to 1.
It will be appreciated that the Dice loss used in the loss functions of the embodiments of the present application can be replaced with other common loss functions, such as cross entropy (CE).
In one embodiment, the loss function further includes a segmentation loss function. The segmentation loss function LSeg includes a Dice loss function and a CE loss function; the Dice loss and the CE loss are applied at every layer of the multi-layer deep learning model to constrain the difference between the current layer's feature map and the ground truth, guiding the training of the model.
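A minimal sketch of such a per-layer segmentation loss, with binary cross-entropy standing in for the CE term. Equal weighting of the Dice and CE terms is an assumption here; the text does not specify how they are combined.

```python
import numpy as np

def dice_loss(pred, gt, eps=1e-6):
    # Dice loss: 1 - 2|A ∩ B| / (|A| + |B|)
    inter = (pred * gt).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def bce_loss(pred, gt, eps=1e-7):
    # pixel-wise binary cross entropy, averaged over the image
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-(gt * np.log(p) + (1.0 - gt) * np.log(1.0 - p)).mean())

def seg_loss(pred, gt):
    # per-layer segmentation loss: Dice term plus CE term (equal weights assumed)
    return dice_loss(pred, gt) + bce_loss(pred, gt)
```

A perfect prediction drives both terms to (nearly) zero, while an inverted prediction is penalized heavily by both.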
By adding LFPN and the segmentation loss function LSeg, we obtain the overall loss function LTotal of the network:

LTotal = LFPN + λ2·LSeg

where the weight λ2 balances LFPN and LSeg and can be set according to experimental results, for example to 1 (i.e., λ2 = 1).
The present application minimizes the overall loss function LTotal.
FIG. 3 illustrates the training method of the multi-layer deep learning model. First, the input image is fed into the multi-layer encoder, which successively produces feature maps of different spatial resolutions (F1, F2, F3, F4). These encoder feature maps are then fed into the multi-layer decoder, which produces feature maps of different spatial resolutions (D1, D2, D3). At each decoder layer, a Dice loss and a cross-entropy loss express the difference between that layer's decoder feature map and the ground truth, and the model's parameters are updated by backpropagation. In addition, the proposed false positive-negative loss function links the feature maps of adjacent layers and expresses their complementary loss values; updating the model's parameters by backpropagation against these values drives the model toward more accurate outputs.
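The loss assembly for one training step can be sketched as follows. Assumptions: the decoder outputs are stood in for by noisy soft masks of equal size, the CE term of LSeg is omitted for brevity, the learned 1×1 convolutions are dropped, and λ1 = λ2 = 1 as suggested in the description. This is an illustrative sketch, not the patent's implementation.

```python
import numpy as np

def dice_loss(a, b, eps=1e-6):
    inter = (a * b).sum()
    return 1.0 - (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

rng = np.random.default_rng(3)
gt = np.zeros((64, 64))
gt[16:48, 16:48] = 1.0                  # ground-truth lesion mask
gt_inv = 1.0 - gt

# noisy soft masks standing in for the decoder outputs D1 (shallowest) .. D3 (deepest)
d = [np.clip(gt + s * rng.standard_normal(gt.shape), 0.0, 1.0)
     for s in (0.3, 0.2, 0.1)]

lam1 = lam2 = 1.0                       # weights, set to 1 as in the description

# per-layer segmentation loss (CE term omitted for brevity)
l_seg = sum(dice_loss(p, gt) for p in d)

# false positive-negative loss over adjacent layer pairs
l_fpn = 0.0
for i in range(len(d) - 1):
    cur, deeper = d[i], d[i + 1]
    fp_g = deeper * gt_inv              # deeper layer's false positives
    fn_i = gt - gt * cur                # lesion pixels the current layer missed
    l_fpn += dice_loss(fp_g, gt_inv) + lam1 * dice_loss(fn_i, deeper)

l_total = l_fpn + lam2 * l_seg
```

In a real network, l_total would be minimized by backpropagation through the encoder and decoder parameters.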
The image segmentation method based on a loss function of complementary information provided by this application makes full use of the complementary information between feature maps of different layers and optimizes each layer's feature map in a targeted manner to improve the segmentation results, achieving higher detection accuracy.
This application has been experimentally verified; the results show that, compared with the prior art, the image segmentation method based on a loss function of complementary information significantly improves the accuracy of the multi-layer deep learning model.
We tested this application on 519 clinical liver tumor ultrasound images with a resolution of 240×240. For quantitative comparison, besides the frame rate, we compared five widely used metrics: Dice, Accuracy, Jaccard, APD, and HD.
The experimental results are as follows:
As the table shows, the method provided by this application outperforms existing methods on all mainstream detection-accuracy metrics.
It should be noted that the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises it.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for the relevant parts, refer to the description of the method embodiments.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2023/080093 (WO2024183000A1) | 2023-03-07 | 2023-03-07 | Image segmentation method based on complementary information-based loss function |
| Publication Number | Publication Date |
|---|---|
| WO2024183000A1 | 2024-09-12 |