Disclosure of Invention
The invention aims to provide a full-view adaptive segmentation network configuration method based on mass differentiation classification, so as to solve the problem that extracting a target mass region of interest manually or with existing detection techniques is cumbersome and inefficient.
In order to solve the above technical problems, the technical scheme of the invention is as follows: the full-view adaptive segmentation network configuration method based on mass differentiation classification comprises the following steps:
Step 1: preprocessing the full-view image to enhance the local contrast of the image and reduce noise;
Step 2: performing a morphological erosion operation on the preprocessed image to shrink the boundary of each tumor, obtaining a target image;
Step 3: classifying the target image into irregular multi-tumor, smooth multi-tumor, irregular single-tumor and smooth single-tumor by feeding it into a generative adversarial network;
Step 4: performing a morphological dilation operation on the classified images to restore the tumor regions shrunk by the erosion in step 2, obtaining images with classification labels;
Step 5: designing four segmentation network models according to the characteristics of the four types of target images in step 3, and segmenting the tumors with the respective models;
Step 6: measuring the segmentation effect of the full-view adaptive segmentation network based on mass differentiation classification according to the segmentation indexes.
Further, in step 1, preprocessing the full-view image includes histogram equalization, bilateral filtering, and gamma transformation.
Further, in step 5, the model design and mass segmentation are performed for the four segmentation networks as follows. For irregular multi-tumor images, an attention gate (AG) module is added on the basis of the R2U-Net network, an atrous spatial pyramid pooling (ASPP) module is added, and the original recurrent convolution layers are replaced by a cross feature accumulation (CFA) module. For smooth multi-tumor images, an AG module is added on the basis of the R2U-Net network together with a multi-tumor perception module, in which each convolution layer uses three kernels of different sizes to construct the feature map. For irregular single-tumor images, an AG module is added on the basis of the R2U-Net network, and the superpixel image is concatenated with the original image as the input of the network, so that the edge contour information of the irregular tumor is extracted. For smooth single-tumor images, an AG module is added on the basis of the R2U-Net network; the network trained with the AG module suppresses irrelevant regions, highlights useful features, and automatically learns to focus on target structures of different shapes and sizes.
Further, the segmentation indexes include sensitivity, specificity, accuracy, precision, F-measure, Dice coefficient, and Jaccard similarity coefficient.
The full-view adaptive segmentation network configuration method based on mass differentiation classification provided by the invention fully considers the segmentation of masses in the whole medical image and is more intelligent and efficient than manual extraction of the target mass region of interest. The method adopts the erosion and dilation operations of morphological processing, which improves the accuracy of mass classification while essentially preserving the original form of each mass, thereby improving the segmentation effect. By adaptively configuring the network to classify first and then segment, the masses are divided into four categories by the classification model and segmented by different segmentation networks, so that segmentation indexes such as accuracy and recall are greatly improved.
Detailed Description
The invention provides a full-view adaptive segmentation network configuration method based on mass differentiation classification, which is described in further detail below with reference to the accompanying drawings and specific embodiments. The advantages and features of the invention will become more apparent from the following description and the claims. It is noted that the drawings are in a greatly simplified form and are not to precise scale; they are intended only to facilitate a convenient and clear description of the embodiments of the invention.
The core idea of the invention is that the full-view adaptive segmentation network configuration method based on mass differentiation classification fully considers the segmentation of masses in the whole medical image and is more intelligent and efficient than manual extraction of the target mass region of interest. The method adopts the erosion and dilation operations of morphological processing, which improves the accuracy of mass classification while essentially preserving the original form of each mass, thereby improving the segmentation effect. By adaptively configuring the network to classify first and then segment, the masses are divided into four categories by the classification model and segmented by different segmentation networks, so that segmentation indexes such as accuracy and recall are greatly improved.
According to the above technical scheme, the invention provides a full-view adaptive segmentation network configuration method based on mass differentiation classification. Fig. 1 is a flow chart of the steps of the method provided by the embodiment of the invention. Referring to fig. 1, the full-view adaptive segmentation network configuration method based on mass differentiation classification comprises the following steps:
s11: preprocessing the full-view image to enhance the local contrast of the image and reduce noise;
s12: performing a morphological erosion operation on the preprocessed image to shrink the boundary of each tumor, obtaining a target image;
s13: classifying the target image into irregular multi-tumor, smooth multi-tumor, irregular single-tumor and smooth single-tumor by feeding it into a generative adversarial network;
s14: performing a morphological dilation operation on the classified images to restore the tumor regions shrunk by the erosion operation in step S12, obtaining images with classification labels;
s15: designing four segmentation network models according to the characteristics of the four types of target images in step S13, and segmenting the tumors with the respective models;
s16: further, in S15, a dynamic timer is set according to the priority of the node, and the node waiting time is shorter when the priority is higher.
In the embodiment of the invention, the image is first converted into a grey-scale image; the labels and other interference in the background are removed by a local threshold method and a small-area removal method, keeping the breast and pectoral muscle regions; then histogram equalization, bilateral filtering and gamma transformation are applied to increase the local contrast of the image, smooth and denoise it, and enhance it.
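The contrast-enhancement portion of this preprocessing can be sketched as follows. This is a hypothetical pure-Python illustration on an 8-bit grey-scale image represented as nested lists; a practical embodiment would use an image-processing library, and the function names are illustrative assumptions.

```python
def gamma_transform(image, gamma):
    """Gamma transformation: brightens (gamma < 1) or darkens (gamma > 1) the image."""
    return [[round(255 * (p / 255) ** gamma) for p in row] for row in image]

def histogram_equalization(image):
    """Spread the grey-level histogram to enhance local contrast."""
    flat = [p for row in image for p in row]
    n = len(flat)
    # Histogram and cumulative distribution of grey levels.
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    cdf, total = [0] * 256, 0
    for level in range(256):
        total += hist[level]
        cdf[level] = total
    cdf_min = next(c for c in cdf if c > 0)
    # Classic equalization mapping from old grey level to new grey level.
    lut = [round((cdf[l] - cdf_min) / max(n - cdf_min, 1) * 255) for l in range(256)]
    return [[lut[p] for p in row] for row in image]

# Tiny example image: low-contrast values get spread over the full range.
image = [[50, 60, 70], [60, 80, 90], [70, 90, 200]]
equalized = histogram_equalization(gamma_transform(image, 0.8))
```

After equalization the darkest pixel maps to 0 and the brightest to 255, stretching the dynamic range of the tissue region.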
In step 2, the erosion operation is defined as follows: when the origin of a flat structuring element b is located at (x, y), the erosion of the image f by b at (x, y) is the minimum value of f over the region covered by b, i.e.:

[f ⊖ b](x, y) = min_{(s, t) ∈ b} f(x + s, y + t)
That is, to compute the erosion of f by b, the origin of the structuring element is placed at the position of each pixel of the image, and the erosion at any position is the minimum of all the values of f contained in the region overlapped by b. The target image obtained after the erosion operation is the image whose content is to be classified.
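The erosion just described can be illustrated with a minimal sketch. This is hypothetical pure-Python code with a flat 3×3 structuring element; the `erode` helper name is an illustrative assumption, and border pixels are left unchanged for brevity.

```python
def erode(image, radius=1):
    """Grey-scale erosion with a flat (2*radius+1)^2 square structuring element:
    each output pixel is the minimum of f over the region covered by b."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # borders copied unchanged
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            out[y][x] = min(
                image[y + s][x + t]
                for s in range(-radius, radius + 1)
                for t in range(-radius, radius + 1)
            )
    return out

# A bright 3x3 blob shrinks toward its centre after one erosion pass,
# which is how the tumor boundaries are shrunk in step 2.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 0, 0, 0],
]
eroded = erode(image)
```

Only the centre pixel of the blob survives, because it is the only position whose entire 3×3 neighbourhood lies inside the bright region.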
In step 3, the breast masses are classified by a semi-coupled generative adversarial network, which achieves higher accuracy than a conventional convolutional neural network, reduces the computational cost, and improves the robustness of adversarial training.
In step 4, the dilation operation is defined as follows: when the origin of b is located at (x, y), the dilation of the image f by the flat structuring element b at (x, y) is the maximum value of f over the region covered by b, i.e.:

[f ⊕ b](x, y) = max_{(s, t) ∈ b} f(x − s, y − t)

The target image obtained at this point is an image whose content is to be segmented and which carries a classification label.
In step 5, for irregular multi-tumor images, an atrous spatial pyramid pooling (ASPP) module is added together with an attention gate (AG) module on the basis of the R2U-Net network, and the original recurrent convolution layers are replaced by a cross feature accumulation (CFA) module, so that the information of irregular multiple tumors can be extracted effectively. For smooth multi-tumor images, a multi-tumor perception module (MSM) is added together with an AG module on the basis of the R2U-Net network; each convolution layer in the MSM module uses three kernels of different sizes to construct the feature map and extract different image features of the receptive field, so that the feature map carries multi-scale context information and preserves fine tumor position information. For irregular single-tumor images, an AG module is added on the basis of the R2U-Net network, and, in order to strengthen the contour and provide structural information, the superpixel image is concatenated with the original image as the input of the network, so that the edge contour information of the irregular tumor is better extracted. For smooth single-tumor images, an AG module is added on the basis of the R2U-Net network; the network trained with the AG module suppresses irrelevant regions and highlights useful features, and the model automatically learns to focus on target structures of different shapes and sizes.
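The classify-then-segment configuration of steps s13 to s15 amounts to routing each labelled image to the segmentation model designed for its class. The following is a hypothetical sketch of that dispatch; all function and label names are illustrative placeholders, and the stub bodies stand in for the four R2U-Net variants.

```python
# Placeholder segmenters standing in for the four class-specific networks.
def segment_irregular_multi(image):   # R2U-Net + AG + ASPP + CFA
    return ("irregular-multi-model", image)

def segment_smooth_multi(image):      # R2U-Net + AG + MSM
    return ("smooth-multi-model", image)

def segment_irregular_single(image):  # R2U-Net + AG + superpixel input
    return ("irregular-single-model", image)

def segment_smooth_single(image):     # R2U-Net + AG
    return ("smooth-single-model", image)

# The adaptive configuration: the classifier's label selects the network.
SEGMENTERS = {
    "irregular_multi": segment_irregular_multi,
    "smooth_multi": segment_smooth_multi,
    "irregular_single": segment_irregular_single,
    "smooth_single": segment_smooth_single,
}

def adaptive_segment(image, label):
    """Route the labelled image to the segmentation network for its class."""
    return SEGMENTERS[label](image)

model_name, _ = adaptive_segment([[0]], "smooth_single")
```

The point of the dispatch is that each image is only ever processed by the one network whose inductive biases match its class, rather than by a single catch-all model.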
The purpose of breast image segmentation is to obtain a segmentation result for each pixel, determining whether the pixel belongs to a tumor or to the background. In step 6, comparing the ground truth (GT) with the segmentation result (SR) yields four cases: true positive (TP), the number of pixels correctly assigned to the positive (lesion) class; false positive (FP), the number of background pixels erroneously assigned to the positive class; true negative (TN), the number of pixels correctly assigned to the negative class; and false negative (FN), the number of pixels erroneously assigned to the negative class. The most commonly used evaluation criteria are used to evaluate the performance of the experiment: sensitivity (SE), specificity (SP), accuracy (Acc), precision (PPV), F-measure (F1), Dice coefficient (DC) and Jaccard similarity coefficient (JC). Their specific definitions are as follows:

SE = TP / (TP + FN)
SP = TN / (TN + FP)
Acc = (TP + TN) / (TP + TN + FP + FN)
PPV = TP / (TP + FP)
F1 = 2 · PPV · SE / (PPV + SE)
DC = 2 · TP / (2 · TP + FP + FN)
JC = TP / (TP + FP + FN)
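These are the standard definitions of the listed indexes in terms of the four pixel counts, and they can be computed as in the following sketch (the function name is an illustrative assumption):

```python
def segmentation_indexes(tp, fp, tn, fn):
    """Standard per-pixel segmentation metrics from the TP/FP/TN/FN counts."""
    se = tp / (tp + fn)                     # sensitivity (recall)
    sp = tn / (tn + fp)                     # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    ppv = tp / (tp + fp)                    # precision
    f1 = 2 * ppv * se / (ppv + se)          # F-measure
    dc = 2 * tp / (2 * tp + fp + fn)        # Dice coefficient
    jc = tp / (tp + fp + fn)                # Jaccard similarity coefficient
    return {"SE": se, "SP": sp, "Acc": acc, "PPV": ppv, "F1": f1, "DC": dc, "JC": jc}

# Illustrative counts for a 200-pixel image (not measured results).
scores = segmentation_indexes(tp=80, fp=10, tn=100, fn=10)
```

Note that the Dice coefficient and the Jaccard coefficient are monotonically related (DC = 2·JC / (1 + JC)), so they rank segmentations identically.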
The evaluation of these segmentation indexes shows that the full-view adaptive segmentation network based on mass differentiation classification achieves a good segmentation effect.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.