CN112241954B - Full-view self-adaptive segmentation network configuration method based on lump differentiation classification - Google Patents

Full-view self-adaptive segmentation network configuration method based on lump differentiation classification

Info

Publication number
CN112241954B
Authority
CN
China
Prior art keywords
tumor
full
lump
image
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011140808.1A
Other languages
Chinese (zh)
Other versions
CN112241954A (en)
Inventor
陈颖昭
焦佳佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University
Priority to CN202011140808.1A
Publication of CN112241954A
Application granted
Publication of CN112241954B
Legal status: Active
Anticipated expiration

Abstract

The invention provides a full-view self-adaptive segmentation network configuration method based on lump differentiation classification. The method comprises: preprocessing the full-view image to enhance its local contrast and reduce noise; performing a morphological erosion operation on the preprocessed image to shrink the boundary of each mass and obtain a target image; feeding the target image into a generative adversarial network to classify it as irregular multi-mass, smooth multi-mass, irregular single-mass, or smooth single-mass; performing a morphological dilation operation on the classified image to obtain an image carrying a classification label; designing four segmentation network models and segmenting the masses; and measuring the segmentation effect of the full-view self-adaptive segmentation network based on lump differentiation classification according to segmentation indexes. Compared with manual extraction of the target mass region of interest, the method provided by the invention is more intelligent and efficient.

Description

Full-view self-adaptive segmentation network configuration method based on lump differentiation classification
Technical Field
The invention relates to the technical field of image segmentation, in particular to a full-view self-adaptive segmentation network configuration method based on lump differentiation classification.
Background
In recent years, with the continuous development of computer vision, image segmentation techniques have been applied across many industries, and breast mass segmentation in particular has attracted the attention of many researchers. Breast molybdenum-target (mammography) screening is currently the most common and effective method of early breast cancer screening. Radiologists are often influenced by subjective factors and diagnostic experience when analyzing breast molybdenum-target images, so inter-observer and intra-observer differences exist. Detecting abnormalities such as masses and calcifications in molybdenum-target images with computer-aided detection or diagnosis techniques therefore plays an important role, and designing an effective breast mass segmentation assistance system is of great importance.
Over the past few decades a great deal of research has been devoted to mass segmentation in breast molybdenum-target images, and deep learning has brought considerable progress to breast mass segmentation. However, most current breast mass segmentation is performed only after the target mass region of interest has been extracted manually or by a separate detection technique, and manually extracting the region containing a mass is tedious and difficult work for radiologists. An automatic breast mass segmentation technique that covers the full field of view therefore has high application value, and little research has so far addressed the simultaneous identification and segmentation of multiple breast masses.
Disclosure of Invention
The invention aims to provide a full-view self-adaptive segmentation network configuration method based on lump differentiation classification, so as to solve the problem that extracting the target mass region of interest manually or by means of a detection technique is tedious and inefficient.
In order to solve the above technical problem, the technical scheme of the invention is as follows. The full-view self-adaptive segmentation network configuration method based on lump differentiation classification comprises the following steps. Step 1: preprocessing the full-view image, enhancing its local contrast and reducing noise. Step 2: performing a morphological erosion operation on the preprocessed image, shrinking the boundary of each mass to obtain a target image. Step 3: feeding the target image into a generative adversarial network to classify it as irregular multi-mass, smooth multi-mass, irregular single-mass, or smooth single-mass. Step 4: performing a morphological dilation operation on the classified image, restoring the mass shrinkage caused by the erosion in step 2, to obtain an image carrying a classification label. Step 5: designing four segmentation network models according to the characteristics of the four classes of target images in step 3, and segmenting the masses. Step 6: measuring the segmentation effect of the full-view self-adaptive segmentation network based on lump differentiation classification according to segmentation indexes.
Further, in step 1, preprocessing the full-view image includes histogram equalization, bilateral filtering, and gamma transformation.
Further, in step 5, the four segmentation network models are designed and mass segmentation is performed as follows: for irregular multi-mass images, an attention gate module and an atrous spatial pyramid pooling module are added on the basis of the R2U-Net network, and the original recurrent convolution layers are replaced by cross feature accumulation modules; for smooth multi-mass images, an attention gate module and a mass-sensing module are added on the basis of the R2U-Net network, and each convolution layer in the mass-sensing module uses three kernels of different sizes to construct the feature map; for irregular single-mass images, an attention gate module is added on the basis of the R2U-Net network and the superpixel image is concatenated with the original image as the network input, so that the edge contour information of the irregular mass is extracted; for smooth single-mass images, an attention gate module is added on the basis of the R2U-Net network, the network trained with the attention gate module suppresses irrelevant regions and highlights useful features, and it automatically learns to focus on target structures of different shapes and sizes.
Further, the segmentation indexes include sensitivity, specificity, accuracy, precision, Dice coefficient, and Jaccard similarity coefficient.
The full-view self-adaptive segmentation network configuration method based on lump differentiation classification provided by the invention fully considers the segmentation of masses over the whole medical image; compared with manual extraction of the target mass region of interest, it is more intelligent and efficient. The erosion and dilation operations of morphological processing improve the accuracy of mass classification while essentially preserving the original form of each mass, which improves the mass segmentation effect. The adaptive classify-then-segment network configuration divides the masses into four classes with the classification model and segments them with different segmentation networks, which greatly improves segmentation indexes such as accuracy and recall.
Drawings
The invention is further described below with reference to the accompanying drawings:
fig. 1 is a flowchart illustrating the steps of the full-view self-adaptive segmentation network configuration method based on lump differentiation classification according to an embodiment of the present invention.
Detailed Description
The full-view self-adaptive segmentation network configuration method based on lump differentiation classification provided by the invention is described in further detail below with reference to the accompanying drawings and specific embodiments. The advantages and features of the invention will become clearer from the following description and from the claims. It should be noted that the drawings are in a greatly simplified form and use imprecise proportions, and are intended only to aid a convenient and clear description of the embodiments of the invention.
The core idea of the invention is that the full-view self-adaptive segmentation network configuration method based on lump differentiation classification fully considers the segmentation of masses over the whole medical image; compared with manual extraction of the target mass region of interest, it is more intelligent and efficient. The erosion and dilation operations of morphological processing improve the accuracy of mass classification while essentially preserving the original form of each mass, which improves the mass segmentation effect. The adaptive classify-then-segment network configuration divides the masses into four classes with the classification model and segments them with different segmentation networks, which greatly improves segmentation indexes such as accuracy and recall.
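To make the classify-then-segment configuration concrete, the sketch below shows how a predicted class label could route an image to one of the four segmentation networks. It is a minimal illustration under assumed names: the class labels, function names, and stub segmenters are not identifiers from the patent.

```python
from typing import Callable, Dict
import numpy as np

# Stub segmenters standing in for the four R2U-Net variants described below;
# in a real system each would be a trained model. All names are assumptions.
def _stub_segmenter(name: str) -> Callable[[np.ndarray], np.ndarray]:
    def run(image: np.ndarray) -> np.ndarray:
        print(f"segmenting with the '{name}' network")
        return np.zeros(image.shape[:2], dtype=np.uint8)  # placeholder mask
    return run

SEGMENTERS: Dict[str, Callable[[np.ndarray], np.ndarray]] = {
    "irregular_multi": _stub_segmenter("irregular multi-mass"),    # AG + ASPP + CFA
    "smooth_multi": _stub_segmenter("smooth multi-mass"),          # AG + mass-sensing module
    "irregular_single": _stub_segmenter("irregular single-mass"),  # AG + superpixel input
    "smooth_single": _stub_segmenter("smooth single-mass"),        # AG only
}

def adaptive_segment(image: np.ndarray, classify: Callable[[np.ndarray], str]) -> np.ndarray:
    """Classify the target image, then dispatch to the matching segmentation network."""
    label = classify(image)
    return SEGMENTERS[label](image)
```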
On the basis of the above technical scheme, the invention provides a full-view self-adaptive segmentation network configuration method based on lump differentiation classification. Fig. 1 is a flowchart of the steps of the method according to an embodiment of the invention. Referring to fig. 1, the method comprises the following steps:
S11: preprocessing the full-view image, enhancing its local contrast and reducing noise;
S12: performing a morphological erosion operation on the preprocessed image, shrinking the boundary of each mass to obtain a target image;
S13: feeding the target image into a generative adversarial network to classify it as irregular multi-mass, smooth multi-mass, irregular single-mass, or smooth single-mass;
S14: performing a morphological dilation operation on the classified image, restoring the mass shrinkage caused by the erosion performed in step S12, to obtain an image carrying a classification label;
S15: designing four segmentation network models according to the characteristics of the four classes of target images in S13, and segmenting the masses;
S16: measuring the segmentation effect of the full-view self-adaptive segmentation network based on lump differentiation classification according to segmentation indexes.
In the embodiment of the invention, the image is first converted to a grayscale image; labels and other interference in the background are removed by a local thresholding method and small-area removal, and the breast and pectoral muscle regions are retained; then histogram equalization, bilateral filtering, and gamma transformation are applied to increase the local contrast of the image, smooth and denoise it, and enhance it.
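A minimal sketch of this preprocessing chain using OpenCV is given below; it reads "binary filtering" as bilateral filtering, and all parameter values (threshold block size, minimum component area, filter parameters, gamma) are illustrative assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def preprocess_full_view(path, min_area=5000, gamma=0.8):
    """Sketch of the preprocessing described above; parameter values are illustrative."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Local (adaptive) thresholding to separate tissue from background labels and markers.
    mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 51, -5)

    # Small-area removal: keep only large connected components (breast and pectoral muscle).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == i] = 255
    img = cv2.bitwise_and(img, img, mask=keep)

    # Contrast enhancement, edge-preserving smoothing, and gamma transformation.
    img = cv2.equalizeHist(img)
    img = cv2.bilateralFilter(img, 9, 75, 75)
    lut = np.array([(i / 255.0) ** gamma * 255 for i in range(256)], dtype=np.uint8)
    return cv2.LUT(img, lut)
```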
In step 2, the erosion operation is defined as follows: when the origin of the flat structuring element b is located at (x, y), the erosion of the image f by b at (x, y) is the minimum value of f over the region overlapped by b.
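The equation itself is not reproduced in this text; a standard form of grayscale erosion by a flat structuring element, consistent with the description above, is

$$[f \ominus b](x, y) = \min_{(s,\,t) \in b} f(x + s,\; y + t).$$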
That is, to compute the erosion of f by b, the origin of the structuring element is placed at the position of each pixel of the image, and the erosion at each position is obtained by taking the minimum of all values of f contained in the region where b overlaps the image. The target image obtained after the erosion operation is the image whose content is to be classified.
In step 3, the breast masses are classified by a semi-coupled generative adversarial network; compared with a conventional convolutional neural network, this achieves higher accuracy, reduces the computational cost, and improves robustness through adversarial training.
In step 4, the dilation operation is defined as follows: when the origin of b is located at position (x, y), the dilation of the image f by the flat structuring element b at (x, y) is the maximum value of f over the region overlapped by b.
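The equation is likewise not reproduced in this text; a standard form of grayscale dilation by a flat structuring element, consistent with the description above, is

$$[f \oplus b](x, y) = \max_{(s,\,t) \in \hat{b}} f(x - s,\; y - t),$$

where $\hat{b}$ denotes b reflected about its origin; for a symmetric structuring element this is simply the maximum of f over the region covered by b.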
the target image obtained at this time is an image whose image content is to be divided while having a classification tag.
In step 5, for irregular multi-mass images, an attention gate (AG) module and an atrous spatial pyramid pooling (ASPP) module are added on the basis of the R2U-Net network, and the original recurrent convolution layers are replaced by cross feature accumulation (CFA) modules, so that the information of irregular multiple masses can be extracted effectively. For smooth multi-mass images, an attention gate (AG) module and a mass-sensing module (MSM) are added on the basis of the R2U-Net network; each convolution layer in the MSM uses three kernels of different sizes to construct the feature map and extracts different image features over the receptive field, so that the feature map carries multi-scale context information while preserving fine mass position information. For irregular single-mass images, an attention gate (AG) module is added on the basis of the R2U-Net network and, to strengthen the contour and provide structural information, the superpixel image is concatenated with the original image as the network input, so that the edge contour information of the irregular mass is better extracted. For smooth single-mass images, an attention gate (AG) module is added on the basis of the R2U-Net network; a network trained with AG suppresses irrelevant regions and highlights useful features, and the model automatically learns to focus on target structures of different shapes and sizes.
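For reference, a minimal additive attention gate of the kind used in attention U-Net style architectures, which the AG module described above is assumed to resemble, is sketched below; the channel counts and the simplifying assumption that the skip feature x and the gating feature g share the same spatial size are illustrative choices, not details taken from the patent.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: the gating signal g (a coarser decoder feature) is used
    to weight the skip-connection feature x before it is merged in the decoder."""
    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, g):
        # x: skip feature (N, in_channels, H, W); g: gating feature at the same H, W.
        att = self.relu(self.theta_x(x) + self.phi_g(g))
        att = self.sigmoid(self.psi(att))   # (N, 1, H, W) attention coefficients in [0, 1]
        return x * att                      # suppress irrelevant regions, highlight useful ones

# Usage example: gate a 64-channel skip feature with a 128-channel gating signal.
# x, g = torch.randn(1, 64, 56, 56), torch.randn(1, 128, 56, 56)
# gated = AttentionGate(64, 128, 32)(x, g)
```

The sigmoid output acts as a per-pixel weight, which is how such a gated network suppresses irrelevant regions and highlights useful features.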
The purpose of breast image segmentation is to obtain, for each pixel, a decision on whether that pixel belongs to a mass or to the background. In step 6, comparing the ground truth (GT) with the segmentation result (SR) gives four cases: true positive (TP), the number of pixels correctly assigned to the positive (mass) class; false positive (FP), the number of background pixels incorrectly assigned to the positive class; true negative (TN), the number of pixels correctly assigned to the negative class; and false negative (FN), the number of pixels incorrectly assigned to the negative class. The performance of the experiment is evaluated with the most commonly used criteria: sensitivity (SE), specificity (SP), accuracy (Acc), precision (positive predictive value, PPV), F-measure (F1), Dice coefficient (DC), and Jaccard similarity coefficient (JC).
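The formulas themselves are not reproduced in this text; the standard definitions of these indexes in terms of TP, FP, TN, and FN are

$$SE = \frac{TP}{TP + FN}, \qquad SP = \frac{TN}{TN + FP}, \qquad Acc = \frac{TP + TN}{TP + TN + FP + FN}, \qquad PPV = \frac{TP}{TP + FP},$$

$$F1 = \frac{2 \cdot PPV \cdot SE}{PPV + SE}, \qquad DC = \frac{2\,TP}{2\,TP + FP + FN}, \qquad JC = \frac{TP}{TP + FP + FN}.$$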
through the evaluation of the segmentation indexes, the segmentation effect of the full-field self-adaptive segmentation network based on the lump differentiation classification is good.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (3)

Step 5: according to the characteristics of the four classes of target images in step 3, four segmentation network models are respectively designed and the masses are segmented: for irregular multi-mass images, an attention gate module and an atrous spatial pyramid pooling module are added on the basis of the R2U-Net network, and the original recurrent convolution layers are replaced by cross feature accumulation modules; for smooth multi-mass images, an attention gate module and a mass-sensing module are added on the basis of the R2U-Net network, and each convolution layer in the mass-sensing module uses three kernels of different sizes to construct the feature map; for irregular single-mass images, an attention gate module is added on the basis of the R2U-Net network and the superpixel image is concatenated with the original image as the network input, so that the edge contour information of the irregular mass is extracted; for smooth single-mass images, an attention gate module is added on the basis of the R2U-Net network, the network trained with the attention gate module suppresses irrelevant regions and highlights useful features, and it automatically learns to focus on target structures of different shapes and sizes;
CN202011140808.1A | 2020-10-22 | 2020-10-22 | Full-view self-adaptive segmentation network configuration method based on lump differentiation classification | Active | CN112241954B (en)

Priority Applications (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN202011140808.1A | CN112241954B (en) | 2020-10-22 | 2020-10-22 | Full-view self-adaptive segmentation network configuration method based on lump differentiation classification

Applications Claiming Priority (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN202011140808.1A | CN112241954B (en) | 2020-10-22 | 2020-10-22 | Full-view self-adaptive segmentation network configuration method based on lump differentiation classification

Publications (2)

Publication Number | Publication Date
CN112241954A (en) | 2021-01-19
CN112241954B (en) | 2024-03-15

Family

ID=74169900

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN202011140808.1A | Active | CN112241954B (en) | 2020-10-22 | 2020-10-22 | Full-view self-adaptive segmentation network configuration method based on lump differentiation classification

Country Status (1)

Country | Link
CN (1) | CN112241954B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114092422B (en)* | 2021-11-11 | 2024-06-07 | 长沙理工大学 | Image multi-target extraction method and system based on deep circulation attention
CN117557579A (en)* | 2023-11-23 | 2024-02-13 | 电子科技大学长三角研究院(湖州) | Method and system for assisting non-supervision super-pixel segmentation by using cavity pyramid collaborative attention mechanism


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107886514A (en)* | 2017-11-22 | 2018-04-06 | 浙江中医药大学 | Breast molybdenum target image lump semantic segmentation method based on depth residual error network
WO2020019671A1 (en)* | 2018-07-23 | 2020-01-30 | 哈尔滨工业大学(深圳) | Breast lump detection and classification system and computer-readable storage medium
CN110490850A (en)* | 2019-02-14 | 2019-11-22 | 腾讯科技(深圳)有限公司 | A kind of lump method for detecting area, device and Medical Image Processing equipment
CN110414539A (en)* | 2019-08-05 | 2019-11-05 | 腾讯科技(深圳)有限公司 | A kind of method and relevant apparatus for extracting characterization information
CN111192245A (en)* | 2019-12-26 | 2020-05-22 | 河南工业大学 | A brain tumor segmentation network and segmentation method based on U-Net network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on breast mass segmentation and classification based on an adaptive energy offset field active contour model without edges; 王孝义; 邢素霞; 王瑜; 曹宇; 申楠; 潘子妍; Chinese Journal of Medical Physics (No. 08); full text *

Also Published As

Publication number | Publication date
CN112241954A (en) | 2021-01-19

Similar Documents

Publication | Title
CN110334706B (en) | Image target identification method and device
CN108765465B (en) | An Unsupervised SAR Image Change Detection Method
CN115601602A (en) | Cancer tissue pathology image classification method, system, medium, equipment and terminal
CN109993201A (en) | A kind of image processing method, device and readable storage medium storing program for executing
CN108537751B (en) | Thyroid ultrasound image automatic segmentation method based on radial basis function neural network
CN113870194B (en) | Breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics
CN101901346A (en) | A Method for Recognizing Bad Content of Color Digital Image
CN115311507B (en) | Building board classification method based on data processing
CN112241954B (en) | Full-view self-adaptive segmentation network configuration method based on lump differentiation classification
CN115170518A (en) | Cell detection method and system based on deep learning and machine vision
CN112150487B (en) | Rice grain segmentation method, terminal and storage medium
CN117333796A (en) | Ship target automatic identification method and system based on vision and electronic equipment
CN110032973A (en) | A kind of unsupervised helminth classification method and system based on artificial intelligence
CN113139936A (en) | Image segmentation processing method and device
CN109272522A (en) | Image refinement segmentation method based on local features
CN119048521B (en) | Method, device and computer equipment for counting milk somatic cells
CN115272647A (en) | Lung image recognition processing method and system
CN113763407B (en) | Nodule edge analysis method of ultrasonic image
CN118691908B (en) | A method for identifying crypt abscesses
CN117788874A (en) | Contour recognition model training method and porosity calculation method
CN101404062A (en) | Automatic screening method for digital galactophore image based on decision tree
CN119323543A (en) | Textile flaw detection method based on machine vision
CN116385435B (en) | Pharmaceutical capsule counting method based on image segmentation
CN111414956B (en) | A multi-instance learning and recognition method for blurred patterns in lung CT images
CN109271939B (en) | Thermal infrared human body target recognition method based on monotonic wave direction energy histogram

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
