CN113487621A - Medical image grading method and device, electronic equipment and readable storage medium - Google Patents

Medical image grading method and device, electronic equipment and readable storage medium

Info

Publication number
CN113487621A
CN113487621A
Authority
CN
China
Prior art keywords
medical image
grading
segmentation
result
focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110570809.8A
Other languages
Chinese (zh)
Other versions
CN113487621B (en)
Inventor
郭振
柳杨
李君
吕彬
高艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANDONG EYE INSTITUTE
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202110570809.8A (CN113487621B)
Priority to PCT/CN2021/109482 (WO2022247007A1)
Publication of CN113487621A
Application granted
Publication of CN113487621B
Legal status: Active
Anticipated expiration

Abstract

The invention relates to the field of intelligent decision making, and discloses a medical image grading method, which comprises the following steps: performing feature extraction on the medical image to be graded by using a pre-constructed feature extraction network to obtain a feature map; performing classification identification and result statistics on the feature map to obtain a classification result; performing region segmentation and area calculation on the feature map by using a pre-constructed lesion segmentation network to obtain a segmentation result; performing feature matching on the classification result and the segmentation result to obtain feature information; grading the medical image to be graded by using a pre-constructed first grading model to obtain a first grading result; and performing grading correction on the feature information and the first grading result by using a pre-constructed second grading model to obtain a target grading result. The invention also relates to blockchain technology, and the medical image to be graded can be stored in a blockchain node. The invention further provides a medical image grading device, an electronic device and a storage medium. The invention can improve the accuracy of medical image grading.

Description

Medical image grading method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the field of intelligent decision making, in particular to a medical image grading method, a medical image grading device, electronic equipment and a readable storage medium.
Background
With the development of artificial intelligence, image recognition has been applied in various fields as an important component of artificial intelligence. In the medical field, for example, image recognition is used to recognize medical images in order to judge the severity level of a disease, such as grading fundus color Doppler ultrasound images to determine the degree of diabetic retinopathy.
However, the conventional image grading method relies on a single image recognition model to grade the medical image and uses few feature dimensions, so the image grading accuracy is poor.
Disclosure of Invention
The invention provides a medical image grading method, a medical image grading device, electronic equipment and a computer readable storage medium, and mainly aims to improve the accuracy of medical image grading.
In order to achieve the above object, the present invention provides a medical image grading method, including:
acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-constructed lesion detection model to obtain a feature map;
performing classification identification and result statistics on the feature map to obtain a classification result;
performing region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
performing feature matching on the classification result and the segmentation result to obtain feature information;
grading the medical image to be graded by using a pre-constructed first grading model to obtain a first grading result;
and performing grading correction on the feature information and the first grading result by using a pre-constructed second grading model to obtain a target grading result.
Optionally, the performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-constructed lesion detection model to obtain a feature map includes:
performing a convolution pooling operation on the medical image to be graded to obtain an initial feature map;
and marking the region of interest in the initial feature map to obtain the feature map.
Optionally, before the feature extraction is performed on the medical image to be graded by using the feature extraction network in the pre-constructed lesion detection model, the method further includes:
acquiring a historical medical image set, and performing label marking on the historical medical image set to obtain a first training image set;
and performing iterative training on a pre-constructed first deep learning network model by using the first training image set to obtain the lesion detection model.
Optionally, the label marking of the historical medical image set includes:
segmenting the lesion region of each lesion in each historical medical image in the historical medical image set to obtain target regions;
and marking each target region in each historical medical image with a preset lesion category label.
Optionally, before the medical image to be graded is graded by using the pre-constructed first grading model to obtain the first grading result, the method further includes:
marking the historical medical image set with preset grading labels to obtain a second training image set;
and performing iterative training on a pre-constructed second deep learning network model by using the second training image set to obtain the first grading model.
Optionally, the performing region segmentation and area calculation on the feature map by using the lesion segmentation network in the lesion detection model to obtain a segmentation result includes:
performing region segmentation on the feature map to obtain a plurality of segmented regions;
calculating the area ratio of each segmented region to the medical image to be graded to obtain the relative area corresponding to the segmented region;
and summarizing all the segmented regions and the relative area corresponding to each segmented region to obtain the segmentation result.
Optionally, the performing feature matching on the classification result and the segmentation result to obtain feature information includes:
matching and associating the classification result with the segmentation result to obtain the lesion category corresponding to each relative area in the segmentation result;
summing all the relative areas corresponding to the same lesion category in the segmentation result to obtain the total area of the corresponding segmented regions;
combining the total area of the segmented regions with the corresponding lesion category to obtain a matching array;
and combining all the matching arrays to obtain the feature information.
In order to solve the above problems, the present invention also provides a medical image grading apparatus, the apparatus including:
a feature matching module, configured to acquire a medical image to be graded, and perform feature extraction on the medical image to be graded by using a feature extraction network in a pre-constructed lesion detection model to obtain a feature map; perform classification identification and result statistics on the feature map to obtain a classification result; perform region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result; and perform feature matching on the classification result and the segmentation result to obtain feature information;
an image grading module, configured to grade the medical image to be graded by using a pre-constructed first grading model to obtain a first grading result;
and a grading correction module, configured to perform grading correction on the feature information and the first grading result by using a pre-constructed second grading model to obtain a target grading result.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
a processor executing the computer program stored in the memory to implement the medical image grading method described above.
In order to solve the above problem, the present invention also provides a computer-readable storage medium having at least one computer program stored therein, the at least one computer program being executed by a processor in an electronic device to implement the medical image grading method described above.
The method acquires a medical image to be graded and performs feature extraction on the medical image to be graded by using a feature extraction network in a pre-constructed lesion detection model to obtain a feature map; performs classification identification and result statistics on the feature map to obtain a classification result; performs region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result; and performs feature matching on the classification result and the segmentation result to obtain feature information, so that features are extracted in multiple dimensions and the extracted feature information is more accurate and detailed. The method then grades the medical image to be graded by using a pre-constructed first grading model to obtain a first grading result, and performs grading correction on the feature information and the first grading result by using a pre-constructed second grading model to obtain a target grading result; because the first grading result is corrected again, the grading accuracy is improved. Therefore, the medical image grading method, the medical image grading device, the electronic device and the computer-readable storage medium provided by the embodiments of the invention improve the accuracy of medical image grading.
Drawings
Fig. 1 is a schematic flow chart of a medical image grading method according to an embodiment of the present invention;
Fig. 2 is a block diagram of a medical image grading apparatus according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the internal structure of an electronic device implementing the medical image grading method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a medical image grading method. The execution subject of the medical image grading method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiment of the present application. In other words, the medical image grading method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to Fig. 1, a schematic flow chart of a medical image grading method provided in an embodiment of the present invention, in the embodiment of the present invention, the medical image grading method includes:
s1, obtaining a medical image to be classified, and performing feature extraction on the medical image to be classified by using a feature extraction network in a pre-constructed focus detection model to obtain a feature map;
optionally, in an embodiment of the present invention, the medical image to be graded is a fundus color Doppler ultrasound image, and the lesion detection model includes: a feature extraction network, a focus classification network and a focus segmentation network. The feature extraction network is used for feature extraction, the focus classification network is used for focus classification, and the focus segmentation network is used for focus region segmentation.
In detail, in the embodiment of the present invention, an initial feature extraction network in the feature extraction network is used to perform a convolution pooling operation on the medical image to be classified to obtain an initial feature map; and marking the interested region in the initial characteristic diagram by using the region extraction network in the characteristic extraction network to obtain the characteristic diagram.
Optionally, in this embodiment of the present invention, the initial feature extraction network is a convolutional neural network, and the region extraction network is an RPN (Region Proposal Network).
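Since the embodiment later identifies the first deep learning network model as a Mask-RCNN model, the backbone-plus-RPN structure described here can be illustrated directly with torchvision's Mask R-CNN. The following is a minimal sketch, not the patent's actual implementation; the choice of 9 lesion categories plus background (num_classes=10) and the 512 x 512 input size are illustrative assumptions.

import torch
import torchvision

# Mask R-CNN bundles the three networks named in this embodiment: a CNN
# backbone (convolution pooling feature extraction), an RPN (region of
# interest marking), a box head (lesion classification) and a mask head
# (lesion segmentation). num_classes=10 is an assumption: 9 lesion
# categories plus background.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    weights=None, num_classes=10)
model.eval()

# A placeholder fundus image tensor (3 channels, 512 x 512).
image = torch.rand(3, 512, 512)
with torch.no_grad():
    prediction = model([image])[0]

# prediction holds the per-region outputs consumed by the later steps:
# "boxes" (bounding boxes), "labels" (lesion categories), "masks"
# (segmentation masks) and "scores" (confidences).
print(prediction["boxes"].shape, prediction["masks"].shape)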
Further, before performing feature extraction on the medical image to be graded by using the feature extraction network in the lesion detection model, the embodiment of the present invention further includes: acquiring a historical medical image set, and performing preset label marking on the historical medical image set to obtain a first training image set; and performing iterative training on a pre-constructed first deep learning network model by using the first training image set to obtain the lesion detection model. The historical medical image set includes a plurality of historical medical images, and the historical medical images are medical images of the same type as the image to be graded but with different contents.
In detail, the embodiment of the present invention performs preset label marking on the historical medical image set to obtain the first training image set, including: performing lesion region marking on the lesions in each historical medical image in the historical medical image set to obtain target regions, and performing lesion category marking on each target region in each historical medical image to obtain the first training image set. Optionally, the preset lesion regions include a microaneurysm region, a hemorrhage region, a hard exudate region, a cotton wool spot region, a laser spot region, a neovascularization region, a vitreous hemorrhage region, a preretinal hemorrhage region and a fibrous membrane region; the preset lesion categories correspond one-to-one to the preset lesion regions and include: microaneurysm lesion, hemorrhage lesion, hard exudate lesion, cotton wool spot lesion, laser spot lesion, neovascularization lesion, vitreous hemorrhage lesion, preretinal hemorrhage lesion and fibrous membrane lesion. For example, if a target region is a laser spot region, the region is marked as a laser spot lesion.
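As a concrete illustration of this labeling format, the sketch below shows one annotated historical image; the file name and polygon coordinates are hypothetical, and only the category names come from the preset lesion categories listed above.

# One entry of a hypothetical first training image set: each lesion is
# marked with a target region (here a polygon) and a preset lesion
# category label.
annotation = {
    "image": "fundus_0001.png",  # hypothetical file name
    "regions": [
        {"category": "laser spot lesion",
         "polygon": [(120, 88), (141, 90), (139, 112), (118, 110)]},
        {"category": "hemorrhage lesion",
         "polygon": [(300, 240), (330, 244), (326, 270), (298, 266)]},
    ],
}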
Further, the lesion detection model is trained from the first deep learning network model, so the first deep learning network model and the lesion detection model have the same network structure; the first deep learning network model therefore also includes: a feature extraction network, a lesion classification network and a lesion segmentation network.
In detail, in the embodiment of the present invention, the first training image set is used to perform iterative training on a first deep learning network model that is pre-constructed, so as to obtain the lesion detection model, where the first deep learning network model is a Mask-RCNN model, and includes:
step A: performing convolution pooling on each image in the first training image set by using a feature extraction network in the first deep learning network model, and performing region-of-interest marking on the image subjected to convolution pooling to obtain a historical feature map;
Optionally, the feature extraction network in the embodiment of the present invention includes an initial feature extraction network and a region extraction network; the initial feature extraction network is a convolutional neural network, and the region extraction network is an RPN (Region Proposal Network).
In detail, in the embodiment of the present invention, an initial feature extraction network is used for convolution pooling, and the region of interest is marked by using the region extraction network.
Step B: performing bounding box prediction and classification prediction on the regions of interest in the historical feature map by using the lesion classification network in the first deep learning network model to obtain bounding box predicted coordinates and classification predicted values;
Step C: obtaining the real coordinates of the bounding box according to the lesion region marked in the historical medical image corresponding to the historical feature map; and obtaining the real classification value according to the lesion category marked in the historical medical image corresponding to the historical feature map;
For example: if the marked lesion category is a laser spot lesion, the real classification value corresponding to the laser spot lesion is 1.
Step D: calculating a first loss value by using a preset first loss function according to the classification predicted value and the real classification value; and calculating a second loss value by using a preset second loss function according to the real coordinates of the bounding box and the predicted coordinates of the bounding box.
Optionally, in this embodiment of the present invention, the first loss function or the second loss function may be a cross-entropy loss function.
Optionally, the lesion classification network in the embodiment of the present invention includes a fully connected layer and a softmax network.
Step E: performing region segmentation prediction on the historical feature map by using the lesion segmentation network in the first deep learning network model to obtain a predicted total pixel count and a predicted edge pixel count for each region;
Optionally, in the embodiment of the present invention, the lesion segmentation network is a fully convolutional network.
Step F: obtaining the real total pixel count and the real edge pixel count of each corresponding region according to the lesion region marked in the historical medical image corresponding to the historical feature map;
Step G: calculating a third loss value by using a preset third loss function according to the predicted total pixel count and predicted edge pixel count of each region and the real total pixel count and real edge pixel count of the corresponding region; and summing the first loss value, the second loss value and the third loss value to obtain a target loss value;
Optionally, in this embodiment of the present invention, the third loss function is a cross-entropy loss function.
Step H: when the target loss value is greater than or equal to a preset loss threshold, updating the parameters of the first deep learning network model and returning to step A; when the target loss value is less than the preset loss threshold, stopping training to obtain the lesion detection model.
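Steps A through H condense into a standard detection training loop. The sketch below is a hedged illustration: torchvision's Mask R-CNN returns the classification, bounding box and mask losses described above, and their sum plays the role of the target loss value. The synthetic single-image batch, learning rate and LOSS_THRESHOLD are assumptions, not values from the patent.

import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    weights=None, num_classes=10)          # 9 lesion categories + background (assumed)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
LOSS_THRESHOLD = 0.05                      # hypothetical preset loss threshold

def synthetic_batch():
    # A stand-in for the first training image set: one image with a single
    # marked lesion region (bounding box, category label and pixel mask).
    image = torch.rand(3, 256, 256)
    target = {"boxes": torch.tensor([[30.0, 30.0, 90.0, 90.0]]),
              "labels": torch.tensor([5]),                  # e.g. laser spot lesion
              "masks": torch.zeros(1, 256, 256, dtype=torch.uint8)}
    target["masks"][0, 30:90, 30:90] = 1
    return [image], [target]

model.train()
for step in range(10):                     # "return to step A" loop (truncated)
    images, targets = synthetic_batch()
    loss_dict = model(images, targets)     # classification, box and mask losses
    target_loss = sum(loss_dict.values())  # summed target loss value
    optimizer.zero_grad()
    target_loss.backward()
    optimizer.step()
    if target_loss.item() < LOSS_THRESHOLD:   # step H stopping criterion
        break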
In another embodiment of the invention, the medical image to be graded is stored in a blockchain node by utilizing the high-throughput characteristic of the blockchain, so that the data access efficiency is improved.
S2, carrying out classification identification and result statistics on the feature map to obtain a classification result;
In detail, in the embodiment of the present invention, the feature map is subjected to bounding box marking and classification by using the lesion classification network in the lesion detection model, and the numbers of bounding boxes of the same category are summarized to obtain the classification result. For example: the feature map has four bounding boxes A, B, C and D in total; bounding box A is classified as a hemorrhage lesion, bounding box B as a laser spot lesion, bounding box C as a preretinal hemorrhage lesion and bounding box D as a hemorrhage lesion. Summarizing the numbers of bounding boxes of the same category gives the classification result: two hemorrhage lesions (bounding boxes A and D), one laser spot lesion (bounding box B) and one preretinal hemorrhage lesion (bounding box C).
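The result statistics amount to counting bounding boxes per lesion category; a minimal sketch reproducing the example above (the category strings are illustrative):

from collections import Counter

# Predicted lesion category for each bounding box in the feature map.
box_categories = {"A": "hemorrhage lesion", "B": "laser spot lesion",
                  "C": "preretinal hemorrhage lesion", "D": "hemorrhage lesion"}

classification_result = Counter(box_categories.values())
print(classification_result)
# Counter({'hemorrhage lesion': 2, 'laser spot lesion': 1,
#          'preretinal hemorrhage lesion': 1})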
S3, carrying out region segmentation and area calculation on the feature map to obtain a segmentation result;
In detail, in the embodiment of the present invention, the lesion segmentation network in the lesion detection model is used to perform region segmentation on the feature map to obtain a plurality of segmented regions. Optionally, the lesion segmentation network is a fully convolutional network. Further, since the sizes of the segmented regions differ greatly between medical images of different sizes, a uniform standard is required for comparison; the area ratio of each segmented region to the medical image to be graded is therefore calculated to obtain the corresponding relative area, which is not affected by changes in the area of the medical image to be graded. All the segmented regions and the relative area corresponding to each segmented region are summarized to obtain the segmentation result. For example: the feature map contains four segmented regions A, B, C and D; if segmented region A consists of 10 pixels and the medical image to be graded consists of 100 pixels, the relative area corresponding to segmented region A is 10%.
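The relative area computation reduces to a pixel-count ratio; the following sketch reproduces the 10-pixel/100-pixel example, with a hypothetical boolean mask standing in for a segmented region.

import numpy as np

mask = np.zeros((10, 10), dtype=bool)   # 100-pixel medical image (illustrative)
mask[0:2, 0:5] = True                   # a segmented region of 10 pixels

relative_area = mask.sum() / mask.size  # 10 / 100 = 0.10
print(f"relative area: {relative_area:.0%}")   # relative area: 10%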
S5, performing feature matching on the classification result and the segmentation result to obtain feature information;
In detail, in the embodiment of the present invention, the classification result and the segmentation result are matched and associated to obtain the lesion category corresponding to each relative area in the segmentation result.
Specifically, the classification result and the segmentation result are produced by different branches of the same model, and each bounding box in the classification result has the same position as a segmented region. For example, if the classification result contains a hemorrhage lesion in bounding box A, and segmented region A corresponds to bounding box A, then matching yields that the lesion category corresponding to segmented region A is the hemorrhage lesion.
Further, in the embodiment of the present invention, all the relative areas corresponding to the same lesion category in the segmentation result are summed to obtain the total area of the corresponding segmented regions, and the total area is combined with the corresponding lesion category to obtain a matching array. For example: in the segmentation result, the segmented regions corresponding to the preretinal hemorrhage lesion category are A and B; the relative area corresponding to segmented region A is 10% and that of segmented region B is 20%, so the total area corresponding to the preretinal hemorrhage lesion category is 10% + 20% = 30%, and the corresponding matching array is [preretinal hemorrhage lesion, 30%]. All the matching arrays are then combined to obtain the feature information, as sketched below.
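A minimal sketch of this matching step, summing relative areas per lesion category and emitting [lesion category, total area] matching arrays; the per-region values reuse the example above, and the extra laser spot entry is illustrative.

from collections import defaultdict

# (lesion category, relative area) pairs obtained by associating each
# segmented region with the category of its matching bounding box.
region_features = [("preretinal hemorrhage lesion", 0.10),
                   ("preretinal hemorrhage lesion", 0.20),
                   ("laser spot lesion", 0.05)]

totals = defaultdict(float)
for category, relative_area in region_features:
    totals[category] += relative_area

feature_information = [[category, round(total, 4)]
                       for category, total in totals.items()]
print(feature_information)
# [['preretinal hemorrhage lesion', 0.3], ['laser spot lesion', 0.05]]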
S4, grading the medical image to be graded by using the first grading model to obtain a first grading result;
In detail, in the embodiment of the present invention, before the medical image to be graded is graded by using the first grading model to obtain the first grading result, the method further includes: marking the historical medical image set with preset grading labels to obtain a second training image set; and performing iterative training on a pre-constructed second deep learning network model by using the second training image set to obtain the first grading model. Optionally, the grading labels include: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus.
Optionally, in this embodiment of the present invention, the second deep learning network model is a convolutional neural network model including a dense attention mechanism.
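The patent does not disclose the attention architecture, so the sketch below stands in with a squeeze-and-excitation style channel attention block inside a small convolutional classifier; the layer sizes are assumptions, and the five outputs match the grading labels above.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)   # per-channel weights
        return x * w

class FirstGradingModel(nn.Module):
    def __init__(self, num_grades: int = 5):   # five grading labels
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            ChannelAttention(64))               # attention stand-in
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_grades))

    def forward(self, x):
        return self.head(self.features(x))

logits = FirstGradingModel()(torch.rand(1, 3, 512, 512))
print(logits.shape)   # torch.Size([1, 5])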
And S6, performing grading correction on the feature information and the first grading result by using a second grading model to obtain a target grading result.
Optionally, in this embodiment of the present invention, the second grading model is a random forest model.
Further, in order to make the grading result more accurate, the embodiment of the present invention corrects the first grading result; therefore, the second grading model is used to grade the feature information together with the first grading result to obtain the target grading result.
In detail, before the feature information is graded by using the second grading model, the method further includes: constructing a random forest model by using the preset lesion category labels as root nodes and using preset relative area classification intervals and preset grading labels as classification conditions to obtain the second grading model. The grading labels include five types, namely mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus, and the lesion area classification intervals can be set according to actual diagnostic experience, such as [0, 20%], [20%, 40%], [40%, 60%], [60%, 80%] and [80%, 100%].
Further, in the embodiment of the present invention, the feature information and the first grading result are input into the second grading model to obtain the target grading result. For example: if the first grading result is moderate non-proliferative retinopathy and the feature information is [preretinal hemorrhage lesion, 10%], the first grading result and the feature information are input into the second grading model, and the obtained target grading result is mild non-proliferative retinopathy.
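The correction step can be pictured with scikit-learn's random forest, taking per-category relative areas plus the first grading result as input and returning the corrected grade. Everything below (the feature encoding, the tiny training set and the hyperparameters) is an illustrative assumption; the patent instead constructs its forest from the preset lesion category labels, relative area intervals and grading labels.

from sklearn.ensemble import RandomForestClassifier

GRADES = ["normal fundus", "mild NPDR", "moderate NPDR",
          "severe NPDR", "proliferative DR"]

# Each row: [hemorrhage area, laser spot area, preretinal hemorrhage area,
#            first grading result (index into GRADES)]
X_train = [[0.00, 0.00, 0.00, 0],
           [0.05, 0.00, 0.10, 2],
           [0.20, 0.10, 0.30, 3],
           [0.40, 0.00, 0.50, 4]]
y_train = [0, 1, 3, 4]   # corrected target grades (illustrative)

second_grading_model = RandomForestClassifier(n_estimators=100, random_state=0)
second_grading_model.fit(X_train, y_train)

# Feature information [preretinal hemorrhage lesion, 10%] with a first
# grading result of "moderate NPDR", as in the example above.
target = second_grading_model.predict([[0.0, 0.0, 0.10, 2]])[0]
print("target grading result:", GRADES[target])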
Fig. 2 is a functional block diagram of the medical image grading apparatus according to the present invention.
The medical image grading apparatus 100 according to the present invention may be installed in an electronic device. Depending on the implemented functions, the medical image grading apparatus may include a feature matching module 101, an image grading module 102 and a grading correction module 103. A module, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of the electronic device to perform fixed functions and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
The feature matching module 101 is configured to acquire a medical image to be graded, and perform feature extraction on the medical image to be graded by using a feature extraction network in a pre-constructed lesion detection model to obtain a feature map; perform classification identification and result statistics on the feature map to obtain a classification result; perform region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result; and perform feature matching on the classification result and the segmentation result to obtain feature information;
Optionally, in the embodiment of the present invention, the medical image to be graded is a fundus color Doppler ultrasound image, and the lesion detection model includes: a feature extraction network, a lesion classification network and a lesion segmentation network. The feature extraction network is used for feature extraction, the lesion classification network is used for lesion classification, and the lesion segmentation network is used for lesion region segmentation.
In detail, in the embodiment of the present invention, the feature matching module 101 performs a convolution pooling operation on the medical image to be graded by using an initial feature extraction network in the feature extraction network to obtain an initial feature map; and marks the region of interest in the initial feature map by using the region extraction network in the feature extraction network to obtain the feature map.
Optionally, in this embodiment of the present invention, the initial feature extraction network is a convolutional neural network, and the region extraction network is an RPN (Region Proposal Network).
Further, in the embodiment of the present invention, before performing feature extraction on the medical image to be graded by using the feature extraction network in the lesion detection model, the feature matching module 101 further: acquires a historical medical image set, and performs preset label marking on the historical medical image set to obtain a first training image set; and performs iterative training on a pre-constructed first deep learning network model by using the first training image set to obtain the lesion detection model. The historical medical image set includes a plurality of historical medical images, and the historical medical images are medical images of the same type as the image to be graded but with different contents.
In detail, the feature matching module 101 according to the embodiment of the present invention performs preset label marking on the historical medical image set to obtain the first training image set, including: performing lesion region marking on the lesions in each historical medical image in the historical medical image set to obtain target regions, and performing lesion category marking on each target region in each historical medical image to obtain the first training image set. Optionally, the preset lesion regions include a microaneurysm region, a hemorrhage region, a hard exudate region, a cotton wool spot region, a laser spot region, a neovascularization region, a vitreous hemorrhage region, a preretinal hemorrhage region and a fibrous membrane region; the preset lesion categories correspond one-to-one to the preset lesion regions and include: microaneurysm lesion, hemorrhage lesion, hard exudate lesion, cotton wool spot lesion, laser spot lesion, neovascularization lesion, vitreous hemorrhage lesion, preretinal hemorrhage lesion and fibrous membrane lesion. For example, if a target region is a laser spot region, the region is marked as a laser spot lesion.
Further, the lesion detection model is trained from the first deep learning network model, so the first deep learning network model and the lesion detection model have the same network structure; the first deep learning network model therefore also includes: a feature extraction network, a lesion classification network and a lesion segmentation network.
In detail, in the embodiment of the present invention, the feature matching module 101 performs iterative training on a pre-constructed first deep learning network model by using the first training image set to obtain the lesion detection model, where the first deep learning network model is a Mask-RCNN model, including:
step A: performing convolution pooling on each image in the first training image set by using a feature extraction network in the first deep learning network model, and performing region-of-interest marking on the image subjected to convolution pooling to obtain a historical feature map;
Optionally, the feature extraction network in the embodiment of the present invention includes an initial feature extraction network and a region extraction network; the initial feature extraction network is a convolutional neural network, and the region extraction network is an RPN (Region Proposal Network).
In detail, in the embodiment of the present invention, an initial feature extraction network is used for convolution pooling, and the region of interest is marked by using the region extraction network.
Step B: performing bounding box prediction and classification prediction on the regions of interest in the historical feature map by using the lesion classification network in the first deep learning network model to obtain bounding box predicted coordinates and classification predicted values;
Step C: obtaining the real coordinates of the bounding box according to the lesion region marked in the historical medical image corresponding to the historical feature map; and obtaining the real classification value according to the lesion category marked in the historical medical image corresponding to the historical feature map;
For example: if the marked lesion category is a laser spot lesion, the real classification value corresponding to the laser spot lesion is 1.
Step D: calculating a first loss value by using a preset first loss function according to the classification predicted value and the real classification value; and calculating a second loss value by using a preset second loss function according to the real coordinates of the bounding box and the predicted coordinates of the bounding box.
Optionally, in this embodiment of the present invention, the first loss function or the second loss function may be a cross-entropy loss function.
Optionally, the lesion classification network in the embodiment of the present invention includes a fully connected layer and a softmax network.
Step E: performing region segmentation prediction on the historical feature map by using the lesion segmentation network in the first deep learning network model to obtain a predicted total pixel count and a predicted edge pixel count for each region;
Optionally, in the embodiment of the present invention, the lesion segmentation network is a fully convolutional network.
Step F: obtaining the true value of the total number of pixels of the corresponding region and the true value of the number of pixels at the edge of the region according to the lesion region marked by the historical characteristic image corresponding to the historical characteristic image;
Step G: calculating a third loss value by using a preset third loss function according to the predicted total pixel count and predicted edge pixel count of each region and the real total pixel count and real edge pixel count of the corresponding region; and summing the first loss value, the second loss value and the third loss value to obtain a target loss value;
Optionally, in this embodiment of the present invention, the third loss function is a cross-entropy loss function.
Step H: when the target loss value is greater than or equal to a preset loss threshold, updating the parameters of the first deep learning network model and returning to step A; when the target loss value is less than the preset loss threshold, stopping training to obtain the lesion detection model.
In another embodiment of the invention, the medical image to be graded is stored in a blockchain node by utilizing the high-throughput characteristic of the blockchain, so that the data access efficiency is improved.
In detail, in the embodiment of the present invention, the feature matching module 101 performs bounding box marking and classification on the feature map by using the lesion classification network in the lesion detection model, and summarizes the numbers of bounding boxes of the same category to obtain the classification result. For example: the feature map has four bounding boxes A, B, C and D in total; bounding box A is classified as a hemorrhage lesion, bounding box B as a laser spot lesion, bounding box C as a preretinal hemorrhage lesion and bounding box D as a hemorrhage lesion. Summarizing the numbers of bounding boxes of the same category gives the classification result: two hemorrhage lesions (bounding boxes A and D), one laser spot lesion (bounding box B) and one preretinal hemorrhage lesion (bounding box C).
In detail, in the embodiment of the present invention, the feature matching module 101 performs region segmentation on the feature map by using the lesion segmentation network in the lesion detection model to obtain a plurality of segmented regions. Optionally, the lesion segmentation network is a fully convolutional network. Further, since the sizes of the segmented regions differ greatly between medical images of different sizes, a uniform standard is required for comparison; the area ratio of each segmented region to the medical image to be graded is therefore calculated to obtain the corresponding relative area, which is not affected by changes in the area of the medical image to be graded. All the segmented regions and the relative area corresponding to each segmented region are summarized to obtain the segmentation result. For example: the feature map contains four segmented regions A, B, C and D; if segmented region A consists of 10 pixels and the medical image to be graded consists of 100 pixels, the relative area corresponding to segmented region A is 10%.
In detail, the feature matching module 101 of the embodiment of the present invention matches and associates the classification result with the segmentation result to obtain the lesion category corresponding to each relative area in the segmentation result.
Specifically, the classification result and the segmentation result are produced by different branches of the same model, and each bounding box in the classification result has the same position as a segmented region. For example, if the classification result contains a hemorrhage lesion in bounding box A, and segmented region A corresponds to bounding box A, then matching yields that the lesion category corresponding to segmented region A is the hemorrhage lesion.
Further, the feature matching module 101 of the embodiment of the present invention sums all the relative areas corresponding to the same lesion category in the segmentation result to obtain the total area of the corresponding segmented regions, and combines the total area with the corresponding lesion category to obtain a matching array. For example: in the segmentation result, the segmented regions corresponding to the preretinal hemorrhage lesion category are A and B; the relative area corresponding to segmented region A is 10% and that of segmented region B is 20%, so the total area corresponding to the preretinal hemorrhage lesion category is 10% + 20% = 30%, and the corresponding matching array is [preretinal hemorrhage lesion, 30%]. All the matching arrays are then combined to obtain the feature information.
The image grading module 102 is configured to grade the medical image to be graded by using a pre-constructed first grading model to obtain a first grading result;
In detail, in the embodiment of the present invention, before the image grading module 102 grades the medical image to be graded by using the first grading model to obtain the first grading result, the method further includes: marking the historical medical image set with preset grading labels to obtain a second training image set; and performing iterative training on a pre-constructed second deep learning network model by using the second training image set to obtain the first grading model. Optionally, the grading labels include: mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus.
Optionally, in this embodiment of the present invention, the second deep learning network model is a convolutional neural network model including a dense attention mechanism.
The grading correction module 103 is configured to perform grading correction on the feature information and the first grading result by using a pre-constructed second grading model to obtain a target grading result.
Optionally, in this embodiment of the present invention, the second grading model is a random forest model.
Further, in order to make the grading result more accurate, the embodiment of the present invention corrects the first grading result; therefore, the grading correction module 103 grades the feature information together with the first grading result by using the second grading model to obtain the target grading result.
In detail, before the grading correction module 103 grades the feature information by using the second grading model, the method further includes: constructing a random forest model by using the preset lesion category labels as root nodes and using preset relative area classification intervals and preset grading labels as classification conditions to obtain the second grading model. The grading labels include five types, namely mild non-proliferative retinopathy, moderate non-proliferative retinopathy, severe non-proliferative retinopathy, proliferative retinopathy and normal fundus, and the lesion area classification intervals can be set according to actual diagnostic experience, such as [0, 20%], [20%, 40%], [40%, 60%], [60%, 80%] and [80%, 100%].
Further, in this embodiment of the present invention, the grading correction module 103 inputs the feature information and the first grading result into the second grading model to obtain the target grading result. For example: if the first grading result is moderate non-proliferative retinopathy and the feature information is [preretinal hemorrhage lesion, 10%], the first grading result and the feature information are input into the second grading model, and the obtained target grading result is mild non-proliferative retinopathy.
Fig. 3 is a schematic structural diagram of an electronic device implementing the medical image grading method according to the present invention.
The electronic device may include a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further include a computer program, such as a medical image grading program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may, in some embodiments, be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as the code of the medical image grading program, but also to temporarily store data that has been output or is to be output.
The processor 10 may, in some embodiments, be composed of an integrated circuit, for example a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors and combinations of various control chips. The processor 10 is the control unit of the electronic device; it connects the various components of the whole electronic device by using various interfaces and lines, and executes the various functions of the electronic device and processes data by running or executing programs or modules (e.g., the medical image grading program) stored in the memory 11 and calling data stored in the memory 11.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The communication bus 12 is arranged to enable connection and communication between the memory 11 and the at least one processor 10. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Fig. 3 shows an electronic device with only certain components, and those skilled in the art will appreciate that the structure shown in Fig. 3 does not constitute a limitation of the electronic device; it may include fewer or more components than those shown, some components may be combined, or a different arrangement of components may be used.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component. Preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions such as charge management, discharge management and power consumption management are realized through the power management device. The power supply may also include one or more DC or AC power sources, recharging devices, power failure detection circuits, power converters or inverters, power status indicators and the like. The electronic device may further include various sensors, a Bluetooth module, a Wi-Fi module and the like, which are not described herein again.
Optionally, the communication interface 13 may include a wired interface and/or a wireless interface (e.g., a Wi-Fi interface, a Bluetooth interface, etc.), which is generally used to establish a communication connection between the electronic device and other electronic devices.
Optionally, the communication interface 13 may further include a user interface, which may be a display or an input unit (such as a keyboard); optionally, the user interface may be a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the electronic device and for displaying a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The medical image grading program stored in the memory 11 of the electronic device is a combination of a plurality of computer programs which, when run on the processor 10, may implement:
acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-constructed lesion detection model to obtain a feature map;
performing classification identification and result statistics on the feature map to obtain a classification result;
performing region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
performing feature matching on the classification result and the segmentation result to obtain feature information;
grading the medical image to be graded by using a pre-constructed first grading model to obtain a first grading result;
and performing grading correction on the feature information and the first grading result by using a pre-constructed second grading model to obtain a target grading result.
Specifically, for the specific implementation method of the computer program by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to Fig. 1, which is not repeated herein.
Further, the integrated module/unit of the electronic device, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. The computer-readable medium may be non-volatile or volatile, and may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
Embodiments of the present invention may also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor of an electronic device, the computer program may implement:
acquiring a medical image to be graded, and performing feature extraction on the medical image to be graded by using a feature extraction network in a pre-constructed lesion detection model to obtain a feature map;
performing classification identification and result statistics on the feature map to obtain a classification result;
performing region segmentation and area calculation on the feature map by using a lesion segmentation network in the lesion detection model to obtain a segmentation result;
performing feature matching on the classification result and the segmentation result to obtain feature information;
grading the medical image to be graded by using a pre-constructed first grading model to obtain a first grading result;
and performing grading correction on the feature information and the first grading result by using a pre-constructed second grading model to obtain a target grading result.
Further, the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated by cryptographic methods, each data block containing the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

CN202110570809.8A | 2021-05-25 | Medical image grading method, device, electronic equipment and readable storage medium | Active | CN113487621B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202110570809.8A (CN113487621B) | 2021-05-25 | 2021-05-25 | Medical image grading method, device, electronic equipment and readable storage medium
PCT/CN2021/109482 (WO2022247007A1)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110570809.8A (CN113487621B) (en) | 2021-05-25 | 2021-05-25 | Medical image grading method, device, electronic equipment and readable storage medium

Publications (2)

Publication Number | Publication Date
CN113487621A | 2021-10-08
CN113487621B (en) | 2024-07-12

Family

ID=77933476

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110570809.8A (Active; granted as CN113487621B (en)) | Medical image grading method, device, electronic equipment and readable storage medium | 2021-05-25 | 2021-05-25

Country Status (2)

Country | Link
CN | CN113487621B (en)
WO | WO2022247007A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116458945B (en)* | 2023-04-25 | 2024-01-16 | 杭州整形医院有限公司 | Intelligent guiding system and method for children facial beauty suture route
CN117133012B (en)* | 2023-08-29 | 2025-07-25 | 上海帮图信息科技有限公司 | Method for detecting sub-frame in building drawing and electronic equipment
CN118037731B (en)* | 2024-04-12 | 2024-07-16 | 泉州医学高等专科学校 | Medical image management system
CN118366621A (en)* | 2024-04-19 | 2024-07-19 | 平安科技(深圳)有限公司 | Medical image analysis method, device, terminal equipment and storage medium
CN118608873B (en)* | 2024-08-06 | 2024-11-08 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment, storage medium and program product

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109447065B (en)* | 2018-10-16 | 2020-10-16 | 杭州依图医疗技术有限公司 | Method and device for identifying mammary gland image
CN109785300A (en)* | 2018-12-27 | 2019-05-21 | 华南理工大学 | A kind of cancer medical image processing method, system, device and storage medium
CN111161279B (en)* | 2019-12-12 | 2023-05-26 | 中国科学院深圳先进技术研究院 | Medical image segmentation method, device and server

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107563123A (en)* | 2017-09-27 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | Method and apparatus for marking medical image
US20200234445A1 (en)* | 2018-04-13 | 2020-07-23 | Bozhon Precision Industry Technology Co., Ltd. | Method and system for classifying diabetic retina images based on deep learning
US20200250398A1 (en)* | 2019-02-01 | 2020-08-06 | Owkin Inc. | Systems and methods for image classification
US10430946B1 (en)* | 2019-03-14 | 2019-10-01 | Inception Institute of Artificial Intelligence, Ltd. | Medical image segmentation and severity grading using neural network architectures with semi-supervised learning techniques
CN111028206A (en)* | 2019-11-21 | 2020-04-17 | 万达信息股份有限公司 | Prostate cancer automatic detection and classification system based on deep learning
CN111986211A (en)* | 2020-08-14 | 2020-11-24 | 武汉大学 | Deep learning-based ophthalmic ultrasonic automatic screening method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SATO, YOICHI et al.: "Artificial intelligence improves the accuracy of residents in the diagnosis of hip fractures: a multicenter study", BMC Musculoskeletal Disorders *
REN Fulong; CAO Peng; WAN Chao; ZHAO Dazhe: "Diabetic retinopathy grading combining cost-sensitive semi-supervised ensemble learning", Journal of Computer Applications (计算机应用), no. 07 *
LIU Zhongli; CHEN Guang; SHAN Zhiyong; JIANG Xueqin: "Spine CT image segmentation based on deep learning", Computer Applications and Software (计算机应用与软件), no. 10 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114529760A (en)* | 2022-01-25 | 2022-05-24 | 北京医准智能科技有限公司 | Self-adaptive classification method and device for thyroid nodules
CN114529760B (en)* | 2022-01-25 | 2022-09-02 | 北京医准智能科技有限公司 | Self-adaptive classification method and device for thyroid nodules
CN114782712A (en)* | 2022-04-29 | 2022-07-22 | 沈阳东软智能医疗科技研究院有限公司 | Method, apparatus, equipment and medium for feature processing based on medical image
CN115018775A (en)* | 2022-05-24 | 2022-09-06 | 阿里巴巴(中国)有限公司 | Image detection method, apparatus, device and storage medium
CN116245891A (en)* | 2022-11-18 | 2023-06-09 | 三峡大学 | Crop area extraction method based on feature interaction and attention decoding
CN117649922A (en)* | 2023-11-23 | 2024-03-05 | 陕西理工大学 | Medical image storage system based on blockchain
CN117649922B (en)* | 2023-11-23 | 2025-09-05 | 陕西理工大学 | A blockchain-based medical image storage system

Also Published As

Publication number | Publication date
WO2022247007A1 (en) | 2022-12-01
CN113487621B (en) | 2024-07-12

Similar Documents

Publication | Title
CN113487621A (en) | Medical image grading method and device, electronic equipment and readable storage medium
CN111652845A (en) | Abnormal cell automatic labeling method and device, electronic equipment and storage medium
CN112446544A (en) | Traffic flow prediction model training method and device, electronic equipment and storage medium
CN111932547B (en) | Method and device for segmenting target object in image, electronic device and storage medium
CN111932534B (en) | Medical image picture analysis method and device, electronic equipment and readable storage medium
CN112699775A (en) | Certificate identification method, device and equipment based on deep learning and storage medium
CN113283446A (en) | Method and device for identifying target object in image, electronic equipment and storage medium
CN112396005A (en) | Biological characteristic image recognition method and device, electronic equipment and readable storage medium
CN112052850A (en) | License plate recognition method and device, electronic equipment and storage medium
CN112507934A (en) | Living body detection method, living body detection device, electronic apparatus, and storage medium
CN111695609A (en) | Target damage degree determination method, target damage degree determination device, electronic device, and storage medium
CN115146865A (en) | Task optimization method based on artificial intelligence and related equipment
CN113707337A (en) | Disease early warning method, device, equipment and storage medium based on multi-source data
CN114187489B (en) | Method and device for detecting abnormal driving risk of vehicle, electronic equipment and storage medium
CN113065609B (en) | Image classification method, device, electronic equipment and readable storage medium
CN113298159A (en) | Target detection method and device, electronic equipment and storage medium
CN113157739A (en) | Cross-modal retrieval method and device, electronic equipment and storage medium
CN112749653A (en) | Pedestrian detection method, device, electronic equipment and storage medium
CN112860905A (en) | Text information extraction method, device and equipment and readable storage medium
CN112132037A (en) | Sidewalk detection method, device, equipment and medium based on artificial intelligence
CN114708461A (en) | Multi-modal learning model-based classification method, device, equipment and storage medium
CN113268665A (en) | Information recommendation method, device and equipment based on random forest and storage medium
CN113420684A (en) | Report recognition method and device based on feature extraction, electronic equipment and medium
CN113704474A (en) | Bank outlet equipment operation guide generation method, device, equipment and storage medium
CN112101481A (en) | Method, device and equipment for screening influence factors of target object and storage medium

Legal Events

Code | Title | Details
PB01 | Publication
SE01 | Entry into force of request for substantive examination
TA01 | Transfer of patent application right | Effective date of registration: 2022-10-10; Address after: 266000 No. 5, Yan'erdao Road, Qingdao, Shandong Province; Applicants after: SHANDONG EYE INSTITUTE and PING AN TECHNOLOGY (SHENZHEN) Co., Ltd.; Address before: 23rd floor, Ping An Financial Center, 5033 Yitian Road, Fu'an Community, Futian Street, Futian District, Shenzhen, Guangdong 518000; Applicant before: PING AN TECHNOLOGY (SHENZHEN) Co., Ltd.
GR01 | Patent grant
