Deep learning-based intracranial hemorrhage CT image segmentation method
Technical Field
The invention relates to the field of artificial intelligence and medical image processing, in particular to a deep learning-based intracranial hemorrhage CT image segmentation method.
Background
Intracranial hemorrhage (ICH) refers to bleeding caused by the rupture of cerebral blood vessels, which compresses the surrounding nervous tissue and induces functional disorders. Because car accidents, trauma, hypertension, vascular diseases, brain tumors and the like can all cause intracranial hemorrhage, it is a common condition; it deteriorates rapidly and carries a high risk of disability or death, so timely diagnosis and treatment of intracranial hemorrhage patients is extremely important. With the development of imaging technology, clinical diagnosis of intracranial hemorrhage mainly relies on a radiologist examining computed tomography (CT) images to detect and locate the hemorrhage region. However, because of the complex brain structure, the inconsistent size and shape of hemorrhage regions, the low contrast of brain CT images and the blurred boundaries of hemorrhage regions, manual delineation of the hemorrhage region is time-consuming and labor-intensive and is subject to errors of subjective judgment.
Disclosure of Invention
To remedy the defects of existing intracranial hemorrhage CT image segmentation techniques and to solve the problem that large differences in hemorrhage region size impair segmentation, the invention provides a deep learning-based intracranial hemorrhage CT image segmentation method.
The technical scheme of the invention comprises the following steps:
1) acquiring an intracranial hemorrhage CT image;
2) preprocessing the intracranial hemorrhage CT images, and taking part of the preprocessed intracranial hemorrhage CT images as training samples;
3) training a deep convolutional neural network with the training samples to obtain a trained deep convolutional neural network;
4) inputting the preprocessed intracranial hemorrhage CT images into the trained deep convolutional neural network for image segmentation, and outputting the segmented intracranial hemorrhage CT images.
In the above deep learning-based intracranial hemorrhage CT image segmentation method, step 2) comprises the following specific steps:
for the intracranial hemorrhage CT images received by the image preprocessing module, each image is resized to a preset image size: images smaller than the preset size are zero-padded at the edges, and images larger than the preset size are center-cropped (a sketch of this step is given after these steps);
for the resized intracranial hemorrhage CT image, the intracranial region containing the hematoma is extracted by a region growing method, so as to avoid interference from other high-CT-value tissue, such as the skull, during segmentation; the region growing method comprises the following steps:
generating a zero matrix with the same size as each original intracranial hemorrhage CT image;
selecting, on the original image, seed points whose pixel values satisfy a preset condition, adding the seed points to the growth region, and setting the points at the same positions as the seed points in the zero matrix to 1;
randomly selecting an unmarked pixel from the growth region and computing the gray-value difference between this pixel and each of its neighborhood pixels; if the difference satisfies the threshold condition, the neighborhood pixel is added to the growth region and the point at the same position in the zero matrix is set to 1; after all neighborhood pixels of the selected pixel have been processed, the selected pixel is marked within the growth region;
if no unmarked pixels remain in the growth region, region growing is finished; otherwise another unmarked pixel is selected and the previous step is repeated;
and performing a morphological opening on the generated mask matrix with a 5 × 5 rectangular structuring element to smooth the image boundary and remove fine protrusions, and multiplying the resulting matrix element-wise with the original image, finally obtaining an image containing only the intracranial structure.
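A minimal sketch of the size-adjustment step above, assuming a single 2-D CT slice stored as a NumPy array and a hypothetical preset size of 512 × 512 (the actual preset size is not specified here):

```python
import numpy as np

def adjust_to_preset_size(image, preset=(512, 512)):
    """Zero-pad images smaller than the preset size and center-crop larger ones."""
    h, w = image.shape
    ph, pw = preset

    # Edge filling with 0 pixels for images smaller than the preset size.
    pad_h, pad_w = max(ph - h, 0), max(pw - w, 0)
    image = np.pad(image,
                   ((pad_h // 2, pad_h - pad_h // 2),
                    (pad_w // 2, pad_w - pad_w // 2)),
                   mode="constant", constant_values=0)

    # Center-crop for images larger than the preset size.
    h, w = image.shape
    top, left = (h - ph) // 2, (w - pw) // 2
    return image[top:top + ph, left:left + pw]
```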
In the above deep learning-based intracranial hemorrhage CT image segmentation method, the specific step of step 3) is: converting the image training sample data into tensors and feeding them into the network input layer; propagating the input forward layer by layer to obtain a prediction; computing the error between the prediction and the ground truth and feeding it back layer by layer to update the weights and biases of the neurons in each layer; then repeating the forward propagation, and iterating this process until the training data are fitted, thereby obtaining the trained deep convolutional neural network.
In the above deep learning-based intracranial hemorrhage CT image segmentation method, the specific steps of step 4) are: extracting high-level semantic information from the image with the deep convolutional neural network, judging at the pixel level whether each pixel belongs to an intracranial hemorrhage region, and thereby segmenting the intracranial hemorrhage CT image.
The invention has the following beneficial effects: addressing the difficulties in intracranial hemorrhage CT image segmentation, the invention automatically segments intracranial hemorrhage CT images with a deep convolutional neural network incorporating attention. The attention mechanism focuses on and enhances the target region, and the multi-scale convolutional layers extract image features at different receptive-field sizes, which is effective for segmenting target regions of extreme sizes. The hemorrhage region can thus be segmented with high precision and efficiency, meeting basic clinical requirements, avoiding errors introduced by subjective judgment in manual segmentation, and saving labor cost; the invention therefore has positive significance for assisting radiologists in diagnosing intracranial hemorrhage.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a structural diagram of a deep convolutional neural network in the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
As shown in fig. 1, the present invention provides a method for segmenting an intracranial hemorrhage CT image based on deep learning, which specifically comprises the following steps:
1) acquiring an intracranial hemorrhage CT image;
2) preprocessing the intracranial hemorrhage CT images; because the intracranial hemorrhage CT image data set used comes from different imaging devices at different hospitals, the images of different cases differ in size, so the intracranial hemorrhage CT images need to be adjusted to the preset size required by the network model input. The process is: for the obtained intracranial hemorrhage CT images, each image is resized to the preset image size: images smaller than the preset size are zero-padded at the edges, and images larger than the preset size are center-cropped;
for the resized intracranial hemorrhage CT image, the intracranial region containing the hematoma is extracted by a region growing method, so as to avoid interference from other high-CT-value tissue, such as the skull, during segmentation; the region growing method comprises the following steps (a code sketch is given after these steps):
generating a zero matrix with the same size as each original intracranial hemorrhage CT image;
selecting, on the original image, seed points whose pixel values satisfy a preset condition, adding the seed points to the growth region, and setting the points at the same positions as the seed points in the zero matrix to 1;
randomly selecting an unmarked pixel from the growth region and computing the gray-value difference between this pixel and each of its neighborhood pixels; if the difference satisfies the threshold condition, the neighborhood pixel is added to the growth region and the point at the same position in the zero matrix is set to 1; after all neighborhood pixels of the selected pixel have been processed, the selected pixel is marked within the growth region;
if no unmarked pixels remain in the growth region, region growing is finished; otherwise another unmarked pixel is selected and the previous step is repeated;
and performing a morphological opening on the generated mask matrix with a 5 × 5 rectangular structuring element to smooth the image boundary and remove fine protrusions, and multiplying the resulting matrix element-wise with the original image, finally obtaining an image containing only the intracranial structure.
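A minimal sketch of the region-growing and opening steps above, using NumPy and SciPy. The seed-selection rule, the gray-value difference threshold and the use of an 8-neighborhood are assumptions; the description only requires that seeds and gray-value differences satisfy preset conditions.

```python
import numpy as np
from collections import deque
from scipy import ndimage

def extract_intracranial_region(image, seeds, diff_threshold=10):
    """Grow a region from `seeds` and mask the original image with it.

    `image` is a 2-D CT slice (NumPy array); `seeds` is a list of (row, col)
    points whose pixel values satisfy the seed condition; `diff_threshold`
    is a hypothetical gray-value difference threshold.
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=np.uint8)        # zero matrix of the same size
    queue = deque(seeds)
    for r, c in seeds:
        mask[r, c] = 1                             # mark seed positions with 1

    while queue:                                   # stop when no unprocessed pixels remain
        r, c = queue.popleft()
        for dr in (-1, 0, 1):                      # visit the 8-neighborhood
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] == 0:
                    if abs(int(image[nr, nc]) - int(image[r, c])) <= diff_threshold:
                        mask[nr, nc] = 1           # add the neighbor to the growth region
                        queue.append((nr, nc))

    # 5 x 5 rectangular opening smooths the boundary and removes fine protrusions,
    # then the mask is multiplied element-wise with the original image.
    opened = ndimage.binary_opening(mask, structure=np.ones((5, 5)))
    return image * opened.astype(image.dtype)
```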
Part of the preprocessed intracranial hemorrhage CT images are then taken as training samples;
3) training the deep convolutional neural network with the training samples to obtain a trained deep convolutional neural network. The process is: the image training sample data are converted into tensors and fed into the network input layer; the input is propagated forward layer by layer to obtain a prediction; the error between the prediction and the ground truth is computed and fed back layer by layer, updating the weights and biases of the neurons in each layer; the forward propagation is then repeated, and this process is iterated until the training data are fitted, yielding the trained deep convolutional neural network.
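A minimal training-loop sketch in PyTorch, following the forward-propagation / error back-propagation / weight-update process described above. The batch size, learning rate, optimizer (Adam) and binary cross-entropy loss are assumptions, and `model` can be any segmentation network producing a single-channel logit map, such as the encoder-decoder network described below.

```python
import numpy as np
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

def train(model, images, labels, epochs=50, lr=1e-3, device="cpu"):
    """Fit `model` to pre-processed CT slices (`images`) and their masks (`labels`)."""
    x = torch.as_tensor(np.asarray(images), dtype=torch.float32).unsqueeze(1)  # N x 1 x H x W
    y = torch.as_tensor(np.asarray(labels), dtype=torch.float32).unsqueeze(1)
    loader = DataLoader(TensorDataset(x, y), batch_size=4, shuffle=True)

    model = model.to(device)
    criterion = nn.BCEWithLogitsLoss()                 # loss choice is an assumption
    optimizer = optim.Adam(model.parameters(), lr=lr)  # optimizer choice is an assumption

    for _ in range(epochs):
        for xb, yb in loader:
            xb, yb = xb.to(device), yb.to(device)
            pred = model(xb)                           # forward propagation layer by layer
            loss = criterion(pred, yb)                 # error between prediction and ground truth
            optimizer.zero_grad()
            loss.backward()                            # error fed back layer by layer
            optimizer.step()                           # update weights and biases
    return model
```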
The deep convolutional neural network is shown in fig. 2. The network has an encoding-decoding structure. The encoding part consists of five stages, named the encoding 1 stage, encoding 2 stage, encoding 3 stage, encoding 4 stage and encoding 5 stage in the figure; referring to the encoding X stage on the right side of fig. 2, each stage of the encoding part consists of two convolutional layers with a 3 × 3 kernel, and a ReLU activation function is used after each convolutional layer. The stages of the encoding part are connected by max-pooling layers with 2 × 2 filters; the pooling layers gradually reduce the spatial resolution of the output feature maps and are used to extract position information and deep semantic information of the image. The decoding part consists of four stages, named the decoding 1 stage, decoding 2 stage, decoding 3 stage and decoding 4 stage in the figure; as shown by the decoding X stage on the right side of fig. 2, each stage of the decoding part likewise consists of two convolutional layers with a 3 × 3 kernel, and a ReLU activation function is used after each convolutional layer. Each stage of the decoding part is connected to the preceding stage through an up-sampling layer, the first decoding stage being connected to the fifth encoding stage through an up-sampling layer; the up-sampling layers restore the spatial resolution of the feature maps layer by layer until the feature map regains the size of the original image.
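A sketch of the basic encoding/decoding building blocks described above (two 3 × 3 convolutions each followed by ReLU, 2 × 2 max pooling between encoding stages, and up-sampling between decoding stages), written in PyTorch; the channel widths and the bilinear up-sampling mode are assumptions, since they are not specified here.

```python
from torch import nn

class ConvStage(nn.Module):
    """One encoding or decoding stage: two 3x3 convolutions, each followed by ReLU."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Encoding stages are connected by 2x2 max pooling, which halves the spatial
# resolution; decoding stages are connected by up-sampling, which restores it
# layer by layer until the feature map regains the size of the original image.
downsample = nn.MaxPool2d(kernel_size=2)
upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
```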
The first four stages of the encoding part of the deep convolutional neural network are joined to the corresponding stages of the decoding part through skip connections containing attention units, by which concatenation (splicing) operations are performed; referring to fig. 2: the encoding 1 stage and the decoding 3 stage are connected to the same attention unit, and the output of that attention unit is concatenated with the feature map of the decoding 4 stage; the encoding 2 stage and the decoding 2 stage are connected to the same attention unit, whose output is concatenated with the feature map of the decoding 3 stage; the encoding 3 stage and the decoding 1 stage are connected to the same attention unit, whose output is concatenated with the feature map of the decoding 2 stage; and the encoding 4 stage and the encoding 5 stage are connected to the same attention unit, whose output is concatenated with the feature map of the decoding 1 stage.
The attention unit uses high-level semantic information to obtain an attention coefficient, which filters out useless information in the low-level feature map and helps the decoding part focus on recovering the detailed features of the target region, wherein:
the feature diagram matrix of a stage before a certain stage of a decoding part is restored to the same size as the feature diagram of a corresponding encoding stage through an up-sampling layer, the two feature diagram matrices are respectively subjected to convolution operation with the convolution kernel size of 1 multiplied by 1, then are added point by point, the added result is input into a parallel multi-scale convolution module through a ReLU activation function, splicing operation is executed after convolution layers with the convolution kernel sizes of 1 multiplied by 1, 3 multiplied by 3 and 5 multiplied by 5 are respectively passed, the fused result is then subjected to convolution operation with the convolution kernel size of 1 multiplied by 1, and an attention coefficient alpha is obtained through a Sigmoid activation function, and the value [0,1] is taken. And multiplying the obtained attention coefficient alpha with the characteristic diagram matrix of the coding stage to eliminate the influence of irrelevant areas, improve the weight of the target area, and finally splicing the output result and the characteristic diagram of the decoding stage to supplement lost spatial information when the resolution of the characteristic diagram is reduced.
4) Inputting the preprocessed intracranial hemorrhage CT image into the trained deep convolutional neural network for image segmentation, and outputting the segmented intracranial hemorrhage CT image.
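A minimal inference sketch, assuming the trained network outputs a single-channel logit map and that a 0.5 probability threshold (an assumption) defines the hemorrhage region:

```python
import torch

def segment(model, ct_slice, device="cpu"):
    """Return a binary hemorrhage mask for one pre-processed CT slice."""
    model.eval()
    with torch.no_grad():
        x = torch.as_tensor(ct_slice, dtype=torch.float32)[None, None].to(device)  # 1 x 1 x H x W
        prob = torch.sigmoid(model(x))               # pixel-level hemorrhage probability
        mask = (prob > 0.5).squeeze().cpu().numpy().astype("uint8")
    return mask
```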
Finally, it should be noted that the above-mentioned embodiments are only intended to illustrate the design idea and embodiments of the present invention, and not to limit the same, and those skilled in the art should understand that other modifications or equivalent substitutions for the technical solution of the present invention are included in the scope defined by the claims of the present application.