Summary of the invention
The main purpose of the present invention is to provide a method and system for automatic tumour segmentation in CT images, intended to solve the technical problem of low segmentation precision for CT images in the prior art.
To achieve the above object, a first aspect of the present invention provides an automatic tumour segmentation method for CT images, comprising: performing data augmentation on raw image data to obtain augmented data; normalizing the augmented data to obtain normalized data; feeding the normalized data into a trained processing network to obtain a segmented image; and performing noise reduction on the segmented image.
Further, performing data augmentation on the raw image data comprises: augmenting the raw image data based on a translation-and-rotation principle, or augmenting the raw image data based on a random elastic deformation principle.
Further, normalizing the augmented data comprises: normalizing the augmented data and the liver tumour gold standard according to a linear normalization principle to obtain linearly normalized image data; and normalizing the data distribution of the linearly normalized image data to obtain the normalized data.
Further, the training method of the trained processing network comprises: constructing first convolutional layers, first rectified linear unit layers, pooling layers, dropout layers and down-sampling layers to form a contracting path; extracting and encoding the sample noise-reduced data with the first convolutional layers, first rectified linear unit layers, pooling layers, dropout layers and down-sampling layers to generate encoded data; constructing second convolutional layers, second rectified linear unit layers and up-sampling layers to form an expanding path; decoding and segmenting the encoded data with the second convolutional layers, second rectified linear unit layers and up-sampling layers to generate decoded data; and constructing a probability output layer that outputs the decoded data.
Further, constructing the first convolutional layers, the first rectified linear unit layers, the pooling layers, the dropout layers and the down-sampling layers comprises: successively constructing three first 3*3 convolutional layers and two first 2*2 convolutional layers, the numbers of feature channels of the successively constructed first 3*3 convolutional layers and first 2*2 convolutional layers starting at 64 and doubling in turn; constructing a rectified linear unit layer after each first 3*3 convolutional layer; constructing pooling layers between adjacent first 3*3 convolutional layers, between adjacent first 2*2 convolutional layers, and between adjacent first 3*3 and first 2*2 convolutional layers; constructing a first dropout layer after the first of the first 2*2 convolutional layers that follows the first 3*3 convolutional layers, and constructing the second first 2*2 convolutional layer after that dropout layer; and constructing a second dropout layer after the second first 2*2 convolutional layer. Constructing the second convolutional layers, the second rectified linear unit layers and the up-sampling layers comprises: successively constructing four second convolution blocks after the second dropout layer, each block consisting of an up-sampling layer and two second 3*3 convolutional layers, the numbers of feature channels of the successively constructed second convolution blocks starting from that of the second first 2*2 convolutional layer and halving in turn; and constructing the up-sampling layer before the second 3*3 convolutional layers of each block.
Further, the contracting path and the expanding path are arranged as mirror images of each other and are cascaded at corresponding levels.
Further, the training method of the trained processing network further comprises constructing an adversarial network, wherein constructing the adversarial network comprises: constructing first data, which serve as the standard reference given by the segmentation gold standard and are obtained as the dot product of the binary liver tumour gold standard and the original liver grayscale image; constructing second data, which are obtained as the dot product of the segmented image and the original liver grayscale image; and constructing a loss function and feeding the first data and the second data into the loss function to capture long- and short-range pixel-level spatial features from different levels.
A second aspect of the present invention provides an automatic tumour segmentation system for CT images, comprising: an augmentation module for augmenting raw image data to obtain augmented data; a normalization module for normalizing the augmented data to obtain normalized data; an image segmentation module for feeding the noise-reduced data into the trained processing network to obtain a segmented image; and a noise reduction module for performing noise reduction on the segmented image.
A third aspect of the present invention provides an electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements any one of the methods described above.
A fourth aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements any one of the methods described above.
The present invention provides an automatic tumour segmentation method for CT images with the following beneficial effects: by augmenting the raw image data, the amount of information is enriched without affecting the processing network's handling of the real information, while satisfying the robustness requirement with respect to varying grayscale information; this improves the generalization of the segmentation model, enables it to be applied to a wider range of data sets, and widens the scope of application of the processing network. In addition, since the gray values of the original images obtained from different patients differ greatly, normalization facilitates the segmentation processing network's handling of the image data, reduces the differences among original images caused by different CT scanners, and improves the applicability and precision of the processing network's results.
Specific embodiment
In order to make the purpose, features and advantages of the present invention more apparent and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative work shall fall within the protection scope of the present invention.
Referring to Fig. 1, an automatic tumour segmentation method for CT images comprises: S1, performing data augmentation on raw image data to obtain augmented data; S2, normalizing the augmented data to obtain normalized data; S3, feeding the normalized data into the trained processing network to obtain a segmented image; and S4, performing noise reduction on the segmented image.
Performing data augmentation on the raw image data comprises: augmenting the raw image data based on a translation-and-rotation principle, or augmenting the raw image data based on a random elastic deformation principle.
Since the raw image data are rather monotonous, containing information in only a single upright orientation, they are deficient in information richness for the processing network to be trained, and such monotonous data lead to weak generalization of the processing network during learning. The raw image data therefore need to be expanded and augmented to obtain augmented data, thereby enhancing the generalization ability of the processing network. In this embodiment, the data augmentation follows an invariance principle, and the concrete operations that satisfy this principle are translation, rotation and elastic deformation. Processing the raw data according to the invariance principle not only preserves the robustness of the training data with respect to the grayscale information of the original images, but also enriches the information without affecting the processing network's learning of the real information, enhancing the generalization of the processing network and enabling it to be applied to a wider range of data sets. In this embodiment, the raw image data are augmented by translation, rotation and elastic deformation to obtain the augmented data, as sketched below.
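The translation, rotation and elastic-deformation augmentation described above can be illustrated with a minimal Python sketch. The function name `augment_slice`, the shift and angle ranges, and the elastic-deformation parameters (alpha, sigma) are illustrative assumptions, not values fixed by this embodiment.

```python
import numpy as np
from scipy.ndimage import shift, rotate, gaussian_filter, map_coordinates

def augment_slice(image: np.ndarray, label: np.ndarray, rng=np.random):
    """Apply the same random translation, rotation and elastic deformation
    to a CT slice and its tumour mask so the pair stays aligned."""
    # Random translation (pixels) and rotation (degrees); ranges are assumed.
    dy, dx = rng.uniform(-10, 10, size=2)
    angle = rng.uniform(-15, 15)
    image = rotate(shift(image, (dy, dx), order=1), angle, reshape=False, order=1)
    label = rotate(shift(label, (dy, dx), order=0), angle, reshape=False, order=0)

    # Random elastic deformation: smoothed random displacement fields.
    alpha, sigma = 34.0, 4.0   # deformation strength / smoothness (assumed values)
    dx_f = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy_f = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]), indexing="ij")
    coords = np.array([ys + dy_f, xs + dx_f])
    image = map_coordinates(image, coords, order=1, mode="reflect")
    label = map_coordinates(label, coords, order=0, mode="reflect")
    return image, label
```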
Normalizing the augmented data comprises: normalizing the augmented data and the liver tumour gold standard according to a linear normalization principle to obtain linearly normalized image data; and normalizing the data distribution of the linearly normalized image data to obtain the normalized data.
Owing to differences among CT scanners, the gray values of the CT images obtained by scanning patients differ greatly. In this embodiment, the augmented data and the liver tumour gold standard are normalized, which facilitates the training of the processing network and reduces the gray-value differences among CT images produced by different CT scanners. Specifically, the gray data are normalized to the interval [0, 255] by linear normalization; the linear normalization formula is:

$$X_{norm} = \frac{X - X_{min}}{X_{max} - X_{min}} \times 255$$
In the linear normalization formula, $X_{norm}$ is the normalized data, $X$ is the augmented data, and $X_{max}$ and $X_{min}$ are respectively the maximum and minimum values in the augmented data set. In this embodiment, before being fed into the processing network, the liver tumour data, whose grayscale is binary (0/255), are subjected to 0-1 standardization to serve as the tumour gold standard: the data are divided by 255 and thresholded at 0.5, values above 0.5 being set to 1 and values below 0.5 being set to 0.
After the augmented data have been linearly normalized, the data distribution is normalized as follows: the liver data, whose grayscale lies in the interval 0-255 before being fed into the processing network, undergo zero-mean standardization, so that the augmented data set is normalized to a data set with a normal distribution of mean 0 and variance 1. The normalization formula of the data distribution is:

$$X' = \frac{X - \mu}{\sigma}$$
In the normalization formula of the data distribution, $\mu$ and $\sigma$ are respectively the mean and the standard deviation of the raw data set.
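A minimal sketch of the two normalization steps and of the 0-1 standardization of the binary tumour gold standard is given below; the function names are illustrative assumptions, and the [0, 255] target interval follows the description above.

```python
import numpy as np

def linear_normalize(x: np.ndarray) -> np.ndarray:
    """Linear (min-max) normalization of the augmented data to the interval [0, 255]."""
    return (x - x.min()) / (x.max() - x.min()) * 255.0

def binarize_gold_standard(mask: np.ndarray) -> np.ndarray:
    """0-1 standardization of the 0/255 binary liver-tumour gold standard:
    divide by 255 and threshold at 0.5 (above 0.5 -> 1, below -> 0)."""
    return (mask / 255.0 > 0.5).astype(np.float32)

def standardize_distribution(x: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance normalization of the data distribution."""
    mu, sigma = x.mean(), x.std()
    return (x - mu) / sigma
```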
There is a certain amount of noise interference in the processing of the raw image data; the image features of this noise are fairly distinct and its proportion of the original image is small, so noise reduction must be applied to the segmented image. In this embodiment, the interfering noise is removed by filtering, carried out with the specialized medical-image-processing library SimpleITK. Data statistics based on morphological information show that a region is noise when its Feret diameter is less than 7; at that size the corresponding perimeter and the number of occupied pixels are minimal and lie within the noise range, so such regions can be filtered out without affecting the recognition of other tumours. FeretDiameter = 7 is therefore taken as the threshold, and parts smaller than this value are filtered out, achieving noise-reduction filtering and making the final segmented image more accurate.
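A minimal sketch of the described Feret-diameter filtering with SimpleITK is shown below; the function name and the assumption that the segmentation is given as a binary NumPy array are illustrative, and the threshold of 7 follows this embodiment.

```python
import numpy as np
import SimpleITK as sitk

def remove_small_noise(binary_mask: np.ndarray, feret_threshold: float = 7.0) -> np.ndarray:
    """Drop connected components whose Feret diameter is below the threshold."""
    image = sitk.GetImageFromArray(binary_mask.astype(np.uint8))
    labels = sitk.ConnectedComponent(image)

    stats = sitk.LabelShapeStatisticsImageFilter()
    stats.ComputeFeretDiameterOn()
    stats.Execute(labels)

    label_array = sitk.GetArrayFromImage(labels)
    cleaned = np.zeros_like(binary_mask, dtype=np.uint8)
    for label in stats.GetLabels():
        if stats.GetFeretDiameter(label) >= feret_threshold:
            cleaned[label_array == label] = 1   # keep components large enough to be tumours
    return cleaned
```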
The training method of the trained processing network comprises: constructing first convolutional layers, first rectified linear unit (ReLU) layers, pooling layers, dropout layers and down-sampling layers to form the contracting path; extracting and encoding the sample noise-reduced data with the first convolutional layers, first ReLU layers, pooling layers, dropout layers and down-sampling layers to generate encoded data; constructing second convolutional layers, second ReLU layers and up-sampling layers to form the expanding path; decoding and segmenting the encoded data with the second convolutional layers, second ReLU layers and up-sampling layers to generate decoded data; and constructing a probability output layer that outputs the decoded data.
The sample noise-reduced data are prepared as follows: data augmentation is applied to the sample image data to obtain augmented sample data; the augmented sample data are normalized to obtain normalized sample data; and noise reduction is applied to the normalized sample data to obtain the sample noise-reduced data. The augmentation of the sample image data is identical to the augmentation of the raw image data, the normalization of the augmented sample data is identical to the normalization of the augmented data, and the noise reduction of the normalized sample data is identical to the noise reduction of the normalized data.
Constructing the first convolutional layers, first ReLU layers, pooling layers, dropout layers and down-sampling layers comprises: successively constructing three first 3*3 convolutional layers and two first 2*2 convolutional layers, the numbers of feature channels of the successively constructed first 3*3 and first 2*2 convolutional layers starting at 64 and doubling in turn; constructing a ReLU layer after each first 3*3 convolutional layer; constructing pooling layers between adjacent first 3*3 convolutional layers, between adjacent first 2*2 convolutional layers, and between adjacent first 3*3 and first 2*2 convolutional layers; constructing a first dropout layer after the first of the first 2*2 convolutional layers that follows the first 3*3 convolutional layers, constructing the second first 2*2 convolutional layer after that dropout layer, and constructing a second dropout layer after the second first 2*2 convolutional layer. Constructing the second convolutional layers, second ReLU layers and up-sampling layers comprises: successively constructing four second convolution blocks after the second dropout layer, each block consisting of an up-sampling layer and two second 3*3 convolutional layers, the numbers of feature channels of the successively constructed second convolution blocks starting from that of the second first 2*2 convolutional layer and halving in turn, the up-sampling layer being constructed before the second 3*3 convolutional layers of each block.
Referring to Fig. 2, the processing network as a whole has a U-shaped symmetrical structure divided into an encoding stage and a decoding stage. The encoding stage is the feature-extraction part, carried out by the contracting path, which forms the left half of the U-shaped network; the contracting path operates essentially like a classical convolutional neural network and is divided into 5 blocks. The first three blocks each consist of two consecutive 3x3 convolutions for feature extraction (each convolution followed by a ReLU) and one 2x2 max-pooling operation for down-sampling; the number of feature channels of the initial image starts at 64, and after every down-sampling operation the number of feature channels is doubled. The latter two blocks additionally introduce dropout layers on top of the structure of the preceding blocks, to prevent over-fitting during network training. Meanwhile, in the contracting path, the idea of residual networks is adopted in the convolutional feature-extraction part of each block: through a shortcut based on pixel-wise addition, the raw input is passed through a 1x1 convolution so that the channel numbers match and is connected directly to the output of the convolution operations; this supplements and increases the information-transmission capacity of the network and improves its ability to learn features. The decoding stage is the feature-restoration part, carried out by the expanding path, which forms the right half of the U-shaped network; it can be divided into 4 blocks plus a final probability output layer (sigmoid). Each of the four blocks consists of one up-sampling layer (implemented as a 2x2 deconvolution) and two 3x3 convolutions (each followed by a ReLU), and with every up-sampling deconvolution the number of feature channels of the image is halved. Finally, a 1x1 sigmoid layer outputs the probability map to which each pixel belongs. A minimal sketch of such an architecture follows.
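The following PyTorch sketch illustrates the U-shaped encoder-decoder just described. It is a simplified illustration under assumptions: the exact placement of pooling and dropout, the 0.5 dropout rate, and the use of concatenation for the mirrored cascade are inferred from the description above rather than taken from the patent figures.

```python
import torch
import torch.nn as nn

class ResConvBlock(nn.Module):
    """Two 3x3 conv + ReLU layers with a 1x1-conv shortcut added pixel-wise (residual idea)."""
    def __init__(self, in_ch, out_ch, dropout=0.0):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)   # 1x1 conv keeps channel counts consistent
        self.drop = nn.Dropout2d(dropout) if dropout > 0 else nn.Identity()

    def forward(self, x):
        return self.drop(self.convs(x) + self.shortcut(x))

class TumourSegUNet(nn.Module):
    """U-shaped segmentation network: 5 contracting blocks (64..1024 channels, dropout on the
    last two), 4 expanding blocks with 2x2 deconvolutions, mirrored skip concatenations,
    and a 1x1 sigmoid probability output layer. Input H and W are assumed divisible by 16."""
    def __init__(self, in_ch=1):
        super().__init__()
        chs = [64, 128, 256, 512, 1024]
        self.enc = nn.ModuleList(
            ResConvBlock(c_in, c_out, dropout=0.5 if i >= 3 else 0.0)
            for i, (c_in, c_out) in enumerate(zip([in_ch] + chs[:-1], chs)))
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(c_in, c_out, 2, stride=2)
            for c_in, c_out in zip(chs[::-1][:-1], chs[::-1][1:]))
        self.dec = nn.ModuleList(
            nn.Sequential(nn.Conv2d(2 * c, c, 3, padding=1), nn.ReLU(inplace=True),
                          nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True))
            for c in chs[::-1][1:])
        self.out = nn.Sequential(nn.Conv2d(chs[0], 1, 1), nn.Sigmoid())

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < len(self.enc) - 1:          # down-sample between blocks; channels double next block
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))   # mirrored cascade (concatenation)
        return self.out(x)                              # per-pixel probability map
```

For a 512x512 CT slice, `TumourSegUNet()(torch.randn(1, 1, 512, 512))` yields a (1, 1, 512, 512) probability map.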
The contracting path and the expanding path are arranged as mirror images of each other and are cascaded at corresponding levels; this mirrored cascade supplements the missing information and further enriches the information available to the network.
The training method of the trained processing network further comprises constructing an adversarial network, which comprises: constructing first data, which serve as the standard reference given by the segmentation gold standard and are obtained as the dot product of the binary liver tumour gold standard and the original liver grayscale image; constructing second data, which are obtained as the dot product of the segmented image and the original liver grayscale image; and constructing a loss function and feeding the first data and the second data into the loss function to capture long- and short-range pixel-level spatial features from different levels.
Referring to Fig. 3, the input of the adversarial network architecture is strictly divided into two parts. One part is the segmentation gold standard, which serves as the standard-reference input and is obtained by multiplying (dot product) the provided binary liver tumour gold standard (ground truth) with the original liver grayscale image; it is denoted label_mask. The other part is the prediction of the segmentation network: the final binary segmentation result predicted by the segmentation network is multiplied (dot product) with the original liver grayscale image and denoted output_mask. The structure of the adversarial network is similar to the encoding part of the segmentation network; it takes label_mask and output_mask as inputs, and its loss function is set to the MAE (Mean Absolute Error). This loss function can capture well the long- and short-range pixel-level spatial features from different levels (including high-, middle- and low-level information), thereby realizing a multi-level comparison and correction of the image feature information. The adversarial network loss function computes the gap between the standard mask and the predicted mask and is combined with the loss function of the segmentation network as the overall adjustment function of the final segmentation network, so that the adversarial network feeds back to and updates the weights of the generated segmentation model, achieving further optimization. A minimal sketch of the two mask inputs is given below.
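The sketch below shows how the two adversarial-network inputs can be formed; the tensor shapes and the function name are assumptions for illustration only.

```python
import torch

def make_adversarial_inputs(liver_gray: torch.Tensor,
                            gold_standard: torch.Tensor,
                            prediction: torch.Tensor):
    """Build the two adversarial-network inputs by element-wise (dot) product
    with the original liver grayscale image.

    liver_gray:    original liver grayscale image, e.g. shape (B, 1, H, W)
    gold_standard: binary liver-tumour ground-truth mask (same shape)
    prediction:    binary segmentation predicted by the segmentation network (same shape)
    """
    label_mask = gold_standard * liver_gray    # standard-reference input (label_mask)
    output_mask = prediction * liver_gray      # predicted-segmentation input (output_mask)
    return label_mask, output_mask
```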
The loss function is built using the Dice coefficient as the evaluation measure; the formula is:

$$l_{dice} = \frac{2\,|s_1 \cap s_2| + smooth}{|s_1| + |s_2| + smooth}$$
Here $s_1$ and $s_2$ are respectively the actual value and the predicted value, and $smooth$ is a parameter that increases the smoothness of the fitted curve; it is similar to an infinitesimal variable, and its introduction makes the function smoother.
The loss function of the adversarial network is the MAE (Mean Absolute Error):

$$l_{mae} = \frac{1}{L}\sum_{i=1}^{L}\left\| f_{gt}^{\,i} - f_{pred}^{\,i} \right\|_{1}$$
Here $L$ is the total number of layers of the adversarial network, $f_{gt}^{\,i}$ is the feature-extraction image of the $i$-th layer of the network for the input gold-standard mask, and $f_{pred}^{\,i}$ is the feature-extraction image of the $i$-th layer of the network for the input predicted segmentation mask. This loss function can capture well the long- and short-range pixel-level spatial features from different levels (including high-, middle- and low-level information), thereby realizing a multi-level comparison and correction of the image feature information.
The overall loss function is: $loss = l_{mae} - l_{dice}$.
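The Dice term, the multi-level MAE term and their combination can be sketched as follows; the per-layer averaging of the MAE and the value of `smooth` are assumptions consistent with the formulas above.

```python
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, smooth: float = 1e-5) -> torch.Tensor:
    """Smoothed Dice coefficient l_dice between predicted and actual masks."""
    intersection = (pred * target).sum()
    return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)

def adversarial_mae(gold_features, pred_features) -> torch.Tensor:
    """l_mae: mean absolute error between the adversarial network's per-layer features for the
    gold-standard mask input and for the predicted-mask input, averaged over the L layers."""
    return sum(torch.mean(torch.abs(g - p))
               for g, p in zip(gold_features, pred_features)) / len(gold_features)

def total_loss(gold_features, pred_features, pred_mask, target_mask) -> torch.Tensor:
    """Overall objective loss = l_mae - l_dice: minimizing it reduces the multi-level
    feature discrepancy while increasing the Dice overlap of the segmentation."""
    return adversarial_mae(gold_features, pred_features) - dice_coefficient(pred_mask, target_mask)
```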
An embodiment of the present application provides an automatic tumour segmentation system for CT images, comprising: an augmentation module for augmenting raw image data to obtain augmented data; a normalization module for normalizing the augmented data to obtain normalized data; an image segmentation module for feeding the noise-reduced data into the trained processing network to obtain a segmented image; and a noise reduction module for performing noise reduction on the segmented image.
An embodiment of the present application provides an electronic device. Referring to Fig. 4, the electronic device comprises: a memory 601, a processor 602, and a computer program stored on the memory 601 and executable on the processor 602; when the processor 602 executes the computer program, the automatic tumour segmentation method described in the foregoing embodiments is implemented.
Further, the electronic device further comprises: at least one input device 603 and at least one output device 604.
The above-mentioned memory 601, processor 602, input device 603 and output device 604 are connected by a bus 605.
The input device 603 may specifically be a camera, a touch panel, a physical button or a mouse, etc. The output device 604 may specifically be a display screen.
The memory 601 may be a high-speed random access memory (RAM), or a non-volatile memory such as a disk memory. The memory 601 is used to store a set of executable program code, and the processor 602 is coupled to the memory 601.
Further, an embodiment of the present application also provides a computer-readable storage medium, which may be provided in the electronic device of any of the above embodiments and may be the memory 601 in the embodiment shown in Fig. 4. A computer program is stored on the computer-readable storage medium, and when the program is executed by the processor 602, the automatic tumour segmentation method described in the foregoing method embodiments is implemented.
Further, the computer-readable storage medium may also be a USB flash disk, a mobile hard disk, a read-only memory (ROM, Read-Only Memory), a RAM, a magnetic disk or an optical disk, or any other medium that can store program code.
In the several embodiments provided in this application, it should be understood that the disclosed method may be implemented in other ways. For example, the division into modules is only a division by logical function; there may be other divisions in actual implementation, for example multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or modules, and may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
It should be noted that, for the sake of brevity, the foregoing method embodiments are described as a series of action combinations; however, those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
The above is a description of the automatic tumour segmentation method and system for CT images provided by the present invention. For those skilled in the art, changes may be made to the specific implementation and scope of application according to the ideas of the embodiments of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.