Summary of the invention
To solve the above-mentioned problems, the present invention provides a heart coronary artery image segmentation and recognition method based on deep learning and the optical flow method. The method can be trained end to end and achieves vessel segmentation and recognition on heart coronary angiography images with a certain accuracy. The present invention uses the optical flow method to represent the correlation between adjacent frames and supplies this information to the neural network as an additional input for training, which allows the network to obtain more valuable information and thereby produce better segmentation results.
In order to achieve the above object, the heart coronary artery image segmentation and recognition method based on deep learning and the optical flow method provided by the present invention comprises:
Selecting any two consecutive frames in a segmented heart angiography Dicom video as a training sample and inputting the training sample into a neural network; the neural network, based on the training sample, calculates the optical flow information between the two consecutive frames as the mapping between them, and the optical flow information is also input into the neural network.
The neural network obtains, by deep learning, a third feature map from the previous frame and the optical flow information and a fourth feature map from the current frame; the two are combined into a fifth feature map, which is input into a pyramid module. The pyramid module applies pyramid fusion to the fifth feature map to obtain cardiovascular feature maps of different scales. A deconvolution layer merges the cardiovascular feature maps of different scales together along one dimension by bilinear interpolation, obtaining the heart coronary artery image segmentation and recognition vessel map.
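The data flow described above can be sketched as follows. This is a minimal, runnable illustration under assumed shapes, with random arrays and toy functions standing in for the trained convolutional modules; the names extract_features, warp_with_flow, and combine are illustrative, not identifiers from the invention:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(frame):
    # Stand-in for the convolutional backbone: lift an (H, W) grayscale
    # frame to a (C, H, W) feature map (here C = 3, chosen arbitrarily).
    return np.stack([frame * w for w in (0.5, 1.0, 2.0)])

def warp_with_flow(feat, flow):
    # Stand-in for the packing layer; the real layer resamples `feat`
    # according to the optical flow by (bi)linear interpolation.
    return feat + 0.1 * np.stack([flow[0]] * feat.shape[0])

def combine(third, fourth, w3=0.4, w4=0.6):
    # Combination layer: weighted sum yielding the fifth feature map.
    return w3 * third + w4 * fourth

prev_frame, cur_frame = rng.random((8, 8)), rng.random((8, 8))
flow = rng.random((2, 8, 8))                       # per-pixel (dx, dy)

third = warp_with_flow(extract_features(prev_frame), flow)   # previous frame + flow
fourth = extract_features(cur_frame)                         # current frame
fifth = combine(third, fourth)                               # input to the pyramid module
```

The fifth feature map produced here would then pass through the pyramid fusion and deconvolution stages described in the following paragraphs.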
Further, the method of acquiring the segmented heart angiography Dicom video includes:
Receiving the whole heart angiography Dicom video, corresponding to lesion information, stored in a medical integrated database;
Based on the lesion information, collaboratively analyzing the key feature information appearing in the whole heart angiography Dicom video using an SSN (Structured Segment Network);
Based on the key feature information combined with position information, segmenting the whole Dicom video, and iterating this step until a segmented video meeting the set requirements is found.
Further, the convolutional neural network module is formed by repeatedly stacking multiple identical units, while pre-trained model parameters are read;
Each unit consists, from top to bottom, of a convolutional layer, a batch standardization layer, a shortcut connection layer, and an activation function layer.
Further, the step in which the neural network obtains the third feature map from the previous frame and the optical flow information by deep learning includes:
Taking the previous frame of the two frames and inputting it into a convolution module; the model's computation graph is run, and the first feature map output by the last convolutional layer in the computation is taken out;
Inputting the optical flow information into a convolution module for main feature extraction to obtain a second feature map;
A packing layer receives the first and second feature maps, performs a linear interpolation operation on them, and fuses the effective information of the first and second feature maps to obtain the third feature map.
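The packing layer's interpolation-based fusion is not specified in detail; one common way to combine a previous-frame feature map with optical flow is bilinear warping, sketched below under the assumption of a single-channel (H, W) feature map and a (2, H, W) flow field. The function name warp_bilinear is illustrative:

```python
import numpy as np

def warp_bilinear(feat, flow):
    """Warp an (H, W) feature map by a per-pixel flow (2, H, W) with
    bilinear interpolation; samples outside the map are clamped."""
    H, W = feat.shape
    grid = np.mgrid[0:H, 0:W].astype(float)
    ys, xs = grid[0], grid[1]
    sx = np.clip(xs + flow[0], 0, W - 1)   # where each output pixel samples
    sy = np.clip(ys + flow[1], 0, H - 1)
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx, wy = sx - x0, sy - y0
    top = feat[y0, x0] * (1 - wx) + feat[y0, x1] * wx
    bot = feat[y1, x0] * (1 - wx) + feat[y1, x1] * wx
    return top * (1 - wy) + bot * wy

feat = np.arange(16.0).reshape(4, 4)
# An integer flow of +1 in x makes every pixel sample one column to the right.
flow = np.stack([np.ones((4, 4)), np.zeros((4, 4))])
warped = warp_bilinear(feat, flow)
```

For fractional flow values the same code blends the four neighbouring samples, which is the linear-interpolation behaviour the packing layer relies on.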
Further, the step of obtaining the fourth feature map includes:
The neural network inputs the current frame into a convolution module for feature extraction, and the output of the last convolutional layer is taken as the fourth feature map.
Further, the step of obtaining the fifth feature map includes:
A combination layer multiplies the third feature map and the fourth feature map by different weights respectively and adds them, obtaining the fifth feature map.
Further, the method includes a parameter update step, which includes:
Comparing the output heart coronary artery image segmentation and recognition vessel map with the heart coronary artery image segmentation and recognition vessel map finely annotated by a doctor; the difference yields a loss value, and the parameters of each layer of the neural network are updated by gradient descent. All steps are iterated until the loss value between the vessel map segmented and recognized by the neural network and the doctor's finely annotated vessel map falls below a preset threshold.
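The update rule above — compute a loss against the expert annotation, apply gradient descent, and iterate until the loss falls below a preset threshold — can be illustrated with a toy model in place of the network. All names, shapes, and values here are illustrative stand-ins, not the invention's actual training setup:

```python
import numpy as np

# Toy stand-in for the parameter update step: forward pass, loss against
# the expert ("doctor") annotation, gradient-descent update, iterate until
# the penalty value drops below a preset threshold.
rng = np.random.default_rng(1)
X = rng.random((32, 4))                  # toy inputs
w_expert = np.array([1.0, -2.0, 0.5, 3.0])
y_expert = X @ w_expert                  # plays the role of the fine annotation

w = np.zeros(4)                          # parameters to be learned
lr, threshold = 0.1, 1e-4
for step in range(20000):
    pred = X @ w
    loss = np.mean((pred - y_expert) ** 2)   # penalty value
    if loss < threshold:                     # stop below the preset threshold
        break
    grad = 2.0 * X.T @ (pred - y_expert) / len(X)
    w -= lr * grad                           # gradient-descent update
```

In the invention the same loop runs over the segmentation network's layers instead of a single weight vector, with the loss computed between the predicted and annotated vessel maps.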
Further, the method includes a testing procedure, which includes:
Step 1: reading the captured patient's heart angiography Dicom video file, extracting key frames, inputting them into the neural network, and reading the model parameters corresponding to the position;
Step 2: initializing the neural network, establishing the multilayer neural network structure, and reading the trained model parameters of the corresponding position;
Step 3: the neural network receives the patient's heart angiography Dicom video images, performs vessel segmentation and detection on the input pictures by deep learning, and outputs vessel segmentation and recognition pictures for the key frames of different positions;
Step 4: repeating steps 1 to 3 for different positions until the key frames of all positions have been processed.
The heart coronary artery image segmentation and recognition method based on deep learning and the optical flow method provided by the present invention selects any two consecutive frames in a segmented heart angiography Dicom video as a training sample and inputs the training sample into a neural network; the neural network, based on the training sample, calculates the optical flow information between the two consecutive frames as the mapping between them and also receives this optical flow information as input; by deep learning, the network combines the third feature map obtained from the previous frame and the optical flow information with the fourth feature map obtained from the current frame into a fifth feature map, which is input into a pyramid module; the pyramid module applies pyramid fusion to the fifth feature map to obtain cardiovascular feature maps of different scales; and a deconvolution layer merges these maps together along one dimension by bilinear interpolation, obtaining the heart coronary artery image segmentation and recognition vessel map. This technical solution represents the information between adjacent frames and uses it for the segmentation and recognition task on coronary angiography images, combining the optical flow method with deep learning for that task. It segments and recognizes the different types of vessels in heart angiography images with higher accuracy and realizes a fully end-to-end automatic vessel segmentation and recognition process. It solves the technical problem that the traditional way of segmenting Dicom video files processes frames one by one, without any correlation between frames, which leads to low segmentation precision. This method uses the optical flow method to represent the correlation between frames and adds the change information of the vessels into the segmentation method, so that better results can be obtained.
Embodiment one
Referring to Fig. 1, which shows a flow chart of one embodiment of a heart coronary artery image segmentation and recognition method based on deep learning and the optical flow method, comprising steps S110 to S120:
Step S110: select any two consecutive frames in a segmented heart angiography Dicom video as a training sample and input the training sample into a neural network; the neural network, based on the training sample, calculates the optical flow information between the two consecutive frames as the mapping between them, and the optical flow information is also input into the neural network.
Step S120: the neural network obtains, by deep learning, a third feature map from the previous frame and the optical flow information and a fourth feature map from the current frame; the two are combined into a fifth feature map, which is input into a pyramid module. The pyramid module applies pyramid fusion to the fifth feature map to obtain cardiovascular feature maps of different scales. A deconvolution layer merges the cardiovascular feature maps of different scales together along one dimension by bilinear interpolation, obtaining the heart coronary artery image segmentation and recognition vessel map.
The method of acquiring the segmented heart angiography Dicom video includes: receiving the whole heart angiography Dicom video, corresponding to lesion information, stored in a medical integrated database; based on the lesion information, collaboratively analyzing the key feature information appearing in the whole heart angiography Dicom video using an SSN (Structured Segment Network); and, based on the key feature information combined with position information, segmenting the whole Dicom video and iterating this step until a segmented video meeting the set requirements is found. That is, digital subtraction heart angiography images with relatively clear vessel contours are extracted from the Dicom file, processed into single-channel grayscale pictures, and input into the convolutional neural network.
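The conversion into single-channel grayscale pictures might look as follows; the luminance weights used here are a common convention and an assumption for illustration, since the invention only requires a single-channel input:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an (H, W, 3) RGB frame in [0, 1] to a single-channel
    (H, W) grayscale picture using standard luminance weights
    (an assumption; the invention does not specify the conversion)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

frame = np.zeros((2, 2, 3))
frame[0, 0] = [1.0, 1.0, 1.0]   # one white pixel
gray = to_grayscale(frame)
```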
The heart angiography Dicom video data set consists of the Dicom coronary digital subtraction angiography (Digital Imaging and Communications in Medicine) files of about 100 patients with coronary heart disease. Each patient has multiple Dicom files of different positions, and each Dicom file contains several frames of coronary angiography. Each frame contains different types of vessels, including the left main artery, the left circumflex branch, the left anterior descending branch, collateral branches, the left septal branches, the right coronary artery, and so on. These are exactly the vessels that the present invention needs to segment and recognize. For each frame in the video, doctors carry out fine pixel-level annotation of the vessels in the image. These data are used to train the network model, and the trained model is then used for vessel segmentation and recognition.
The whole network consists of a basic convolutional neural network module, a pyramid module, a packing layer, and a combination layer. Two adjacent frames of digital subtraction heart angiography with relatively clear vessel contours are picked out from the Dicom file, processed into single-channel grayscale pictures, and input into the convolutional neural network.
The convolutional neural network module is formed by repeatedly stacking multiple identical units, while pre-trained model parameters are read; each unit consists, from top to bottom, of a convolutional layer, a batch standardization layer, a shortcut connection layer, and an activation function layer.
The step in which the neural network obtains the third feature map from the previous frame and the optical flow information by deep learning includes: taking the previous frame of the two frames and inputting it into a convolution module, running the model's computation graph, and taking out the first feature map output by the last convolutional layer in the computation; inputting the optical flow information into a convolution module for main feature extraction to obtain a second feature map; and having the packing layer receive the first and second feature maps, perform a linear interpolation operation on them, and fuse their effective information to obtain the third feature map.
The step of obtaining the fourth feature map includes: the neural network inputs the current frame into a convolution module for feature extraction, and the output of the last convolutional layer is taken as the fourth feature map.
The step of obtaining the fifth feature map includes: the combination layer multiplies the third feature map and the fourth feature map by different weights respectively and adds them, obtaining the fifth feature map.
The method further includes a parameter update step, which includes: comparing the output heart coronary artery image segmentation and recognition vessel map with the doctor's finely annotated vessel map; the difference yields a loss value, and the parameters of each layer of the neural network are updated by gradient descent. All steps are iterated until the loss value between the vessel map segmented and recognized by the neural network and the doctor's finely annotated heart coronary artery image segmentation and recognition vessel map falls below a preset threshold.
The method further includes a testing procedure, which includes:
Step 1: reading the captured patient's heart angiography Dicom video file, extracting key frames, inputting them into the neural network, and reading the model parameters corresponding to the position;
Step 2: initializing the neural network, establishing the multilayer neural network structure, and reading the trained model parameters of the corresponding position;
Step 3: the neural network receives the patient's heart angiography Dicom video images, performs vessel segmentation and detection on the input pictures by deep learning, and outputs vessel segmentation and recognition pictures for the key frames of different positions;
Step 4: repeating steps 1 to 3 for different positions until the key frames of all positions have been processed.
Under normal circumstances, training a neural network usually takes three days to one week, and together with running the experiments, time cost is often an important factor to consider. The batch standardization layer is a method that substantially reduces this time cost by accelerating model training. Batch standardization normalizes the features to the same suitable distribution, which accelerates network convergence. The first step of the concrete operation standardizes the input features: the mean of the input feature is subtracted and the result is divided by the square root of its variance. The detailed process may be expressed as:

x̂(k) = (x(k) − E[x(k)]) / √(Var[x(k)])

where x̂(k) denotes the feature after standardization, x(k) denotes the input feature, E[x(k)] denotes the mean of the input feature, and Var[x(k)] denotes the variance of the input feature.
Extracting cardiovascular features should likewise keep time cost low. Therefore, after each convolutional layer, a batch standardization operation is applied to the heart coronary angiography images output by that layer: the features output by the convolutional layer have their mean subtracted and are divided by the square root of their variance. At the same time, each layer's mean and variance need to be stored so that they can be used directly during testing. This gives the heart angiography features after convolution a unified data distribution, which accelerates the task of extracting vessel features.
The second step scales and shifts the standardized features, the purpose being to let the network itself learn an output suited to it. The detailed process may be expressed as:

y(k) = γ(k) · x̂(k) + β(k)

where γ(k) is a learnable scaling parameter and β(k) is a learnable translation parameter.
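Both steps can be sketched together as a small batch-standardization routine, including the storage of each layer's running mean and variance for direct reuse at test time as described above. The interface below is an assumption for illustration, not the invention's actual implementation:

```python
import numpy as np

def batch_standardize(x, gamma, beta, running, momentum=0.9,
                      eps=1e-5, training=True):
    """First step: subtract the mean and divide by the square root of the
    variance; second step: scale by gamma and shift by beta.  `running`
    stores (mean, var) so they can be used directly at test time."""
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
        running[0] = momentum * running[0] + (1 - momentum) * mean
        running[1] = momentum * running[1] + (1 - momentum) * var
    else:
        mean, var = running                    # reuse stored statistics
    x_hat = (x - mean) / np.sqrt(var + eps)    # standardization
    return gamma * x_hat + beta                # learnable scale and shift

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
running = [np.zeros(2), np.ones(2)]
y = batch_standardize(x, gamma=np.ones(2), beta=np.zeros(2), running=running)
```

With gamma = 1 and beta = 0 the output features have zero mean and unit variance per channel, which is the unified data distribution the text refers to.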
Further, the shortcut connection layer receives the output of the batch standardization layer, adds the input of the convolutional layer and the output of the batch standardization layer by weight to obtain a feature map, and outputs this feature map to the activation function layer. The entire neural network is formed by connecting several such shortcut connection layers.
The deeper the neural network, the higher the dimension of the features it can learn, so the number of layers has a great influence on the network. But as the network becomes deeper and deeper, a deeper model has difficulty expressing low-dimensional features, and problems such as gradient explosion and gradient vanishing appear. The shortcut connection unit is a method for solving these problems. Let H(X) = F(X) + X; in the extreme case where F(X) learns nothing, i.e. F(X) = 0, we have H(X) = X. This ensures that shallow features are passed backward and that the features learned by the whole network will not be too poor. The shortcut connection unit is used in the feature extraction task for heart coronary angiography images: it lets the model itself decide which feature dimensions it wants to extract, retains useful low-dimensional cardiovascular features as far as possible, and thereby solves the gradient explosion and gradient vanishing problems.
The entire shortcut connection process can be expressed as:

y = F(x, {Wi}) + x

where y denotes the output feature, x denotes the input feature, F(x, {Wi}) denotes the residual mapping function to be trained, and Wi denotes the weights of this layer.
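A minimal numerical sketch of the shortcut connection, including the extreme case F(x) = 0 discussed above; the two-matrix residual branch is an illustrative simplification of the convolutional unit:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def shortcut_block(x, w1, w2):
    # y = F(x, {Wi}) + x: the residual branch F plus the identity path.
    return relu(x @ w1) @ w2 + x

x = np.ones((1, 4))
# Extreme case from the text: F learns nothing, F(x) = 0, so H(x) = x.
y_id = shortcut_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
# A non-trivial residual branch simply adds its output to the identity path.
y_res = shortcut_block(x, np.eye(4), np.eye(4))
```

Because the identity path always passes the input through unchanged, gradients can flow back through it even when the residual branch contributes nothing, which is the mechanism that mitigates the vanishing-gradient problem.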
If these linear convolution units were simply connected into a network, the final effect would be no better than a single convolution unit. Therefore an activation function layer needs to be introduced in actual use, as shown in Fig. 2, which is the image of the activation function. The detailed process may be expressed as:

y = G(x)

where y is the output feature, x is the input feature, and G is the activation function.
During testing, the mean-subtraction and variance-division operation is also applied to the convolved heart coronary artery vessel images, guaranteeing that the distribution of the heart coronary artery image features is consistent between testing and training.
Further, the pyramid module receives the fifth feature map and, using the pyramid fusion method, first performs convolution operations on the fifth feature map and outputs cardiovascular feature maps of different scales; the cardiovascular feature maps of different scales are input into the deconvolution layer.
The pyramid module fuses features of four different scales extracted from the heart coronary artery image: the coarsest heart coronary artery image features together with pooled heart coronary artery image features at three other scales. To guarantee the weight of the global features, if the pyramid has N levels in total, a 1x1 convolution is applied after each level to reduce that level's channels to 1/N of the original. The feature maps are then restored to the un-pooled size by bilinear interpolation and finally merged together along one dimension.
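The N-level fusion can be sketched under simplifying assumptions: average pooling stands in for the pooled features, keeping the first C/N channels stands in for the 1x1 convolution, and nearest-neighbour repetition stands in for the bilinear resize back to the un-pooled size. All function names and the level choices are illustrative:

```python
import numpy as np

def avg_pool(feat, out):
    # Average-pool (C, H, W) to (C, out, out); assumes H, W divisible by out.
    C, H, W = feat.shape
    return feat.reshape(C, out, H // out, out, W // out).mean(axis=(2, 4))

def upsample(feat, H, W):
    # Nearest-neighbour stand-in for the bilinear resize to the un-pooled size.
    C, h, w = feat.shape
    return feat.repeat(H // h, axis=1).repeat(W // w, axis=2)

def pyramid_fuse(feat, levels=(1, 2, 4, 8)):
    C, H, W = feat.shape
    keep = max(1, C // len(levels))          # 1x1-conv stand-in: C/N channels
    branches = [feat]
    for n in levels:
        branches.append(upsample(avg_pool(feat, n)[:keep], H, W))
    return np.concatenate(branches, axis=0)  # merge along the channel dimension

feat = np.random.default_rng(2).random((8, 8, 8))
fused = pyramid_fuse(feat)
```

The 1-level branch pools the whole map to a single value per channel, which is how the coarsest, most global context enters the fused representation.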
The deconvolution layer receives the cardiovascular feature maps of different scales, enlarges them to the same size by bilinear interpolation, and finally merges them together along one dimension to obtain the heart coronary artery image segmentation and recognition vessel map.
Further, the method includes a parameter update step, which includes:
Comparing the output heart coronary artery image segmentation and recognition vessel map with the doctor's finely annotated vessel map; the difference yields a loss value, and the parameters of each layer of the neural network are updated by gradient descent. All steps are iterated until the loss value between the vessel map segmented and recognized by the neural network and the doctor's fine annotation falls below a preset threshold.
In one preferred embodiment, the laboratory hardware is an Intel Xeon CPU E5-2630 v4 and an NVIDIA GTX 1080Ti GPU operating under collaborative control.
One: Reading the data
Step 1: receive the whole heart angiography Dicom video, corresponding to the stored lesion information, from the medical integrated database.
Step 2: based on the lesion information, use the SSN to collaboratively analyze the key feature information appearing in the whole heart angiography Dicom video.
Step 3: based on the key feature information combined with position information, segment the whole Dicom video, and iterate this step until a segmented video meeting the set requirements is found.
Step 4: select any two consecutive frames in the video segment as a training sample and input them into the neural network module.
Step 5: calculate the optical flow information between the two selected adjacent frames as the mapping between them, and also input these optical flows into the neural network module as input for later training.
Two: Training the network to segment and recognize vessels
Step 1: initialize the neural network module and establish the multilayer neural network structure, formed by repeatedly stacking identical units; within a unit, from top to bottom, are a convolutional layer, a batch standardization layer, a shortcut connection layer, and an activation function layer. Pre-trained model parameters are read at the same time.
Step 2: take the previous frame of the two frames and input it into the convolution module; the model's computation graph is run, and the first feature map of the last convolutional layer in the computation is taken out as input for later training.
Step 3: after the neural network module receives the current frame image, the optical flow information between the two adjacent frames, and the first feature map input in the previous step, training of the model begins.
Step 4: the neural network module first needs to transform the original optical flow, i.e. input it into a convolution module for main feature extraction, obtain the second feature map, and input it into the packing layer.
Step 5: the packing layer receives the first and second feature maps; the first feature map represents the main information of the previous frame, and the second feature map represents the main features of the optical flow between the two frames. The packing layer performs a linear interpolation operation on the two feature maps and fuses their effective information to obtain the third feature map, which is input into the combination layer.
Step 6: the neural network module receives the current frame image and needs to perform feature extraction on it, so it is input into a convolution module; the output of the last convolutional layer is taken as the fourth feature map, which is input into the combination layer.
Step 7: the combination layer receives the third and fourth feature maps, where the third feature map represents the effective combination of the optical flow information and the previous frame's information, and the fourth feature map represents the effective information of the current frame. The combination layer multiplies the third and fourth feature maps by different weights respectively and adds them to obtain the fifth feature map, which is input into the pyramid module.
Step 8: the pyramid module receives the fifth feature map and, using the pyramid fusion method, first performs convolution operations on the feature map and outputs cardiovascular feature maps of four different scales, which are input into the deconvolution layer.
Step 9: the deconvolution layer receives the cardiovascular feature maps of four different scales, enlarges them to the same size by bilinear interpolation, and finally merges them together along one dimension. This yields the final segmentation and recognition vessel map.
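Step 9 can be sketched as follows, with nearest-neighbour repetition standing in for bilinear interpolation and random arrays standing in for the four multi-scale feature maps; the per-pixel argmax over assumed vessel classes is added only to illustrate how a segmentation/recognition map falls out of the merged features:

```python
import numpy as np

def resize(feat, H, W):
    # Nearest-neighbour stand-in for bilinear interpolation to (H, W).
    c, h, w = feat.shape
    return feat.repeat(H // h, axis=1).repeat(W // w, axis=2)

rng = np.random.default_rng(3)
# Four cardiovascular feature maps at different scales, 2 channels each
# (the channel count is an illustrative assumption).
scales = [rng.random((2, s, s)) for s in (2, 4, 8, 16)]
# Enlarge all maps to the same size and merge along the channel dimension.
merged = np.concatenate([resize(f, 16, 16) for f in scales], axis=0)
label_map = merged.argmax(axis=0)   # per-pixel index over assumed classes
```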
Step 10: compare the final output segmentation and recognition vessel map with the doctor's finely annotated picture; the difference yields a loss value, after which the parameters of each layer of the neural network are updated by gradient descent.
Step 11: iterate steps 2 to 10 until the loss value between the vessel map segmented and recognized by the neural network and the doctor's fine annotation falls below a preset threshold.
Step 12: store the model parameters and the neural network structure after training, so that they can be used in the later testing process.
Step 13: train on the data of the different positions and store the corresponding model parameters.
Three: Testing the network to segment and detect vessels
Step 1: read the captured patient's Dicom file, extract the key consecutive frames, input them into the neural network module, and read the model parameters corresponding to the position.
Step 2: initialize the neural network module, establish the multilayer neural network structure, and read the previously trained model parameters of the corresponding position.
Step 3: the neural network receives the digital subtraction angiography images and, by deep learning, performs vessel segmentation and detection on the input pictures, outputting the vessel segmentation and recognition pictures of the key frames of different positions.
Step 4: repeat the above steps 1 to 3 for different positions until the key frames of all positions have been processed.
Embodiment one of the present invention provides a heart coronary artery image segmentation and recognition method based on deep learning and the optical flow method. Any two consecutive frames in a segmented heart angiography Dicom video are chosen as a training sample and input into a neural network; the neural network, based on the training sample, calculates the optical flow information between the two consecutive frames as the mapping between them and also receives this optical flow information as input; by deep learning, the network combines the third feature map obtained from the previous frame and the optical flow information with the fourth feature map obtained from the current frame into a fifth feature map, which is input into the pyramid module; the pyramid module applies pyramid fusion to the fifth feature map to obtain cardiovascular feature maps of different scales; and the deconvolution layer merges these maps together along one dimension by bilinear interpolation, obtaining the heart coronary artery image segmentation and recognition vessel map. This technical solution represents the information between adjacent frames and uses it for the segmentation and recognition task on coronary angiography images, combining the optical flow method with deep learning for that task. It segments and recognizes the different types of vessels in heart angiography images with higher accuracy and realizes an end-to-end automatic vessel segmentation and recognition process. It solves the technical problem that the traditional way of segmenting Dicom video files processes frames one by one, without any correlation between frames, which leads to low segmentation precision. This method uses the optical flow method to represent the correlation between frames and adds the change information of the vessels into the segmentation method, so that better results can be obtained and the segmentation accuracy is improved.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the advantages or disadvantages of the embodiments.
The above description covers only specific embodiments, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can easily think of changes or replacements within the technical scope disclosed by the present invention, and all such changes or replacements should be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention should be based on the protection scope of the claims.