
A method and system for automatic tumor segmentation in CT images

Info

Publication number
CN109754403A
Authority
CN
China
Prior art keywords
data
layer
image
enhancing
constructed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811440970.8A
Other languages
Chinese (zh)
Inventor
贾富仓
方驰华
初陈曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Southern Medical University Zhujiang Hospital
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Southern Medical University Zhujiang Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS, Southern Medical University Zhujiang Hospital
Priority to CN201811440970.8A
Publication of CN109754403A
Priority to PCT/CN2019/121594 (WO2020108562A1)
Legal status: Pending (current)

Abstract

The invention discloses a method and system for automatic tumor segmentation in CT images, belonging to the field of medical image processing and intended for segmenting tumor lesion regions in CT images, in order to solve the problem of low segmentation precision for CT images. The method comprises: performing data augmentation and expansion on the original image data to obtain augmented data; normalizing the augmented data to obtain normalized data; feeding the normalized data into a trained processing network to obtain a segmented image; and performing noise reduction on the segmented image. The method reduces the differences between original images caused by scanning with different CT machines and improves the applicability and precision of the results of the processing network.

Description

A method and system for automatic tumor segmentation in CT images
Technical field
The present invention relates to the technical field of medical image processing, and more particularly to a method and system for automatic tumor segmentation in CT images.
Background technique
Liver tumors are a major disease threatening human health, and the early detection, accurate measurement, and examination of liver tumors are of great significance to clinical diagnosis and treatment. CT (Computed Tomography), as a cheap and effective detection modality, has increasingly become a routine means for the clinical diagnosis of liver tumors. Fast and accurate segmentation of tumor lesion regions from liver CT images is not only helpful for planning the surgical procedure, but also of great practical value for the precise intraoperative localization and resection of the tumor region and for the assessment of postoperative chemoradiotherapy effects.
How to segment tumor lesion regions from liver CT images quickly and accurately is a major topic studied by doctors and scholars. At present there are studies on automatic liver tumor segmentation methods based on deep learning frameworks, such as convolutional neural networks (Convolutional Neural Network, CNN) and fully convolutional networks (Fully Convolutional Network, FCN).
However, the network structure of CNN is rather monotonous, and the addition of fully connected layers makes the overall training parameters of the network very large, the computation relatively complex, the amount of information huge, and the network training time long, while the segmentation precision remains poor. The overall segmentation precision of FCN, improved on this basis, is still low, and its pixel-based classification does not take the relationship between pixels into account and therefore lacks spatial consistency. Some segmentation network architectures have since been proposed on the basis of these classic network frameworks, but their segmentation precision is still to be improved.
Summary of the invention
The main purpose of the present invention is to provide a method and system for automatic tumor segmentation in CT images, intended to solve the technical problem in the prior art that the segmentation precision for CT images is low.
To achieve the above object, a first aspect of the present invention provides a method for automatic tumor segmentation in CT images, comprising: performing data augmentation and expansion on the original image data to obtain augmented data; normalizing the augmented data to obtain normalized data; feeding the normalized data into a trained processing network to obtain a segmented image; and performing noise reduction on the segmented image.
Further, performing data augmentation and expansion on the original image data comprises: augmenting and expanding the original image data based on a translation and rotation principle, or augmenting and expanding the original image data based on a random elastic deformation principle.
Further, normalizing the augmented data comprises: normalizing the augmented data and the liver tumor gold standard according to a linear normalization principle to obtain linearly normalized image data; and normalizing the data distribution of the linearly normalized image data to obtain the normalized data.
Further, the training method of the trained processing network comprises: constructing first convolutional layers, first rectified linear unit (ReLU) layers, pooling layers, dropout layers, and down-sampling layers to form a contracting path; extracting and encoding the noise-reduced data through the first convolutional layers, the first ReLU layers, the pooling layers, the dropout layers, and the down-sampling layers to generate encoded data; constructing second convolutional layers, second ReLU layers, and up-sampling layers to form an expanding path; decoding and segmenting the encoded sample data through the second convolutional layers, the second ReLU layers, and the up-sampling layers to generate decoded data; and constructing a probability output layer to output the decoded data.
Further, constructing the first convolutional layers, the first ReLU layers, the pooling layers, the dropout layers, and the down-sampling layers comprises: successively constructing three first 3×3 convolutional layers and then two first 2×2 convolutional layers, the feature channel numbers of the successively constructed first 3×3 and first 2×2 convolutional layers doubling successively starting from 64; constructing a ReLU layer after each first 3×3 convolutional layer; constructing a pooling layer between adjacent first 3×3 convolutional layers, between adjacent first 2×2 convolutional layers, and between the adjacent first 3×3 and first 2×2 convolutional layers; constructing a first dropout layer after the first of the first 2×2 convolutional layers constructed after the first 3×3 convolutional layers, and constructing the second first 2×2 convolutional layer after this dropout layer; and constructing a second dropout layer after the second first 2×2 convolutional layer. Constructing the second convolutional layers, the second ReLU layers, and the up-sampling layers comprises: successively constructing four second convolution blocks after the second dropout layer, each block consisting of an up-sampling layer and two second 3×3 convolutional layers, the feature channel numbers of the successively constructed second convolution blocks starting from that of the second first 2×2 convolutional layer and halving successively; and constructing the up-sampling layer before each pair of second 3×3 convolutional layers.
Further, the contracting path and the expanding path are arranged in mirror symmetry and are concatenated at mirrored levels.
Further, the training method of the trained processing network further comprises constructing an adversarial network, which comprises: constructing first data, which take the segmentation gold standard as the standard reference and are obtained by the dot product of the binary liver tumor gold standard and the original liver gray-level image; constructing second data, which are obtained by the dot product of the segmented image and the original liver gray-level image; and constructing a loss function and feeding the first data and the second data into the loss function, so as to capture pixel-level spatial features from different levels.
A second aspect of the present invention provides a system for automatic tumor segmentation in CT images, comprising: an augmentation and expansion module for augmenting and expanding the original image data to obtain augmented data; a normalization module for normalizing the augmented data to obtain normalized data; an image segmentation module for inputting the noise-reduced data into the trained processing network to obtain a segmented image; and a noise reduction module for performing noise reduction on the segmented image.
A third aspect of the present invention provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements any one of the methods described above.
A fourth aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements any one of the methods described above.
The present invention provides a method for automatic tumor segmentation in CT images, with the following beneficial effects: by augmenting and expanding the original image data, the amount of information is enriched without affecting the processing network's learning of the real information while the robustness requirement for varying gray-level information is satisfied, so that the generalization of the segmentation model is improved, the segmentation model can be applied to wider data sets, and the scope of application of the processing network is enlarged; since the gray values of the original images obtained from different patients differ greatly, normalization facilitates the segmentation network's processing of the image data, thereby reducing the differences between original images caused by scanning with different CT machines and improving the applicability and precision of the results of the processing network.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural block diagram of the automatic tumor segmentation method in CT images according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the trained processing network in the automatic tumor segmentation method in CT images according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the adversarial network in the automatic tumor segmentation method in CT images according to an embodiment of the present invention;
Fig. 4 is a schematic structural block diagram of the electronic device according to an embodiment of the present invention.
Specific embodiment
In order to make the purpose, features, and advantages of the present invention more obvious and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a method for automatic tumor segmentation in CT images comprises: S1, performing data augmentation and expansion on the original image data to obtain augmented data; S2, normalizing the augmented data to obtain normalized data; S3, feeding the normalized data into a trained processing network to obtain a segmented image; S4, performing noise reduction on the segmented image.
Performing data augmentation and expansion on the original image data comprises: augmenting and expanding the original image data based on a translation and rotation principle, or augmenting and expanding the original image data based on a random elastic deformation principle.
Since the original image data are relatively monotonous, containing data information in only a single upright orientation, their information richness is insufficient for the processing network to be trained, and such monotonous data information leads to weak generalization of what the processing network learns. The original image data therefore need to be expanded and augmented to obtain augmented data and to enhance the generalization of the processing network. In this embodiment, the data augmentation follows an invariance principle, and the concrete operations satisfying it are translation, rotation, and elastic deformation. Processing the original data in this way not only preserves the robustness of the training data with respect to the gray-level information of the original images, but also enriches the amount of information without affecting the processing network's learning of the real information, enhancing the generalization of the processing network and enabling it to be applied to wider data sets. In this embodiment, the original image data are augmented and expanded by translation, rotation, and elastic deformation to obtain the augmented data.
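The patent text does not give a concrete implementation of these operations. As an illustration only, the following Python sketch shows one way the translation, rotation, and random elastic deformation of a 2D CT slice could be performed; the function name, the parameter values, and the use of scipy are assumptions, not part of the disclosure.

```python
# Illustrative augmentation sketch; parameter ranges are assumed, not from the patent.
import numpy as np
from scipy.ndimage import shift, rotate, gaussian_filter, map_coordinates

def augment_slice(img, rng=np.random.default_rng(0)):
    """Return translated, rotated, and elastically deformed copies of a 2D CT slice."""
    augmented = []
    # Translation: random shift of a few pixels along each axis.
    augmented.append(shift(img, shift=rng.integers(-10, 10, size=2), order=1, mode="nearest"))
    # Rotation: small random angle, keeping the original image size.
    augmented.append(rotate(img, angle=rng.uniform(-15, 15), reshape=False, order=1, mode="nearest"))
    # Random elastic deformation: smoothed random displacement fields.
    alpha, sigma = 30.0, 4.0   # deformation strength / smoothness (assumed values)
    dx = gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]), indexing="ij")
    augmented.append(map_coordinates(img, [y + dy, x + dx], order=1, mode="nearest"))
    return augmented
```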
Normalizing the augmented data comprises: normalizing the augmented data and the liver tumor gold standard according to a linear normalization principle to obtain linearly normalized image data; and normalizing the data distribution of the linearly normalized image data to obtain the normalized data.
Because of differences between CT devices, the gray values of the CT images scanned from patients differ considerably. In this embodiment, the augmented data and the liver tumor gold standard are normalized, which facilitates the training of the processing network and reduces the gray-value differences between CT images caused by different CT devices. Specifically, the gray data are linearly normalized to the [0, 255] range; the linear normalization formula is:

X_norm = 255 × (X − X_min) / (X_max − X_min)

where X_norm is the normalized data, X is the augmented data, and X_max and X_min are respectively the maximum and minimum values in the augmented data set. In this embodiment, before being input to the processing network, the liver tumor data whose gray levels are binarized to 0/255 are standardized to 0-1 and used as the tumor gold standard: the data are divided by 255 and thresholded at 0.5, values above 0.5 being set to 1 and values below 0.5 being set to 0.
After the augmented data have been linearly normalized, the data distribution is normalized as follows: before being input to the processing network, the liver data with gray levels in the [0, 255] range are standardized to zero mean, i.e. the augmented and expanded data set is normalized to a data set following a normal distribution with mean 0 and variance 1. The data-distribution normalization formula is:

X' = (X − μ) / σ

where μ and σ are respectively the mean and the standard deviation of the original data set.
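As a hedged illustration of the two normalization steps and the gold-standard binarization just described (the helper names and the small epsilon guards are ours, not the patent's):

```python
import numpy as np

def normalize_ct(volume):
    """Linearly rescale gray values to [0, 255], then standardize to zero mean, unit variance."""
    x = volume.astype(np.float32)
    x = 255.0 * (x - x.min()) / (x.max() - x.min() + 1e-8)   # linear normalization
    return (x - x.mean()) / (x.std() + 1e-8)                  # zero-mean / unit-variance

def binarize_gold_standard(mask_0_255):
    """Map a 0/255 binary tumor mask to a 0/1 gold standard with a 0.5 threshold."""
    return (mask_0_255.astype(np.float32) / 255.0 > 0.5).astype(np.float32)
```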
There is a certain amount of noise interference when the original image data are processed; the image features of this noise are fairly conspicuous while its proportion in the original image is small, so noise reduction needs to be applied to the segmented image. In this embodiment, the interfering noise is removed by filtering, and the filtering operation is performed with the specialized medical image processing library SimpleITK. Data statistics based on morphological information show that noise appears when the Feret diameter is smaller than 7; at that size the corresponding perimeter and the number of occupied pixels are both minimal, and the influence stays within the noise range, so such components can be filtered out without affecting the recognition of other tumors. Therefore, FeretDiameter = 7 is taken as the threshold and components smaller than this value are filtered out, achieving noise-reduction filtering and making the final segmented image more accurate.
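A minimal sketch of this filtering step is given below, assuming SimpleITK's connected-component and label-shape statistics filters; only the FeretDiameter = 7 threshold comes from the text, while the function name and the surrounding pipeline are assumptions.

```python
import SimpleITK as sitk
import numpy as np

def filter_small_components(binary_mask_img, min_feret=7.0):
    """Drop connected components whose Feret diameter is below the threshold."""
    components = sitk.ConnectedComponent(binary_mask_img)
    stats = sitk.LabelShapeStatisticsImageFilter()
    stats.ComputeFeretDiameterOn()            # Feret diameter is not computed by default
    stats.Execute(components)
    keep = [l for l in stats.GetLabels() if stats.GetFeretDiameter(l) >= min_feret]
    # Rebuild a binary mask containing only the surviving components.
    comp_arr = sitk.GetArrayFromImage(components)
    out_arr = np.isin(comp_arr, keep).astype(np.uint8)
    out = sitk.GetImageFromArray(out_arr)
    out.CopyInformation(binary_mask_img)
    return out
```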
The training method of the trained processing network comprises: constructing first convolutional layers, first ReLU layers, pooling layers, dropout layers, and down-sampling layers to form a contracting path; extracting and encoding the sample noise-reduced data through the first convolutional layers, the first ReLU layers, the pooling layers, the dropout layers, and the down-sampling layers to generate encoded data; constructing second convolutional layers, second ReLU layers, and up-sampling layers to form an expanding path; decoding and segmenting the encoded data through the second convolutional layers, the second ReLU layers, and the up-sampling layers to generate decoded data; and constructing a probability output layer to output the decoded data.
The method of producing the sample noise-reduced data comprises: performing data augmentation and expansion on the sample image data to obtain sample augmented data; normalizing the sample augmented data to obtain sample normalized data; and performing noise reduction on the sample normalized data to obtain the sample noise-reduced data. The augmentation and expansion of the sample image data is consistent with that of the original image data, the normalization of the sample augmented data is consistent with that of the augmented data, and the noise reduction of the sample normalized data is consistent with that of the normalized data.
Constructing the first convolutional layers, the first ReLU layers, the pooling layers, the dropout layers, and the down-sampling layers comprises: successively constructing three first 3×3 convolutional layers and then two first 2×2 convolutional layers, the feature channel numbers of the successively constructed first 3×3 and first 2×2 convolutional layers doubling successively starting from 64; constructing a ReLU layer after each first 3×3 convolutional layer; constructing a pooling layer between adjacent first 3×3 convolutional layers, between adjacent first 2×2 convolutional layers, and between the adjacent first 3×3 and first 2×2 convolutional layers; constructing a first dropout layer after the first of the first 2×2 convolutional layers that follows the first 3×3 convolutional layers, and constructing the second first 2×2 convolutional layer after this dropout layer; and constructing a second dropout layer after the second first 2×2 convolutional layer. Constructing the second convolutional layers, the second ReLU layers, and the up-sampling layers comprises: successively constructing four second convolution blocks after the second dropout layer, each block consisting of an up-sampling layer and two second 3×3 convolutional layers, the feature channel numbers of the successively constructed second convolution blocks starting from that of the second first 2×2 convolutional layer and halving successively; and constructing the up-sampling layer before each pair of second 3×3 convolutional layers.
Referring to Fig. 2, specifically, the processing network has an overall U-shaped symmetric structure and is divided into an encoding stage and a decoding stage. The encoding stage is the feature-extraction part, accomplished by the contracting path, which is mainly the left part of the U-shaped network; the contracting path operates essentially like a classic convolutional neural network and is divided into 5 blocks. The first three blocks each consist of two consecutive 3×3 convolutions for feature extraction (each convolution followed by a ReLU) and one 2×2 max-pooling operation for down-sampling; the feature channel number of the initial image starts at 64, and after every down-sampling operation the feature channel number of the image is doubled. The last two blocks additionally introduce dropout layers on top of the preceding block structure to prevent over-fitting during network training. Meanwhile, in the contracting path, the idea of residual networks is adopted in the convolutional feature-extraction part of each block: through a shortcut based on pixel-wise addition, the raw information passed through a 1×1 convolution is directly connected to the output of the convolution operations, keeping the channel numbers of the two consistent; this supplements and increases the network's information transmission capacity and improves the network's ability to learn features. The decoding stage is the feature-restoration part, accomplished mainly by the expanding path, which is mainly the right part of the U-shaped network; it can be divided into 4 blocks plus one final probability output layer (sigmoid). Each of these four blocks consists of one up-sampling layer (realized by a 2×2 deconvolution) and two 3×3 convolutions (each followed by a ReLU), and with every deconvolution up-sampling the feature channel number of the image is halved. Finally, a 1×1 sigmoid layer outputs the probability map to which each pixel belongs.
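Since the patent names no framework, the following PyTorch sketch is only one possible rendering of the structure just described: five encoder blocks of two 3×3 convolutions with ReLU and a 1×1 residual shortcut, 2×2 max-pooling with the channel count doubling from 64, dropout in the last two encoder blocks, four decoder blocks of a 2×2 deconvolution plus two 3×3 convolutions with the channel count halving and mirror-level concatenation, and a final 1×1 sigmoid output. Class names, the dropout rate, and the padding choice are assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 conv + ReLU layers with a 1x1 residual shortcut, as in the contracting path."""
    def __init__(self, in_ch, out_ch, dropout=0.0):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)   # pixel-wise addition needs equal channels
        self.drop = nn.Dropout2d(dropout) if dropout > 0 else nn.Identity()

    def forward(self, x):
        return self.drop(self.body(x) + self.shortcut(x))

class USegNet(nn.Module):
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8, base * 16]          # 64 ... 1024
        drops = [0.0, 0.0, 0.0, 0.5, 0.5]                              # dropout in last two blocks
        self.encoders = nn.ModuleList(
            ConvBlock(in_ch if i == 0 else chs[i - 1], chs[i], drops[i]) for i in range(5))
        self.pool = nn.MaxPool2d(2)
        self.ups = nn.ModuleList(nn.ConvTranspose2d(chs[i], chs[i - 1], 2, stride=2)
                                 for i in range(4, 0, -1))
        self.decoders = nn.ModuleList(ConvBlock(chs[i - 1] * 2, chs[i - 1])
                                      for i in range(4, 0, -1))
        self.head = nn.Sequential(nn.Conv2d(base, 1, 1), nn.Sigmoid())  # per-pixel probability

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < 4:
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))                    # mirror-level concatenation
        return self.head(x)
```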
The contracting path and the expanding path are arranged in mirror symmetry and are concatenated at mirrored levels; this mirror-level concatenation supplements the parts where information was lost and further enriches the network information.
The training method of the trained processing network further comprises constructing an adversarial network, which comprises: constructing first data, which take the segmentation gold standard as the standard reference and are obtained by the dot product of the binary liver tumor gold standard and the original liver gray-level image; constructing second data, which are obtained by the dot product of the segmented image and the original liver gray-level image; and constructing a loss function and feeding the first data and the second data into the loss function, so as to capture pixel-level spatial features from different levels.
Referring to Fig. 3, specifically, the input of the adversarial network architecture can be strictly divided into two parts. One part is the segmentation gold standard, which serves as the standard reference input and is obtained by multiplying (dot product) the provided binary liver tumor gold standard (ground truth) with the original liver gray-level image; it is denoted label_mask. The other part is the input from the prediction of the segmentation network: the final binary predicted segmentation result of the segmentation network is multiplied (dot product) with the original liver gray-level image and denoted output_mask. The network structure of the adversarial network is similar to the encoding part of the segmentation network; the network takes label_mask and output_mask as inputs, and its loss function is set to MAE (Mean Absolute Error). This loss function can well capture pixel-level spatial features from different levels (including high-, middle-, and low-level information), thereby realizing a multi-level comparative correction of image feature information. The adversarial network loss function computes the gap between the standard mask and the predicted mask and, combined with the loss function of the segmentation network, serves jointly as the adjustment function of the final segmentation network, so that the adversarial network provides feedback adjustment and updating of the weights of the generated segmentation model and achieves further optimization.
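In the same sketching style, the two inputs and the multi-level feature extraction of the adversarial branch could look as follows; the Critic class, its depth, and its channel widths are assumptions rather than the patent's exact adversarial network.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Encoder-style adversarial network that exposes the features of every level."""
    def __init__(self, in_ch=1, base=64, levels=4):
        super().__init__()
        blocks, ch = [], in_ch
        for i in range(levels):
            out = base * (2 ** i)
            blocks.append(nn.Sequential(nn.Conv2d(ch, out, 3, padding=1),
                                        nn.ReLU(inplace=True), nn.MaxPool2d(2)))
            ch = out
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)          # keep low-, mid-, and high-level feature maps
        return feats

def masked_inputs(image, gt_binary, pred_prob):
    """label_mask and output_mask: dot product of each mask with the original gray image."""
    label_mask = image * gt_binary     # gold standard times original liver image
    output_mask = image * pred_prob    # the text describes a binarized prediction; a soft
    return label_mask, output_mask     # probability map keeps gradients flowing to the generator
```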
The loss function is built using the Dice coefficient as the evaluation measure; its formula is:

l_dice = (2 · |s1 ∩ s2| + smooth) / (|s1| + |s2| + smooth)

where s1 and s2 are respectively the ground-truth value and the predicted value, and smooth is a parameter, similar to an infinitesimal variable, introduced to increase the smoothness of the fitted curve and make the function smoother.
The loss function of the adversarial network is MAE (Mean Absolute Error):

l_mae = (1/L) · Σ_{i=1}^{L} ‖ f_i(x_gt) − f_i(x_pred) ‖_1

where L is the total number of layers of the adversarial network, f_i(x_gt) is the feature-extraction image of the i-th layer of the network for the input gold-standard mask, and f_i(x_pred) is the feature-extraction image of the i-th layer of the network for the input predicted segmentation mask. This loss function can well capture pixel-level spatial features from different levels (including high-, middle-, and low-level information), thereby realizing a multi-level comparative correction of image feature information.
The overall loss function is: loss = l_mae − l_dice.
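Putting the three formulas together, a hedged sketch of the Dice coefficient, the multi-level MAE, and the combined loss loss = l_mae − l_dice (the function names are ours):

```python
import torch

def dice_coefficient(pred, target, smooth=1.0):
    """2*|s1 ∩ s2| / (|s1| + |s2|), smoothed to keep the curve differentiable."""
    inter = (pred * target).sum()
    return (2.0 * inter + smooth) / (pred.sum() + target.sum() + smooth)

def multilevel_mae(feats_gt, feats_pred):
    """Mean absolute error averaged over the L levels of the adversarial network."""
    return sum(torch.mean(torch.abs(a - b)) for a, b in zip(feats_gt, feats_pred)) / len(feats_gt)

def total_loss(pred, target, feats_gt, feats_pred):
    return multilevel_mae(feats_gt, feats_pred) - dice_coefficient(pred, target)
```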
An embodiment of the present application provides a system for automatic tumor segmentation in CT images, comprising: an augmentation and expansion module for augmenting and expanding the original image data to obtain augmented data; a normalization module for normalizing the augmented data to obtain normalized data; an image segmentation module for inputting the noise-reduced data into the trained processing network to obtain a segmented image; and a noise reduction module for performing noise reduction on the segmented image.
An embodiment of the present application provides an electronic device. Referring to Fig. 4, the electronic device comprises: a memory 601, a processor 602, and a computer program stored in the memory 601 and executable on the processor 602. When the processor 602 executes the computer program, the method for automatic tumor segmentation in CT images described in the foregoing embodiments of Fig. 1 to Fig. 4 is realized.
Further, the electronic device further comprises: at least one input device 603 and at least one output device 604.
The above memory 601, processor 602, input device 603, and output device 604 are connected by a bus 605.
The input device 603 may specifically be a camera, a touch panel, a physical button, a mouse, or the like. The output device 604 may specifically be a display screen.
The memory 601 may be a high-speed random access memory (RAM, Random Access Memory), or a non-volatile memory (non-volatile memory), such as a magnetic disk memory. The memory 601 is used to store a set of executable program code, and the processor 602 is coupled to the memory 601.
Further, an embodiment of the present application also provides a computer-readable storage medium, which may be provided in the electronic device of any of the above embodiments and may be the memory 601 in the embodiment shown in Fig. 4. A computer program is stored on the computer-readable storage medium, and when the program is executed by the processor 602, the method for automatic tumor segmentation in CT images described in the foregoing method embodiments is realized.
Further, the computer-readable storage medium may also be a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a RAM, a magnetic disk, an optical disk, or any other medium that can store program code.
In the several embodiments provided in this application, it should be understood that the disclosed method may be implemented in other ways. For example, the described division into modules is only a division by logical function; in actual implementation there may be other ways of dividing, for example multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
It should be noted that, for the sake of simple description, the foregoing method embodiments are all expressed as a series of combinations of actions, but those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
The above is a description of the method and system for automatic tumor segmentation in CT images provided by the present invention. For those skilled in the art, there will be changes in the specific implementation and application scope according to the ideas of the embodiments of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

CN201811440970.8A | 2018-11-29 | 2018-11-29 | A method and system for automatic tumor segmentation in CT images | Pending | CN109754403A (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN201811440970.8A (CN109754403A) | 2018-11-29 | 2018-11-29 | A method and system for automatic tumor segmentation in CT images
PCT/CN2019/121594 (WO2020108562A1) | 2018-11-29 | 2019-11-28 | Automatic tumor segmentation method and system in CT image

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201811440970.8A (CN109754403A) | 2018-11-29 | 2018-11-29 | A method and system for automatic tumor segmentation in CT images

Publications (1)

Publication Number | Publication Date
CN109754403A | 2019-05-14

Family

ID=66402563

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811440970.8A (CN109754403A, pending) | 2018-11-29 | 2018-11-29 | A method and system for automatic tumor segmentation in CT images

Country Status (2)

Country | Link
CN (1) | CN109754403A (en)
WO (1) | WO2020108562A1 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114187298B (en)* | 2020-09-15 | 2025-06-06 | 株式会社理光 | Image processing and neural network construction method, device and storage medium
CN114494266B (en)* | 2020-10-26 | 2024-05-28 | 中国人民解放军空军军医大学 | A hierarchical hollow pyramid convolution method for cervical and surrounding multi-organ segmentation
CN115349139B (en)* | 2020-12-21 | 2025-06-20 | 广州视源电子科技股份有限公司 | Image segmentation method, device, equipment and storage medium
CN114119448B (en)* | 2021-02-05 | 2025-04-29 | 苏州大学 | Pancreas segmentation system in CT images based on improved U-shaped network
CN116898455B (en)* | 2023-07-06 | 2024-04-16 | 湖北大学 | A sleep EEG signal detection method and system based on deep learning model
CN117765532B (en)* | 2024-02-22 | 2024-05-31 | 中国科学院宁波材料技术与工程研究所 | Cornea Langerhans cell segmentation method and device based on confocal microscopic image
CN118229981B (en)* | 2024-05-23 | 2024-07-23 | 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) | CT image tumor segmentation method, device and medium combining convolutional network and transducer
CN118864503B (en)* | 2024-09-26 | 2024-11-29 | 西南科技大学 | Image processing method and device based on depth dynamic self-adjustment


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US10245000B2 (en)* | 2014-12-12 | 2019-04-02 | General Electric Company | Method and system for defining a volume of interest in a physiological image
CN107862695A (en)* | 2017-12-06 | 2018-03-30 | 电子科技大学 | A kind of modified image segmentation training method based on full convolutional neural networks
CN108109152A (en)* | 2018-01-03 | 2018-06-01 | 深圳北航新兴产业技术研究院 | Medical Images Classification and dividing method and device
CN108171711A (en)* | 2018-01-17 | 2018-06-15 | 深圳市唯特视科技有限公司 | A kind of infant's brain Magnetic Resonance Image Segmentation method based on complete convolutional network
CN109754403A (en)* | 2018-11-29 | 2019-05-14 | 中国科学院深圳先进技术研究院 | A method and system for automatic tumor segmentation in CT images

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106408562A (en)* | 2016-09-22 | 2017-02-15 | 华南理工大学 | Fundus image retinal vessel segmentation method and system based on deep learning
CN106683104A (en)* | 2017-01-06 | 2017-05-17 | 西北工业大学 | Prostate magnetic resonance image segmentation method based on integrated depth convolution neural network
CN107680678A (en)* | 2017-10-18 | 2018-02-09 | 北京航空航天大学 | Based on multiple dimensioned convolutional neural networks Thyroid ultrasound image tubercle auto-check system
CN107945204A (en)* | 2017-10-27 | 2018-04-20 | 西安电子科技大学 | A kind of Pixel-level portrait based on generation confrontation network scratches drawing method
CN108346145A (en)* | 2018-01-31 | 2018-07-31 | 浙江大学 | The recognition methods of unconventional cell in a kind of pathological section
CN108492286A (en)* | 2018-03-13 | 2018-09-04 | 成都大学 | A kind of medical image cutting method based on the U-shaped convolutional neural networks of binary channel
CN108876793A (en)* | 2018-04-13 | 2018-11-23 | 北京迈格威科技有限公司 | Semantic segmentation methods, devices and systems and storage medium
CN108596915A (en)* | 2018-04-13 | 2018-09-28 | 深圳市未来媒体技术研究院 | A kind of medical image segmentation method based on no labeled data
CN108596884A (en)* | 2018-04-15 | 2018-09-28 | 桂林电子科技大学 | A kind of cancer of the esophagus dividing method in chest CT image
CN108537793A (en)* | 2018-04-17 | 2018-09-14 | 电子科技大学 | A kind of pulmonary nodule detection method based on improved u-net networks
CN108806793A (en)* | 2018-04-17 | 2018-11-13 | 平安科技(深圳)有限公司 | Lesion monitoring method, device, computer equipment and storage medium
CN108830912A (en)* | 2018-05-04 | 2018-11-16 | 北京航空航天大学 | A kind of interactive grayscale image color method of depth characteristic confrontation type study
CN108629784A (en)* | 2018-05-08 | 2018-10-09 | 上海嘉奥信息科技发展有限公司 | A kind of CT image intracranial vessel dividing methods and system based on deep learning
CN108776969A (en)* | 2018-05-24 | 2018-11-09 | 复旦大学 | Breast ultrasound image lesion segmentation approach based on full convolutional network
CN108765422A (en)* | 2018-06-13 | 2018-11-06 | 云南大学 | A kind of retinal images blood vessel automatic division method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FAUSTO MILLETARI et al.: "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation", 2016 Fourth International Conference on 3D Vision (3DV) *
MICHAL DROZDZAL et al.: "The Importance of Skip Connections in Biomedical Image Segmentation", DLMIA 2016, LABELS 2016: Deep Learning and Data Labeling for Medical Applications *
OLAF RONNEBERGER et al.: "U-Net: Convolutional Networks for Biomedical Image Segmentation", Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) *
YUAN XUE et al.: "SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation", Neuroinformatics (2018) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2020108562A1 (en)* | 2018-11-29 | 2020-06-04 | 中国科学院深圳先进技术研究院 | Automatic tumor segmentation method and system in CT image
US11367181B2 | 2018-12-29 | 2022-06-21 | Shanghai United Imaging Intelligence Co., Ltd. | Systems and methods for ossification center detection and bone age assessment
US11735322B2 | 2018-12-29 | 2023-08-22 | Shanghai United Imaging Intelligence Co., Ltd. | Systems and methods for ossification center detection and bone age assessment
CN110197716A (en)* | 2019-05-20 | 2019-09-03 | 广东技术师范大学 | Processing method, device and the computer readable storage medium of medical image
CN110717060A (en)* | 2019-09-04 | 2020-01-21 | 平安科技(深圳)有限公司 | Image mask filtering method and device and storage medium
CN110717060B (en)* | 2019-09-04 | 2023-08-18 | 平安科技(深圳)有限公司 | Image mask filtering method, device and storage medium
CN110751627A (en)* | 2019-09-19 | 2020-02-04 | 上海联影智能医疗科技有限公司 | Image processing method, image processing device, computer equipment and storage medium
CN110751627B (en)* | 2019-09-19 | 2024-01-26 | 上海联影智能医疗科技有限公司 | Image processing method, device, computer equipment and storage medium
CN111028242A (en)* | 2019-11-27 | 2020-04-17 | 中国科学院深圳先进技术研究院 | Tumor automatic segmentation system, method and electronic device
CN113111684B (en)* | 2020-01-10 | 2024-05-21 | 字节跳动有限公司 | Training method and device for neural network model and image processing system
CN113111684A (en)* | 2020-01-10 | 2021-07-13 | 字节跳动有限公司 | Training method and device of neural network model and image processing system
CN111652886A (en)* | 2020-05-06 | 2020-09-11 | 哈尔滨工业大学 | A Liver Tumor Segmentation Method Based on Improved U-net Network
WO2021151275A1 (en)* | 2020-05-20 | 2021-08-05 | 平安科技(深圳)有限公司 | Image segmentation method and apparatus, device, and storage medium
CN111739008B (en)* | 2020-06-23 | 2024-04-12 | 北京百度网讯科技有限公司 | Image processing method, device, equipment and readable storage medium
CN111739008A (en)* | 2020-06-23 | 2020-10-02 | 北京百度网讯科技有限公司 | Image processing method, apparatus, device and readable storage medium
CN111754530B (en)* | 2020-07-02 | 2023-11-28 | 广东技术师范大学 | A prostate ultrasound image segmentation and classification method
CN111754530A (en)* | 2020-07-02 | 2020-10-09 | 广东技术师范大学 | A method for segmentation and classification of prostate ultrasound images
CN112529909A (en)* | 2020-12-08 | 2021-03-19 | 北京安德医智科技有限公司 | Tumor image brain region segmentation method and system based on image completion
CN113705320A (en)* | 2021-05-24 | 2021-11-26 | 中国科学院深圳先进技术研究院 | Training method, medium, and apparatus for surgical motion recognition model
CN114066871A (en)* | 2021-11-19 | 2022-02-18 | 江苏科技大学 | Method for training new coronary pneumonia focus region segmentation model

Also Published As

Publication number | Publication date
WO2020108562A1 (en) | 2020-06-04


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
