CN110728675A - Pulmonary nodule analysis device, model training method, device and analysis equipment - Google Patents

Pulmonary nodule analysis device, model training method, device and analysis equipment

Info

Publication number
CN110728675A
CN110728675A (application CN201911005393.4A)
Authority
CN
China
Prior art keywords
lung
image
module
training
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911005393.4A
Other languages
Chinese (zh)
Inventor
柴象飞
郭娜
张莞舒
史睿琼
左盼莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wisdom Shadow Medical Technology (beijing) Co Ltd
Original Assignee
Wisdom Shadow Medical Technology (beijing) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wisdom Shadow Medical Technology (beijing) Co Ltd
Priority to CN201911005393.4A
Publication of CN110728675A
Legal status: Pending

Abstract

The application provides a pulmonary nodule analysis apparatus, a model training method, a model training apparatus and an analysis device, and relates to the technical field of image processing. The pulmonary nodule analysis apparatus comprises a segmentation module and a sampling processing module, the sampling processing module being obtained by training on a sample set comprising sample 3D lung images, the sample classification results corresponding to the sample 3D lung images and the sample segmentation results corresponding to the sample 3D lung images. The sampling processing module is used for sampling the 3D lung image to be analyzed to obtain a high-resolution feature map, which includes the probability that each voxel in the 3D lung image belongs to a lung nodule. The segmentation module is used for processing the high-resolution feature map according to these probabilities to obtain a segmentation mask corresponding to the 3D lung image, the foreground part of the segmentation mask being the lung nodule. The segmentation mask is thus obtained by combining the classification information and the segmentation information in the 3D lung image, which gives the method high accuracy.

Description

Pulmonary nodule analysis device, model training method, device and analysis equipment
Technical Field
The application relates to the technical field of image processing, in particular to a pulmonary nodule analysis device, a model training method, a device and analysis equipment.
Background
According to the latest national lung cancer report issued by the National Cancer Center, the incidence of lung cancer in China has been high in recent years and continues to rise. Asymptomatic pulmonary nodules are a common manifestation of lung cancer, and the size and shape of a pulmonary nodule are important bases for its diagnosis. In automatic lung nodule diagnosis, after a lung nodule is detected, its size and shape can be obtained through segmentation. Therefore, with the continuous development of automatic lung nodule diagnosis technology, the importance of automatic segmentation after lung nodule detection has become increasingly prominent, and how to improve the accuracy of lung nodule segmentation is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, an object of the present application is to provide a pulmonary nodule analysis apparatus, a model training method, an apparatus and an analysis device.
In order to achieve the above purpose, the embodiments of the present application employ the following technical solutions:
in a first aspect, an embodiment of the present application provides a pulmonary nodule analysis apparatus applied to an analysis device, where the pulmonary nodule analysis apparatus includes a sampling processing module and a segmentation module, where the sampling processing module is obtained by training a sample set including a sample 3D lung image, a sample classification result corresponding to the sample 3D lung image, and a sample segmentation result corresponding to the sample 3D lung image,
the sampling processing module is used for sampling a 3D lung image to be analyzed to obtain a high-resolution feature map, wherein the size of the high-resolution feature map is the same as that of the 3D lung image, and the high-resolution feature map comprises the probability that each voxel in the 3D lung image belongs to a lung nodule;
and the segmentation module is used for processing the high-resolution feature map according to the probability that each voxel in the high-resolution feature map belongs to a pulmonary nodule to obtain a segmentation mask corresponding to the 3D pulmonary image, wherein the foreground part of the segmentation mask is the pulmonary nodule.
In an optional embodiment, the segmentation module is specifically configured to:
and carrying out binarization processing on the high-resolution feature map according to a first preset probability to obtain the segmentation mask.
In an alternative embodiment, the apparatus further includes a classification module and a feature fusion module obtained by training the sample set, the sampling processing module includes a down-sampling part and an up-sampling part, the high-resolution feature map is obtained from the 3D lung image through the down-sampling part and the up-sampling part,
the feature fusion module is used for performing feature fusion on the high-resolution feature map and the high-level semantic feature map obtained by the down-sampling part to obtain a target probability value;
and the classification module is used for obtaining the type of the lung nodule in the 3D lung image according to the target probability value and the corresponding relation between the probability value and the type of the lung nodule.
In an alternative embodiment, the feature fusion module is specifically configured to:
converting the high-resolution feature map into a first one-dimensional vector, and converting the high-level semantic feature map into a second one-dimensional vector;
and obtaining the target probability value according to the first one-dimensional vector and the second one-dimensional vector.
In an alternative embodiment, the feature fusion module includes a 3D convolutional layer and a plurality of fully connected layers.
In an alternative embodiment, the apparatus further comprises an image acquisition module,
the image acquisition module is configured to:
obtaining an image block with a preset size from an original 3D image according to the central position of a lung nodule in the original 3D image, wherein the image block comprises the central position of the lung nodule;
and preprocessing the image blocks to obtain a 3D lung image to be analyzed.
In a second aspect, an embodiment of the present application provides a model training method for training a pulmonary nodule analysis model, where the method includes:
inputting a sample 3D lung image into an untrained lung nodule analysis model to obtain a segmentation result and a classification result, wherein the lung nodule analysis model comprises a backbone network and a feature fusion network;
calculating to obtain a loss value of the training according to a sample segmentation result and a sample classification result corresponding to the sample 3D lung image and the segmentation result and the classification result;
judging whether the loss value of the training is smaller than a preset loss value or not;
judging whether the current iteration times are larger than the preset iteration times or not;
if the loss value of the training is smaller than the preset loss value or the current iteration times are larger than the preset iteration times, judging that the training is finished, and taking the current lung nodule analysis model as a trained lung nodule analysis model;
if the loss value of the training is not less than the preset loss value and the current iteration times are not more than the preset iteration times, adjusting parameters in the backbone network and/or the feature fusion network, and repeating the steps until the obtained loss value of the training is less than the preset loss value or the current iteration times is more than the preset iteration times.
In a third aspect, an embodiment of the present application provides a model training apparatus for training a pulmonary nodule analysis model, where the apparatus includes:
the system comprises an input module, a segmentation module and a classification module, wherein the input module is used for inputting a sample 3D lung image into a lung nodule analysis model to obtain a segmentation result and a classification result, and the lung nodule analysis model comprises a backbone network and a feature fusion network;
the calculation module is used for calculating a loss value of the training according to a sample segmentation result and a sample classification result corresponding to the sample 3D lung image and the segmentation result and the classification result;
the judging module is used for judging whether the loss value of the training is smaller than a preset loss value or not;
the judging module is also used for judging whether the current iteration times are greater than the preset iteration times;
the adjusting module is used for judging that the training is finished when the loss value of the training is smaller than the preset loss value or the current iteration times is larger than the preset iteration times, and taking the current lung nodule analysis model as the trained lung nodule analysis model;
the adjusting module is further configured to adjust parameters in the backbone network and/or the feature fusion network when the loss value of the current training is not less than a preset loss value and the current iteration number is not greater than the preset iteration number, so as to continue training until the obtained loss value of the current training is less than the preset loss value or the current iteration number is greater than the preset iteration number.
In a fourth aspect, embodiments of the present application provide an analysis apparatus comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the functionality of the pulmonary nodule analysis apparatus of any one of the preceding embodiments.
In a fifth aspect, embodiments of the present application provide a readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the functions of the pulmonary nodule analysis apparatus according to any one of the foregoing embodiments.
The lung nodule analysis device, the model training method, the model training device, the analysis equipment and the readable storage medium provided by the embodiment of the application utilize a sampling module obtained by training a sample set comprising a sample 3D lung image, a sample classification result corresponding to the sample 3D lung image and a sample segmentation result corresponding to the sample 3D lung image to sample the 3D lung image to be analyzed, so as to obtain a high-resolution feature map. The high resolution feature map has the same size as the 3D lung image, and includes a probability that each voxel in the 3D lung image belongs to a lung nodule. And then processing the high-resolution feature map according to the probability that each voxel in the high-resolution feature map belongs to the lung nodule to obtain a segmentation mask corresponding to the 3D lung image as a segmentation result, wherein the foreground part of the segmentation mask is the lung nodule. Therefore, the 3D lung image can be segmented by combining the classification information and the segmentation information in the 3D lung image to obtain the segmentation mask corresponding to the 3D lung image, and the method has the characteristic of high accuracy.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
FIG. 1 is a block schematic diagram of an analytical apparatus provided in an embodiment of the present application;
fig. 2 is a block diagram of a pulmonary nodule analysis apparatus provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a pulmonary nodule analysis apparatus provided in an embodiment of the present application;
fig. 4 is a second block diagram of a pulmonary nodule analysis apparatus provided in an embodiment of the present application;
fig. 5 is a third schematic block diagram of a pulmonary nodule analysis apparatus provided in an embodiment of the present application;
FIG. 6 is a schematic flow chart diagram illustrating a model training method according to an embodiment of the present disclosure;
fig. 7 is a block diagram illustrating a model training apparatus according to an embodiment of the present disclosure.
Reference numerals: 100-analysis device; 110-memory; 120-processor; 130-communication unit; 200-pulmonary nodule analysis apparatus; 201-image acquisition module; 210-sampling processing module; 220-segmentation module; 230-feature fusion module; 240-classification module; 300-model training apparatus; 310-input module; 320-calculation module; 330-judgment module; 340-adjusting module.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Referring to fig. 1, fig. 1 is a block diagram of an analysis device 100 according to an embodiment of the present disclosure. The analysis device 100 may be, but is not limited to, a server, a computer, etc. The analysis device 100 comprises a memory 110, a processor 120 and a communication unit 130. The memory 110, the processor 120 and the communication unit 130 are electrically connected to each other, directly or indirectly, to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 110 is used to store programs or data. The memory 110 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 120 is used to read/write data or programs stored in the memory 110 and perform corresponding functions. For example, the memory 110 stores a pulmonary nodule analysis apparatus 200, and the pulmonary nodule analysis apparatus 200 includes at least one software functional module which can be stored in the memory 110 in the form of software or firmware. The processor 120 performs various functional applications and data processing by running software programs and modules stored in the memory 110, such as the pulmonary nodule analysis apparatus 200 in the embodiment of the present application, so as to accurately complete segmentation and classification of lung nodules.
The communication unit 130 is used to establish a communication connection between the analysis device 100 and another communication terminal through a network, and to transceive data through the network.
It should be understood that the configuration shown in fig. 1 is merely a schematic diagram of the configuration of the analysis device 100, and that the analysis device 100 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 2, fig. 2 is a block diagram of a pulmonary nodule analysis apparatus 200 according to an embodiment of the present disclosure. The pulmonary nodule analysis apparatus 200 is applied to the analysis device 100, and may include a sampling processing module 210 and a segmentation module 220.
The sampling processing module 210 is configured to perform sampling processing on the 3D lung image to be analyzed to obtain a high-resolution feature map. The size of the high-resolution feature map is the same as the size of the 3D lung image, and the high-resolution feature map comprises the probability that each voxel in the 3D lung image belongs to a lung nodule.
The segmentation module 220 is configured to process the high-resolution feature map according to the probability that each voxel in the high-resolution feature map belongs to a pulmonary nodule, so as to obtain a segmentation mask corresponding to the 3D lung image.
In this embodiment, the sampling processing module 210 is trained in advance on a sample set. The sample set comprises sample 3D lung images, the sample classification results corresponding to the sample 3D lung images and the sample segmentation results corresponding to the sample 3D lung images. After receiving the 3D lung image to be analyzed, the sampling processing module 210 samples it to obtain the high-resolution feature map.
The sampling processing module 210 is trained in an end-to-end manner. The high-resolution feature map is the same size as the 3D lung image, which facilitates lung nodule segmentation of the 3D lung image from the high-resolution feature map.
The high-resolution feature map includes the probability that each voxel in the 3D lung image belongs to a lung nodule. Optionally, the voxel value of each voxel in the high-resolution feature map represents the probability that the corresponding voxel in the 3D lung image belongs to a lung nodule. Therefore, the segmentation module 220 may obtain the probability that each voxel in the 3D lung image belongs to a lung nodule from the high-resolution feature map, and further process the high-resolution feature map to obtain a segmentation result of the 3D lung image. The segmentation result is a segmentation mask obtained after processing the high-resolution feature map; the segmentation mask comprises a foreground part and a background part, and the foreground part is the lung nodule. Thereby, the lung nodule included in the 3D lung image may be obtained.
Because the sampling processing module 210 is trained on the sample 3D lung images, the corresponding sample classification results and the corresponding sample segmentation results, the high-resolution feature map produced by the sampling processing module 210 draws on both the classification information and the segmentation information included in the 3D lung image to be analyzed. That is, when performing lung nodule segmentation, not only the segmentation information but also the classification information included in the 3D lung image is considered, so the segmentation accuracy can be higher.
Currently, lung nodule segmentation is generally implemented with a segmentation model obtained through deep learning. Such a segmentation model is trained on sample images and the segmentation results corresponding to those images, and classification information in the sample images is not considered. However, the lung nodule type is strongly related to the segmentation result of the lung nodule: the type of a lung nodule often depends on the distribution of CT (Computed Tomography) values within the nodule, which requires determining which voxels in the 3D lung image belong to the nodule according to the nodule edge; conversely, the outline of the lung nodule can be judged more finely according to the type of the lung nodule. The sampling processing module 210 is obtained by training on a sample set of sample 3D lung images, the corresponding sample classification results and the corresponding sample segmentation results, so that lung nodule segmentation is carried out based on both the segmentation information and the classification information included in the 3D lung image to be analyzed, thereby improving the accuracy of lung nodule segmentation.
Optionally, in this embodiment, the segmentation module 220 may be specifically configured to binarize the high-resolution feature map according to a first preset probability to obtain the segmentation mask. The first preset probability is set according to the actual situation.
The segmentation module 220 may first determine whether the probability that each voxel belongs to a lung nodule is greater than the first preset probability, then set the voxel value of each voxel whose probability is less than the first preset probability to 0 and the voxel value of each voxel whose probability is not less than the first preset probability to 1, so as to binarize the high-resolution feature map and obtain the segmentation mask shown in fig. 3. In fig. 3, I_nodule represents the 3D lung image to be analyzed and MASK_nodule represents the segmentation mask, i.e. the segmentation result. A voxel with a voxel value of 1 belongs to the foreground (i.e. the lung nodule) and a voxel with a voxel value of 0 belongs to the background.
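As a sketch (not the patent's actual implementation), this binarization step can be expressed in a few lines of NumPy; the parameter name `first_preset_probability`, its default value, and the toy array shapes are illustrative assumptions:

```python
import numpy as np

def binarize_probability_map(prob_map: np.ndarray,
                             first_preset_probability: float = 0.5) -> np.ndarray:
    """Turn a voxel-wise probability map into a binary segmentation mask.

    Voxels with probability >= the preset threshold become foreground (1,
    i.e. lung nodule); all others become background (0).
    """
    return (prob_map >= first_preset_probability).astype(np.uint8)

# Toy 2x2x2 "high-resolution feature map" of nodule probabilities.
prob_map = np.array([[[0.9, 0.2], [0.6, 0.4]],
                     [[0.1, 0.8], [0.5, 0.3]]])
mask = binarize_probability_map(prob_map, 0.5)  # -> binary segmentation mask
```

A `>=` comparison is used so that voxels exactly at the threshold count as foreground, matching the text's "not less than the first preset probability".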
Referring to fig. 3, fig. 3 is a schematic structural diagram of a pulmonary nodule analysis apparatus 200 according to an embodiment of the present disclosure. The sampling processing module 210 includes a down-sampling part and an up-sampling part. The high-resolution feature map is obtained from the 3D lung image via the down-sampling part and the up-sampling part, and a high-level semantic feature map corresponding to the 3D lung image is obtained through the down-sampling part. The sampling processing module 210 may be a basic network with a down-sampling part and an up-sampling part, such as U-Net, U-Net++, and the like. Optionally, in an implementation manner of this embodiment, the sampling processing module 210 is a 3D U-Net network, whose down-sampling part includes a plurality of serially connected 3D convolutional layers and down-sampling layers, and whose up-sampling part includes a plurality of serially connected 3D convolutional layers and up-sampling layers, and so on.
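To illustrate only the shape bookkeeping of such a down-/up-sampling path — not a trained 3D U-Net; the max pooling and nearest-neighbour upsampling below are simple stand-ins for the learned convolutional layers — a NumPy sketch:

```python
import numpy as np

def downsample2(vol: np.ndarray) -> np.ndarray:
    """Halve each spatial dimension via 2x2x2 max pooling (stand-in for a down-sampling layer)."""
    d, h, w = vol.shape
    return vol.reshape(d // 2, 2, h // 2, 2, w // 2, 2).max(axis=(1, 3, 5))

def upsample2(vol: np.ndarray) -> np.ndarray:
    """Double each spatial dimension by nearest-neighbour repetition (stand-in for an up-sampling layer)."""
    return vol.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

# A toy 8x8x8 "3D lung image".
image = np.random.rand(8, 8, 8)
low = downsample2(downsample2(image))   # high-level semantic feature map, 2x2x2
high_res = upsample2(upsample2(low))    # restored to the input size, 8x8x8
```

The point of the sketch is the size relationship the text describes: the down-sampling part yields a small high-level semantic map, and the up-sampling part brings it back to the same size as the 3D lung image.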
Referring to fig. 3 and 4, fig. 4 is a second schematic block diagram of a pulmonary nodule analysis apparatus 200 according to an embodiment of the present application. The pulmonary nodule analysis apparatus 200 may further include a feature fusion module 230 and a classification module 240, where the feature fusion module 230 is obtained by training on the sample set.
The feature fusion module 230 is configured to perform feature fusion on the high-resolution feature map and the high-level semantic feature map to obtain a target probability value. The classification module 240 is configured to obtain the type of the lung nodule in the 3D lung image according to the target probability value and the correspondence between probability values and lung nodule types.
Conventionally, two separate devices are used to segment the lung nodule edges and to predict the type of the lung nodule: the segmentation device is generally trained on sample segmentation results and the corresponding images, while the classification device is generally trained on sample classification results and the corresponding images. This approach is not only time-consuming but also has a low overall accuracy (that is, neither the obtained segmentation result nor the classification result is highly accurate), because the correlation between segmentation and classification is ignored.
Compared with the conventional approach, the pulmonary nodule analysis apparatus 200 in this embodiment can complete segmentation and type prediction of a pulmonary nodule at the same time. This not only improves efficiency but also improves the accuracy of both type prediction and segmentation, because the sampling processing module 210 and the feature fusion module 230 are trained on a sample set comprising sample 3D lung images, the corresponding sample classification results and the corresponding sample segmentation results, so that the apparatus considers both the segmentation information and the classification information included in the 3D lung image to be analyzed when performing segmentation and type prediction.
Optionally, in an implementation manner of this embodiment, the feature fusion module 230 may be specifically configured to: convert the high-resolution feature map into a first one-dimensional vector, convert the high-level semantic feature map into a second one-dimensional vector, and obtain the target probability value according to the first one-dimensional vector and the second one-dimensional vector.
Alternatively, as shown in fig. 3, the feature fusion module 230 may include a 3D convolutional layer and a plurality of fully connected layers. The feature fusion module 230 may convert the high-resolution feature map into the first one-dimensional vector through one fully connected layer and convert the high-level semantic feature map into the second one-dimensional vector through another fully connected layer. The two feature maps can thus each be converted into a one-dimensional vector, so that the target probability value can subsequently be obtained through a further fully connected layer. The specific parameters of the 3D convolutional layer and the fully connected layers are set according to the actual situation.
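A minimal NumPy sketch of this fusion scheme, with randomly initialized (untrained) weight matrices standing in for the learned fully connected layers; the map sizes, projection dimensions, and sigmoid output are illustrative assumptions, not parameters stated in the application:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy feature maps: a "high-resolution" map and a smaller "high-level semantic" map.
high_res = rng.random((8, 8, 8))
semantic = rng.random((2, 2, 2))

# Flatten each map to a one-dimensional vector.
v1 = high_res.ravel()   # first one-dimensional vector, length 512
v2 = semantic.ravel()   # second one-dimensional vector, length 8

# Two "fully connected layers" project each vector to a common size,
# then a final layer fuses them into a single target probability value.
W1 = rng.standard_normal((16, v1.size)) * 0.01
W2 = rng.standard_normal((16, v2.size)) * 0.01
w_out = rng.standard_normal(32) * 0.01

fused = np.concatenate([W1 @ v1, W2 @ v2])   # concatenated projection, length 32
target_probability = sigmoid(w_out @ fused)  # scalar in (0, 1)
```

In a real model the weights would be learned end-to-end from the sample set, as the text describes; here they only demonstrate the data flow from two feature maps to one probability value.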
Accordingly, both the lung nodule in the 3D lung image and its type can be obtained by the pulmonary nodule analysis apparatus 200 alone, without running separate segmentation and classification models. Compared with segmenting and classifying through two models, this improves efficiency. At the same time, because the type of the lung nodule and the segmentation of the lung nodule inform each other, the type obtained in this way is also more accurate.
Referring to fig. 5, fig. 5 is a third block diagram of a pulmonary nodule analysis apparatus 200 according to an embodiment of the present application. The pulmonary nodule analysis apparatus 200 may further comprise an image acquisition module 201.
The image acquisition module 201 is configured to: obtain an image block of a preset size from an original 3D image according to the central position of a lung nodule in the original 3D image, where the image block comprises the central position of the lung nodule; and preprocess the image block to obtain the 3D lung image to be analyzed.
The image acquisition module 201 may obtain the original 3D image by receiving an input image or an image sent by another device or module, and obtain the central position of the lung nodule in the original 3D image. The central position of the lung nodule may be manually annotated or may be identified by other means. An image block of the preset size, containing the center position of the lung nodule, can then be cut out of the original 3D image.
After the image block is obtained, it may be preprocessed. The preprocessing comprises pixel-range truncation, normalization and the like. Through the pixel-range truncation, the voxel values of the processed image block (i.e. the 3D lung image) all lie in a preset range. For example, if the voxel value range is set to [a, b], voxel values in the image block smaller than a are all set to a, voxel values larger than b are all set to b, and voxel values between a and b are left unchanged.
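A sketch of this truncation-plus-normalization preprocessing in NumPy; the bounds a = -1000 and b = 400 (a common Hounsfield-unit window for lung CT) are illustrative assumptions, not values stated in the application:

```python
import numpy as np

def preprocess(block: np.ndarray, a: float = -1000.0, b: float = 400.0) -> np.ndarray:
    """Truncate voxel values to [a, b], then linearly rescale to [0, 1]."""
    truncated = np.clip(block, a, b)   # values < a become a, values > b become b
    return (truncated - a) / (b - a)   # normalization step

# Toy 1D "image block" of raw CT values, including out-of-range voxels.
block = np.array([-2000.0, -1000.0, -300.0, 400.0, 1500.0])
processed = preprocess(block)
```

`np.clip` implements exactly the truncation rule in the text: values below a are set to a, values above b are set to b, and values in between are untouched; the final division just maps [a, b] onto [0, 1].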
Optionally, in order to improve the generalization ability of the pulmonary nodule analysis apparatus 200, the image blocks used for training the sampling processing module 210 and the feature fusion module 230 may be subjected to online data enhancement by flipping, rotation, translation, scaling and the like.
Optionally, after the original 3D image is obtained, it may be processed into pixel-matrix form, resulting in a pixel matrix I_origin. A matrix block of length, width and height S_l, S_w and S_h is then cropped from I_origin around the center position of the lung nodule, and this matrix block is preprocessed to obtain a pixel matrix I_nodule.
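The cropping step can be sketched as follows; the block size (S_l, S_w, S_h) and the clamping of the crop window to the image bounds are illustrative assumptions (the application does not specify how out-of-bounds centers are handled):

```python
import numpy as np

def crop_block(volume: np.ndarray, center: tuple, size: tuple) -> np.ndarray:
    """Crop a (S_l, S_w, S_h) block around `center`, shifted to stay inside the volume."""
    starts = []
    for c, s, dim in zip(center, size, volume.shape):
        start = min(max(c - s // 2, 0), dim - s)  # clamp so the whole block fits
        starts.append(start)
    (z, y, x), (sl, sw, sh) = starts, size
    return volume[z:z + sl, y:y + sw, x:x + sh]

volume = np.arange(10 * 10 * 10).reshape(10, 10, 10)   # toy I_origin
# Center near an edge: the window shifts inward rather than running out of bounds.
block = crop_block(volume, center=(5, 5, 9), size=(4, 4, 4))
```

The resulting block plays the role of the matrix block that is then preprocessed into I_nodule.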
Embodiments of the present application also provide a readable storage medium, on which a computer program is stored, which when executed by a processor implements the functions of the pulmonary nodule analysis apparatus 200.
Referring to fig. 6, fig. 6 is a schematic flow chart of a model training method according to an embodiment of the present disclosure. The model training method is used for training to obtain a lung nodule analysis model. The flow of the model training method is explained below.
Step S310: input the sample 3D lung image into the untrained lung nodule analysis model to obtain a segmentation result and a classification result.
The pulmonary nodule analysis model comprises a backbone network and a feature fusion network.
Step S320: calculate the loss value of the current training iteration from the sample segmentation result and the sample classification result corresponding to the sample 3D lung image, together with the predicted segmentation result and classification result.
Step S330: determine whether the loss value of the current training iteration is smaller than a preset loss value.
Step S340: determine whether the current iteration count is greater than a preset iteration count.
Steps S330 and S340 may be executed simultaneously or at different times; this embodiment does not limit their execution order.
When the loss value of the current training iteration is smaller than the preset loss value, or the current iteration count is greater than the preset iteration count, step S350 is executed. When the loss value is not smaller than the preset loss value and the current iteration count is not greater than the preset iteration count, step S360 is executed.
Step S350: conclude that training is finished, and take the current lung nodule analysis model as the trained lung nodule analysis model.
Step S360: adjust parameters in the backbone network and/or the feature fusion network.
In this embodiment, the lung nodule analysis model may be trained end to end; during training, the loss value of each iteration is fed back to update the lung nodule analysis model until a model meeting the requirements is obtained.
The original 3D lung image may first be processed to obtain an image block; the specific processing may refer to the description of the image acquisition module 201 above. This image block is the sample 3D lung image and is input into the lung nodule analysis model. After the lung nodule analysis model outputs the segmentation result and the classification result, the segmentation loss (i.e. the loss value of the current segmentation result) can be calculated with a loss function such as Cross Entropy loss or Dice loss, and the classification loss (i.e. the loss value of the current classification result) can be calculated with a loss function such as Cross Entropy loss. The loss value of the current training iteration is then calculated from the segmentation loss and the classification loss, so that the lung nodule analysis model can be trained in a supervised manner according to this loss value.
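The two loss terms named above can be sketched as follows, assuming the model already outputs probabilities; the eps terms and exact formulations are conventional choices, not taken from the patent.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a voxel-wise foreground-probability map
    `pred` and a binary mask `target` of the same shape."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cross_entropy_loss(probs, label):
    """Cross entropy for one sample: `probs` is the predicted class
    distribution, `label` is the index of the true class."""
    return -np.log(probs[label] + 1e-12)
```

A perfect segmentation drives the Dice loss to 0, while a disjoint prediction drives it toward 1; the cross entropy likewise vanishes as the probability of the true class approaches 1.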
Alternatively, the loss value of the current training iteration can be calculated by the following formula: L = W_type × L_type + W_segment × L_segment, where L represents the loss value of the current training iteration (also called the overall loss value), W_type represents the preset weight corresponding to the classification loss, L_type represents the loss value of the current classification result, W_segment represents the preset weight corresponding to the segmentation loss, and L_segment represents the loss value of the current segmentation result.
If the loss value of the current training iteration is not smaller than the preset loss value and the current iteration count is not greater than the preset iteration count, the parameters of the backbone network and/or the feature fusion network in the current lung nodule analysis model are adjusted. The process then jumps to step S310 to continue training, until the loss value of the current iteration is smaller than the preset loss value or the iteration count is greater than the preset iteration count, at which point the current lung nodule analysis model can be taken as the trained lung nodule analysis model.
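The stopping rule of steps S330/S340 and the parameter adjustment of step S360 can be sketched as a loop. The `model` interface (`step`, `update`) and all default values are illustrative assumptions, not part of the patent.

```python
def train(model, samples, max_iters=1000, loss_threshold=0.05,
          w_type=1.0, w_segment=1.0):
    """Skeleton of the supervised training loop.

    Stops when the overall loss L = w_type * l_type + w_segment * l_segment
    drops below `loss_threshold` (step S330) or the iteration cap is
    reached (step S340). Returns the model and the iteration count.
    """
    for it in range(1, max_iters + 1):
        sample = samples[(it - 1) % len(samples)]       # S310: feed a sample
        l_type, l_segment = model.step(sample)          # S320: per-head losses
        loss = w_type * l_type + w_segment * l_segment  # overall loss L
        if loss < loss_threshold:                       # S330 satisfied
            return model, it                            # S350: training done
        model.update()                                  # S360: adjust parameters
    return model, max_iters                             # S340: cap reached
```

Note that either condition alone ends training, matching the "or" in the text above.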
Referring to fig. 7, fig. 7 is a block diagram illustrating a model training apparatus 300 according to an embodiment of the present disclosure. The model training apparatus 300 is used to train a lung nodule analysis model and may comprise an input module 310, a calculation module 320, a determination module 330, and an adjustment module 340.
The input module 310 is configured to input the sample 3D lung image into a lung nodule analysis model to obtain a segmentation result and a classification result, where the lung nodule analysis model includes a backbone network and a feature fusion network.
The calculation module 320 is configured to calculate the loss value of the current training iteration from the sample segmentation result and the sample classification result corresponding to the sample 3D lung image, together with the predicted segmentation result and classification result.
The determination module 330 is configured to determine whether the loss value of the current training iteration is smaller than a preset loss value.
The determination module 330 is further configured to determine whether the current iteration count is greater than a preset iteration count.
The adjustment module 340 is configured to conclude that training is finished when the loss value of the current training iteration is smaller than the preset loss value or the current iteration count is greater than the preset iteration count, and to take the current lung nodule analysis model as the trained lung nodule analysis model.
The adjustment module 340 is further configured to adjust parameters in the backbone network and/or the feature fusion network when the loss value of the current training iteration is not smaller than the preset loss value and the current iteration count is not greater than the preset iteration count, so as to continue training until the loss value of the current iteration is smaller than the preset loss value or the iteration count is greater than the preset iteration count.
In this embodiment, for a detailed description of the model training apparatus 300, reference may be made to the above description of the model training method, which is not repeated herein.
In summary, the embodiments of the present application provide a pulmonary nodule analysis apparatus, a model training method, a model training apparatus, and an analysis device. A sampling processing module, obtained by training on a sample set comprising sample 3D lung images together with their corresponding sample classification results and sample segmentation results, performs sampling processing on the 3D lung image to be analyzed to obtain a high-resolution feature map. The high-resolution feature map has the same size as the 3D lung image and contains, for each voxel of the 3D lung image, the probability that the voxel belongs to a lung nodule. The high-resolution feature map is then processed according to these probabilities to obtain a segmentation mask corresponding to the 3D lung image as the segmentation result, the foreground of the mask being the lung nodule. The 3D lung image can thus be segmented by combining the classification information and the segmentation information in the 3D lung image, which yields high accuracy.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

CN201911005393.4A | Priority date 2019-10-22 | Filing date 2019-10-22 | Pulmonary nodule analysis device, model training method, device and analysis equipment | Pending | CN110728675A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911005393.4A | 2019-10-22 | 2019-10-22 | Pulmonary nodule analysis device, model training method, device and analysis equipment


Publications (1)

Publication Number | Publication Date
CN110728675A (en) | 2020-01-24

Family

ID=69220653

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911005393.4A (Pending, published as CN110728675A (en)) | 2019-10-22 | 2019-10-22 | Pulmonary nodule analysis device, model training method, device and analysis equipment

Country Status (1)

Country | Link
CN (1) | CN110728675A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Title
CN111429421A (en) * | 2020-03-19 | 2020-07-17 | Model generation method, medical image segmentation method, device, equipment and medium
CN111768418A (en) * | 2020-06-30 | 2020-10-13 | Image segmentation method and device and training method of image segmentation model
CN113361584A (en) * | 2021-06-01 | 2021-09-07 | Model training method and device, and pulmonary arterial hypertension measurement method and device
CN114359560A (en) * | 2021-12-31 | 2022-04-15 | Lung nodule refined segmentation method and device based on deep learning and storage medium
CN118229606A (en) * | 2022-12-20 | 2024-06-21 | Method, device, equipment and medium for predicting lung nodule image based on priori prediction

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108537784A (en) * | 2018-03-30 | 2018-09-14 | 四川元匠科技有限公司 | A kind of CT figure pulmonary nodule detection methods based on deep learning
CN108648172A (en) * | 2018-03-30 | 2018-10-12 | 四川元匠科技有限公司 | A kind of CT figure Lung neoplasm detecting systems based on 3D-Unet
CN108765369A (en) * | 2018-04-20 | 2018-11-06 | 平安科技(深圳)有限公司 | Detection method, device, computer equipment and the storage medium of Lung neoplasm
CN108986067A (en) * | 2018-05-25 | 2018-12-11 | 上海交通大学 | Pulmonary nodule detection method based on cross-module state
CN109003260A (en) * | 2018-06-28 | 2018-12-14 | 深圳视见医疗科技有限公司 | CT image pulmonary nodule detection method, device, equipment and readable storage medium
CN109523521A (en) * | 2018-10-26 | 2019-03-26 | 复旦大学 | Lung neoplasm classification and lesion localization method and system based on more slice CT images


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BOTONG WU et al.: "Joint Learning for Pulmonary Nodule Segmentation, Attributes and Malignancy Prediction", IEEE International Symposium on Biomedical Imaging *
XU Jiangchuan et al.: "Stone image segmentation algorithm based on deep-learning U-Net model", Industrial Control Computer *


Similar Documents

Publication | Title
CN110728675A (en) | Pulmonary nodule analysis device, model training method, device and analysis equipment
CN114120102A (en) | Boundary-optimized remote sensing image semantic segmentation method, device, equipment and medium
US10909682B2 (en) | Method and device for detecting pulmonary nodule in computed tomography image, and computer-readable storage medium
CN112800964B (en) | Remote sensing image target detection method and system based on multi-module fusion
CN109685060B (en) | Image processing method and device
CN114612835A (en) | A UAV target detection model based on YOLOv5 network
Rahman et al. | Semantic deep learning integrated with RGB feature-based rule optimization for facility surface corrosion detection and evaluation
CN111640120A (en) | Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network
CN110223300A (en) | CT image abdominal multivisceral organ dividing method and device
CN111598875A (en) | Method, system and device for building thyroid nodule automatic detection model
CN112232371A (en) | An American license plate recognition method based on YOLOv3 and text recognition
US11037030B1 (en) | System and method for direct learning from raw tomographic data
US11367206B2 (en) | Edge-guided ranking loss for monocular depth prediction
CN110599483B (en) | Lung focus detection device, lung focus detection equipment and readable storage medium
CN114445356B (en) | Rapid tumor localization method based on multi-resolution full-field pathological slice images
CN115439654B (en) | Method and system for finely dividing weakly supervised farmland plots under dynamic constraint
CN115019181B (en) | Remote sensing image rotating target detection method, electronic equipment and storage medium
CN113657214A (en) | A method of building damage assessment based on Mask RCNN
CN112017161A (en) | Pulmonary nodule detection method and detection device based on center point regression
Zhang et al. | Improved U-net network asphalt pavement crack detection method
CN112712527B (en) | Medical image segmentation method based on DR-Unet104
Wang et al. | Automatic recognition system for concrete cracks with support vector machine based on crack features
CN118679501A (en) | Machine learning techniques for detecting artifact pixels in images
CN112465050B (en) | Image template selection method, device, equipment and storage medium
CN116385889B (en) | Railway identification-based power inspection method and device and electronic equipment

Legal Events

Date | Code | Title | Description
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20200124
