CN111047589A - An attention-enhanced brain tumor-assisted intelligent detection and recognition method - Google Patents

An attention-enhanced brain tumor-assisted intelligent detection and recognition method

Info

Publication number
CN111047589A
Authority
CN
China
Prior art keywords
classification
model
convolution
segmentation
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911393654.4A
Other languages
Chinese (zh)
Other versions
CN111047589B (en)
Inventor
李建欣
张帅
于金泽
周号益
邰振赢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201911393654.4A
Publication of CN111047589A
Application granted
Publication of CN111047589B
Legal status: Active (current)
Anticipated expiration


Abstract

Translated from Chinese

The invention provides an attention-enhanced method for the auxiliary intelligent detection and identification of brain tumors. The technical scheme improves on the U-Net model and proposes using training on the segmentation task as an attention-enhancement mechanism for the classification task. Attention to the segmentation task, the lesion region, and edge information raises the accuracy of the classification task, and through a multi-task loss measurement and training method the segmentation and classification tasks are optimized simultaneously, achieving the expected results on both tasks and realizing the design and application goals.

Figure 201911393654

Description

Attention-enhanced brain tumor auxiliary intelligent detection and identification method
Technical Field
The invention relates to the field of image processing, in particular to an attention-enhanced brain tumor auxiliary intelligent detection and identification method in the fields of medical imaging and computer-aided diagnosis.
Background
Tumors that grow inside the cranium are collectively referred to as brain tumors. They are tumors of the nervous system occurring in the cranial cavity, including tumors originating in the neuroepithelium, peripheral nerves, meninges, and germ cells, tumors of lymphoid and hematopoietic tissues, craniopharyngiomas and granular cell tumors of the sellar region, and metastatic tumors. Tumors arising from the brain parenchyma itself are called primary intracranial tumors, while metastases to the cranium from malignant tumors of other organs and tissues of the body are called secondary intracranial tumors. Intracranial tumors can occur at any age but are most common between 20 and 50 years of age. With the development of neuroimaging and functional examination techniques in recent years, auxiliary examination has become a main means of diagnosing intracranial tumors.
Brain glioma is the most common primary intracranial malignant tumor, accounting for over 75% of such tumors. Gliomas are divided into localized and diffuse gliomas and are graded WHO I-IV according to malignancy, with malignancy increasing with grade. Brain gliomas can be divided into low-grade gliomas (LGG, WHO grades I-II) and high-grade gliomas (HGG, WHO grades III-IV), and further into subtypes according to gene mutations, chromosomal changes, and the like; gliomas of different grades and with different gene mutations differ in treatment and prognosis. Therefore, accurately segmenting the tumor region and judging the tumor grade before surgical treatment helps guide the choice of treatment plan and of the surgical resection region, and is of great value for improving treatment outcome and patient prognosis.
Magnetic Resonance Imaging (MRI) is a medical imaging technique in which the hydrogen nuclei (protons) of the human body, placed in a strong external magnetic field, produce magnetic resonance under specific radio-frequency pulses. MRI is the main imaging examination for various intracranial diseases: it can serve as the first-line examination for some diseases and is also an important supplement to CT examination. MRI offers high tissue resolution, multiple sequences, multiple parameters, multiple orientations, and various functional MRI examinations, so it can reveal lesions more sensitively and display their characteristics, which is helpful for early detection and accurate diagnosis. MRI images commonly used for brain glioma diagnosis include the three planes of the axial, sagittal, and coronal views and the four modalities T1, contrast-enhanced T1, T2, and T2 fluid-attenuated (FLAIR); clinically, the location, extent, and grade of a tumor are often determined by combining the information of the three planes and the four modalities. However, because of the diversity of the appearance and shape of brain tumors, segmenting brain tumors in multi-modality MRI images is one of the most challenging and difficult tasks in medical image processing, and grading brain gliomas, or even judging and classifying their genotypes, from non-invasive examinations such as brain MRI is a research direction of great clinical interest.
Localized gliomas occur mostly in children and are relatively rare in adults; most patients can be cured by surgery and the degree of malignancy is low, so localized gliomas are not the research focus of this patent.
Current applications of deep learning to brain MRI images, and in particular other related patents, concentrate on brain tumor segmentation; fields directly related to diagnosis and treatment, such as the classification and grading of brain tumors, are less involved, although these are of greater clinical concern and hard for the human eye to achieve from non-invasive imaging examinations. Most methods that obtain good results on the brain tumor segmentation task adopt U-Net as the basic framework and improve upon it. The technical scheme of this application also takes U-Net as its basis, operates on three-dimensional images with a three-dimensional model, and optimizes it according to recent progress of machine learning and deep learning in computer vision to achieve better results. Compared with other work, it focuses more on tumor grading and classification diagnosis, using the segmentation task as an attention-enhancement mechanism so that the classification task attends to the abnormal regions in the brain MRI image and accomplishes tumor diagnosis.
Disclosure of Invention
At present, most applications of U-Net to brain tumor MRI processing address only the image segmentation task: they segment the tumor region but do not further use the segmented images to obtain medically valuable information. In the process of communicating with clinicians, this is precisely the information clinicians care about most and the problem that requires focused research.
In order to achieve the purpose, the invention adopts the following technical scheme:
an attention-enhanced brain tumor auxiliary intelligent detection and identification method, comprising the following steps:
step one: establishing a three-dimensional convolutional network model of a U-Net-based multi-task neural network suitable for segmentation and diagnosis of brain glioma lesion regions in brain MRI images;
step two: setting a multi-task joint training objective;
step three: measuring the multi-task loss and optimizing the results;
step four: model training, result evaluation, and output.
The step of establishing a three-dimensional convolutional network model of a U-Net-based multi-task neural network suitable for segmentation and diagnosis of brain glioma lesion regions in brain MRI images comprises the following steps:
building, on the basis of the original U-Net network and using three-dimensional convolution, a three-dimensional model that takes the tumor region as its attention region. The framework of the model comprises a down-sampling data path and an up-sampling data path. Each layer on the down-sampling path contains two 3 × 3 × 3 convolutional layers; each convolutional layer uses dropout to prevent overfitting and the ReLU activation function. After the two convolutional layers, a max-pooling layer with stride 2 and size 3 × 3 × 3 performs the down-sampling operation. Between two layers of the up-sampling data path, up-sampling is performed by deconvolution, the up-sampled features are concatenated with the features of the corresponding down-sampling layer, and the concatenated features undergo two convolution operations identical to those of the down-sampling layers. After the up-sampling path finally produces features that fuse deep and shallow information, two convolutions with kernel size 3 × 3 × 3 are applied, and the model then splits into two branches: one branch performs one more convolution, with output channels equal to the number of classes in the semantic segmentation result, and a softmax computation then yields an output containing background, edema, tumor parenchyma, necrosis, and enhancing core; the other branch performs one convolution, applies global average pooling, and is followed by two fully connected layers, the output of the second fully connected layer matching the number of pathological diagnosis classes;
all brain MRI images imported from cases are then traversed to compute mean and variance statistics, which are retained for the normalization operation during training and prediction; the input brain MRI sequences of the four modalities T1, contrast-enhanced T1, T2, and T2-FLAIR are received with each modality as a channel, all slice images of a scanning sequence and the segmentation label mask are stacked to form a three-dimensional image and a label sequence respectively, the two sequences are bound with the corresponding diagnosis result as one sample, and all images are processed in this way.
The step of setting the multi-task joint training objective comprises the following steps:
on the fully convolutional model, a classification branch is added after the shallow and deep information are fused, so that a single model performs both segmentation and classification and obtains semantic segmentation and classification results simultaneously; the segmentation and classification tasks of the brain tumor are therefore executed at the same time and share shallow features;
after the features of the deep and shallow information are fused, two convolutions with kernel size 3 × 3 × 3 are applied, and the model then splits into two branches: one branch performs one more convolution, with output channels equal to the number of classes in the semantic segmentation result, and a softmax computation then yields an output containing background, edema, tumor parenchyma, necrosis, and enhancing core; the other branch performs one convolution, applies global average pooling, and is followed by two fully connected layers, the output of the second fully connected layer matching the number of pathological diagnosis classes, which include the following categories: oligodendroglioma, anaplastic oligodendroglioma, astrocytoma, anaplastic astrocytoma, and glioblastoma.
The step of measuring the multi-task loss and optimizing the results uses a combined multi-task loss function to measure the segmentation and classification results and optimize them, wherein:
the loss function of the image segmentation model adopts a Dice loss function;
the loss function of the tumor classification module selects a cross-entropy function.
The model training, result evaluation, and output step comprises the following steps:
a model training step, in which either the Dice loss of the image segmentation model or the cross-entropy loss of the tumor classification model is selected for back-propagation in iterative training, or the loss value of the image segmentation model and the loss value of the tumor classification model are combined in a certain proportion and then back-propagated for iterative training;
after training converges and a satisfactory result is obtained, evaluating the effect on the test set;
and inputting a new case image to the evaluated model, and outputting a detection and identification result.
Compared with the prior art, the invention has the advantages that:
the design scheme fully utilizes the segmented information, and expands the classification task module aiming at the medical information hidden in the segmented image, thereby playing the roles of providing suggestions for an auxiliary medical system of a doctor in the diagnosis process, improving the diagnosis capability of a medical institution on related diseases, and feeding pathological research on the brain tumor in the medical field in the aspects of classification, classification and the like of the brain tumor.
At present, U-Net is mostly applied to brain tumor MRI image processing for the image segmentation task alone: the tumor region is segmented, but the segmented image is not further used to obtain medically valuable information. By processing and analyzing the images, the invention obtains the segmentation and diagnosis results of the lesion region simultaneously.
Drawings
FIG. 1 is a design framework of a brain tumor auxiliary detection and identification system;
FIG. 2 is a multi-task learning model of brain tumor segmentation and classification diagnosis tasks;
Detailed Description
Referring to figures 1-2 of the specification, the invention combines medical imaging with deep learning and computer vision methods to analyze and process three-dimensional brain magnetic resonance images, performing segmentation of the glioma lesion region on the brain MRI and an image-based classification diagnosis task. Aiming at the problems that medical image datasets are small, class imbalance is severe, and existing methods focus on lesion segmentation while omitting the classification diagnosis task, an improved convolutional neural network based on 3D U-Net is proposed, a classification diagnosis branch is added, and segmentation and classification results are obtained simultaneously through multi-task joint training. FIG. 1 shows the algorithm design flow proposed by the invention: the MRI images, the corresponding manually labeled segmentation results, and the classification diagnosis information obtained from pathological information are first preprocessed to obtain three-dimensional image sequences containing four modalities; the processed images are divided proportionally into a training set and a test set; the model is trained on the training set and its performance on the two tasks of segmentation and classification is optimized; finally, after training converges and a satisfactory result is obtained, the effect is evaluated on the test set.
Before the model is built, the brain MRI image data used here come directly from a hospital's medical record system; image preprocessing operations that enhance visibility, such as denoising and brightness/contrast adjustment, have been completed; the image data are labeled manually, marking the different parts of the tumor, namely edema, tumor parenchyma, enhancing core, and necrosis; and diagnosis information such as tumor type and stage is obtained from the case's medical record diagnosis and postoperative pathological information.
All MRI images are then traversed to compute statistics such as the mean and variance, which are retained for the normalization operation during training and prediction. In addition, the four modality images at the same slice position of the same scanning sequence are stacked with each modality as a channel; all slice images of the scanning sequence and the segmentation label mask are then concatenated to form a three-dimensional image and a label sequence respectively, and the two sequences are bound with the corresponding diagnosis result as one sample.
After all samples are processed, the samples are divided into a training set and a testing set according to a determined proportion for later use.
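As an illustration of this preprocessing, the sketch below (not part of the patent; the modality order, the function name, and the use of NumPy are assumptions) stacks the four modalities as channels, applies the previously computed mean/variance normalization, and binds the image volume, the label mask, and the diagnosis result into one sample.

```python
import numpy as np

MODALITIES = ["t1", "t1ce", "t2", "flair"]  # assumed order of the four MRI modalities

def build_sample(volumes, mask, diagnosis, mean, std):
    """Assemble one sample as described above.

    volumes  : dict mapping modality name -> 3D numpy array (D, H, W)
    mask     : 3D numpy array of segmentation labels (same shape)
    diagnosis: integer class index from the pathological diagnosis
    mean, std: per-modality statistics collected by traversing all images, shape (4,)
    """
    # Stack the four modalities as channels -> (4, D, H, W)
    image = np.stack([volumes[m] for m in MODALITIES], axis=0).astype(np.float32)
    # Normalize each modality with the dataset-level statistics
    image = (image - mean[:, None, None, None]) / (std[:, None, None, None] + 1e-8)
    # Bind image volume, label sequence, and diagnosis result into one sample
    return {"image": image, "mask": mask.astype(np.int64), "diagnosis": int(diagnosis)}
```

Each returned dictionary corresponds to one sample, which can then be assigned to the training or test split.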
A three-dimensional model is then built on the basis of the original U-Net network using three-dimensional convolution.
Fig. 2 shows the model architecture used in the invention, comprising a down-sampling data path on the left and an up-sampling data path on the right. Each layer on the left down-sampling path contains two 3 × 3 × 3 convolutional layers; each convolutional layer uses dropout to prevent overfitting and the ReLU activation function. After the two convolutional layers, a max-pooling layer with stride 2 and size 3 × 3 × 3 performs the down-sampling operation. Between two layers of the right data path, up-sampling is performed by deconvolution, the up-sampled features are concatenated with the features of the corresponding down-sampling layer on the left, and the concatenated features undergo two convolution operations identical to those of the down-sampling layers.
The down-sampling path continually reduces the feature resolution, which enlarges the receptive field of each pixel in the deepest features and yields a more abstract high-level representation, i.e., deep information. This representation is strong for classifying the picture and judging the class of pixels in it, but the loss of resolution greatly reduces the accuracy of pixel-level classification and the resolution of the resulting segmentation. The cross-layer connections in the middle carry shallow information; fusing them with the features of the right up-sampling path compensates for the reduced output resolution relative to the input, so the fused result expresses the picture better.
After the up-sampling path finally produces the features fusing deep and shallow information, two convolutions with kernel size 3 × 3 × 3 are applied, and the model then splits into two branches: one branch performs one more convolution, with output channels equal to the number of classes in the semantic segmentation result, and a softmax computation then yields an output containing background, edema, tumor parenchyma, necrosis, and enhancing core; the other branch performs one convolution, applies global average pooling, and is followed by two fully connected layers, the output of the second fully connected layer matching the number of pathological diagnosis classes, which include the following categories: oligodendroglioma, anaplastic oligodendroglioma, astrocytoma, anaplastic astrocytoma, and glioblastoma.
The above process can be regarded as taking the tumor region as the model's attention region and combining it with the background to accurately identify the brain tumor category. Because the process uses multi-task learning, the segmentation and classification tasks of the brain tumor are executed simultaneously, and since they share shallow features, the two tasks promote each other.
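A compact PyTorch sketch of such a dual-branch 3D U-Net is given below. It is an illustrative reconstruction rather than the patent's reference implementation: the channel widths, the number of resolution levels, the dropout rate, and the hidden size of the first fully connected layer are assumptions; only the elements named in the description (two 3 × 3 × 3 convolutions with ReLU and dropout per level, 3 × 3 × 3 max pooling with stride 2, deconvolution up-sampling with skip concatenation, a five-class segmentation head, and a classification head with global average pooling and two fully connected layers) come from the text.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, p_drop=0.1):
    # Two 3x3x3 convolutions, each followed by ReLU and dropout, as in the description
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True), nn.Dropout3d(p_drop),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True), nn.Dropout3d(p_drop),
    )

class MultiTaskUNet3D(nn.Module):
    def __init__(self, in_ch=4, seg_classes=5, diag_classes=5, widths=(16, 32, 64, 128)):
        super().__init__()
        # Down-sampling path: conv block, then 3x3x3 max pooling with stride 2
        self.down_blocks = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.down_blocks.append(conv_block(ch, w))
            ch = w
        self.pool = nn.MaxPool3d(kernel_size=3, stride=2, padding=1)
        self.bottom = conv_block(widths[-1], widths[-1] * 2)
        # Up-sampling path: deconvolution, concatenation with the skip feature, conv block
        self.up_convs = nn.ModuleList()
        self.up_blocks = nn.ModuleList()
        ch = widths[-1] * 2
        for w in reversed(widths):
            self.up_convs.append(nn.ConvTranspose3d(ch, w, kernel_size=2, stride=2))
            self.up_blocks.append(conv_block(2 * w, w))
            ch = w
        # Two final 3x3x3 convolutions on the fused deep/shallow features
        self.fuse = conv_block(ch, ch)
        # Segmentation branch: one more convolution to seg_classes channels; softmax is applied in the loss
        self.seg_head = nn.Conv3d(ch, seg_classes, kernel_size=1)
        # Classification branch: convolution, global average pooling, two fully connected layers
        self.cls_conv = nn.Conv3d(ch, ch, kernel_size=3, padding=1)
        self.cls_head = nn.Sequential(nn.Linear(ch, 64), nn.ReLU(inplace=True), nn.Linear(64, diag_classes))

    def forward(self, x):
        skips = []
        for block in self.down_blocks:
            x = block(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottom(x)
        for up, block, skip in zip(self.up_convs, self.up_blocks, reversed(skips)):
            x = up(x)
            x = block(torch.cat([x, skip], dim=1))
        x = self.fuse(x)
        seg_logits = self.seg_head(x)              # (B, 5, D, H, W)
        c = self.cls_conv(x)
        c = torch.mean(c, dim=(2, 3, 4))           # global average pooling over D, H, W
        diag_logits = self.cls_head(c)             # (B, diag_classes)
        return seg_logits, diag_logits
```

For an input of shape (1, 4, 64, 64, 64), this sketch returns a (1, 5, 64, 64, 64) segmentation map and a (1, 5) diagnosis logit vector, i.e., the two branches described above.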
The multi-task learning method is measured by the following loss functions, and the model's performance is optimized by the following training method.
1. Loss function
1) Selection of a loss function for brain tumor segmentation task model:
one challenge in medical image segmentation is the problem of class imbalance in the data, for example in brain tumor MRI images, the proportion of the entire data that is the target object to be segmented is particularly small, resulting in severe class imbalance. In this case, the training is hindered by using the traditional classification cross entropy loss function, and the Dice loss function can effectively deal with the class imbalance problem, so the invention adopts the loss function as the loss function of the segmentation model, which is specifically expressed as follows:
Loss_Dice = 1 - (2·Σ_i u_i v_i) / (Σ_i u_i + Σ_i v_i)

where u is the segmentation result output by the network, v is the label segmentation, and i runs over the pixels in the training block.
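As a minimal sketch (an assumption-laden reconstruction, not the patent's exact formulation: the smoothing constant and the averaging over classes are not specified in the text), the Dice loss above can be written in PyTorch as:

```python
import torch
import torch.nn.functional as F

def dice_loss(seg_logits, target, eps=1e-6):
    """Soft Dice loss for multi-class segmentation.

    seg_logits: (B, C, D, H, W) raw network outputs
    target    : (B, D, H, W) integer label map
    """
    probs = torch.softmax(seg_logits, dim=1)          # u: network segmentation output
    one_hot = F.one_hot(target, probs.shape[1])       # v: label segmentation
    one_hot = one_hot.permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                               # sum over the batch and all pixels i
    intersection = torch.sum(probs * one_hot, dims)
    denominator = torch.sum(probs, dims) + torch.sum(one_hot, dims)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice.mean()                          # average over classes
```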
2) Selection of loss function for brain tumor classification task:
the problem is a classification problem, and a cross entropy function widely used by the classification problem is adopted as a loss function of the module, which is specifically expressed as follows:
Loss_cross = -(1/N) Σ_{i=1..N} Σ_{j=1..K} y_{i,j} log(p_{i,j})

where N is the total number of samples, K is the total number of classes, y_{i,j} is the label value, and p_{i,j} is the predicted value.
2. The model training method comprises the following steps:
1) Either the Dice loss of the image segmentation model or the cross-entropy loss of the tumor classification model alone is set for back-propagation in iterative training; the overall loss function is then one of the two:

Loss_1 = Loss_Dice or Loss_1 = Loss_cross
2) The loss value of the image segmentation module and the loss value of the tumor classification model are combined in a certain proportion and then back-propagated for iterative training; that is, the total loss function is expressed as:
Loss_2 = Loss_Dice + α·Loss_cross
wherein α is an adjustable scaling factor.
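The two schemes can be combined in a single training step such as the sketch below (illustrative only: the optimizer, the value of α, the mode switch, and the dice_loss helper from the earlier sketch are assumptions, and batch tensors are assumed to be on the right device):

```python
import torch
import torch.nn as nn

def train_step(model, batch, optimizer, alpha=0.5, mode="joint"):
    """One iteration of the multi-task training described above.

    mode = "seg" or "cls": back-propagate only one task's loss (scheme 1).
    mode = "joint"       : combine the losses as Loss_2 = Loss_Dice + alpha * Loss_cross (scheme 2).
    """
    images, masks, diagnosis = batch["image"], batch["mask"], batch["diagnosis"]
    seg_logits, diag_logits = model(images)
    loss_dice = dice_loss(seg_logits, masks)                            # segmentation task loss
    loss_cross = nn.functional.cross_entropy(diag_logits, diagnosis)    # classification task loss
    if mode == "seg":
        loss = loss_dice
    elif mode == "cls":
        loss = loss_cross
    else:
        loss = loss_dice + alpha * loss_cross
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Calling train_step(model, batch, optimizer, mode="joint") corresponds to scheme 2, while mode="seg" or mode="cls" corresponds to scheme 1.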

Claims (5)

1. An attention-enhanced brain tumor auxiliary intelligent detection and identification method, characterized by comprising the following steps:
step one: establishing a three-dimensional convolutional network model of a U-Net-based multi-task neural network suitable for segmentation and diagnosis of brain glioma lesion regions in brain MRI images;
step two: setting a multi-task joint training objective;
step three: measuring the multi-task loss and optimizing the results;
step four: model training, result evaluation, and output.
2. The attention-enhanced brain tumor auxiliary intelligent detection and identification method according to claim 1, characterized in that the step of establishing a three-dimensional convolutional network model of a U-Net-based multi-task neural network suitable for segmentation and diagnosis of brain glioma lesion regions in brain MRI images comprises the following steps:
building, on the basis of the original U-Net network and using three-dimensional convolution, a three-dimensional model that takes the tumor region as its attention region. The framework of the model comprises a down-sampling data path and an up-sampling data path. Each layer on the down-sampling path contains two 3 × 3 × 3 convolutional layers; each convolutional layer uses dropout to prevent overfitting and the ReLU activation function. After the two convolutional layers, a max-pooling layer with stride 2 and size 3 × 3 × 3 performs the down-sampling operation. Between two layers of the up-sampling data path, up-sampling is performed by deconvolution, the up-sampled features are concatenated with the features of the corresponding down-sampling layer, and the concatenated features undergo two convolution operations identical to those of the down-sampling layers. After the up-sampling path finally produces features that fuse deep and shallow information, two convolutions with kernel size 3 × 3 × 3 are applied, and the model then splits into two branches: one branch performs one more convolution, with output channels equal to the number of classes in the semantic segmentation result, and a softmax computation then yields an output containing background, edema, tumor parenchyma, necrosis, and enhancing core; the other branch performs one convolution, applies global average pooling, and is followed by two fully connected layers, the output of the second fully connected layer matching the number of pathological diagnosis classes;
all brain MRI images imported from cases are then traversed to compute mean and variance statistics, which are retained for the normalization operation during training and prediction; the input brain MRI sequences of the four modalities T1, contrast-enhanced T1, T2, and T2-FLAIR are received with each modality as a channel, all slice images of a scanning sequence and the segmentation label mask are stacked to form a three-dimensional image and a label sequence respectively, the two sequences are bound with the corresponding diagnosis result as one sample, and all images are processed in this way.
3. The attention-enhanced brain tumor auxiliary intelligent detection and identification method according to claim 2, characterized in that the step of setting the multi-task joint training objective comprises the following steps:
on the fully convolutional model, a classification branch is added after the shallow and deep information are fused, so that a single model performs both segmentation and classification and obtains semantic segmentation and classification results simultaneously; the segmentation and classification tasks of the brain tumor are therefore executed at the same time and share shallow features;
after the features of the deep and shallow information are fused, two convolutions with kernel size 3 × 3 × 3 are applied, and the model then splits into two branches: one branch performs one more convolution, with output channels equal to the number of classes in the semantic segmentation result, and a softmax computation then yields an output containing background, edema, tumor parenchyma, necrosis, and enhancing core; the other branch performs one convolution, applies global average pooling, and is followed by two fully connected layers, the output of the second fully connected layer matching the number of pathological diagnosis classes, which include the following categories: oligodendroglioma, anaplastic oligodendroglioma, astrocytoma, anaplastic astrocytoma, and glioblastoma.
4. The attention-enhanced brain tumor auxiliary intelligent detection and identification method according to claim 3, characterized in that the step of measuring the multi-task loss and optimizing the results uses a combined multi-task loss function to measure the segmentation and classification results and optimize them, wherein:
the loss function of the image segmentation model adopts a Dice loss function;
the loss function of the tumor classification module selects a cross-entropy function.
5. The attention-enhanced brain tumor auxiliary intelligent detection and identification method according to claim 4, characterized in that the model training, result evaluation, and output step comprises the following steps:
a model training step, in which either the Dice loss of the image segmentation model or the cross-entropy loss of the tumor classification model is selected for back-propagation in iterative training, or the loss value of the image segmentation model and the loss value of the tumor classification model are combined in a certain proportion and then back-propagated for iterative training;
after training converges and a satisfactory result is obtained, evaluating the effect on the test set;
and inputting a new case image to the evaluated model, and outputting a detection and identification result.
CN201911393654.4A | 2019-12-30 | 2019-12-30 | Attention-enhanced brain tumor auxiliary intelligent detection and identification method | Active | CN111047589B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911393654.4A (CN111047589B) | 2019-12-30 | 2019-12-30 | Attention-enhanced brain tumor auxiliary intelligent detection and identification method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201911393654.4A (CN111047589B) | 2019-12-30 | 2019-12-30 | Attention-enhanced brain tumor auxiliary intelligent detection and identification method

Publications (2)

Publication Number | Publication Date
CN111047589A (en) | 2020-04-21
CN111047589B (en) | 2022-07-26

Family

ID=70241643

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201911393654.4A (Active, CN111047589B) | Attention-enhanced brain tumor auxiliary intelligent detection and identification method | 2019-12-30 | 2019-12-30

Country Status (1)

Country | Link
CN (1) | CN111047589B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
US20190205606A1 (en)* | 2016-07-21 | 2019-07-04 | Siemens Healthcare GmbH | Method and system for artificial intelligence based medical image segmentation
CN109087318A (en)* | 2018-07-26 | 2018-12-25 | 东北大学 | A kind of MRI brain tumor image partition method based on optimization U-net network model
CN109191476A (en)* | 2018-09-10 | 2019-01-11 | 重庆邮电大学 | The automatic segmentation of Biomedical Image based on U-net network structure
CN109754404A (en)* | 2019-01-02 | 2019-05-14 | 清华大学深圳研究生院 | A kind of lesion segmentation approach end to end based on more attention mechanism
CN110120033A (en)* | 2019-04-12 | 2019-08-13 | 天津大学 | Based on improved U-Net neural network three-dimensional brain tumor image partition method
CN110298844A (en)* | 2019-06-17 | 2019-10-01 | 艾瑞迈迪科技石家庄有限公司 | X-ray contrastographic picture blood vessel segmentation and recognition methods and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TAN PAN et al.: "A Multi-Task Convolutional Neural Network for Renal Tumor Segmentation and Classification Using Multi-Phasic CT Images", 2019 IEEE International Conference on Image Processing *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN111667458B (en)* | 2020-04-30 | 2023-09-01 | 杭州深睿博联科技有限公司 | Early acute cerebral infarction detection method and device in flat scanning CT
CN111667458A (en)* | 2020-04-30 | 2020-09-15 | 杭州深睿博联科技有限公司 | Method and device for detecting early acute cerebral infarction in flat-scan CT
CN111968127B (en)* | 2020-07-06 | 2021-08-27 | 中国科学院计算技术研究所 | Cancer focus area identification method and system based on full-section pathological image
CN111968127A (en)* | 2020-07-06 | 2020-11-20 | 中国科学院计算技术研究所 | Cancer focus area identification method and system based on full-section pathological image
CN112085113A (en)* | 2020-09-14 | 2020-12-15 | 四川大学华西医院 | Severe tumor image recognition system and method
CN112733873A (en)* | 2020-09-23 | 2021-04-30 | 浙江大学山东工业技术研究院 | Chromosome karyotype graph classification method and device based on deep learning
CN112766333A (en)* | 2021-01-08 | 2021-05-07 | 广东中科天机医疗装备有限公司 | Medical image processing model training method, medical image processing method and device
CN112766333B (en)* | 2021-01-08 | 2022-09-23 | 广东中科天机医疗装备有限公司 | Medical image processing model training method, medical image processing method and device
CN112927240A (en)* | 2021-03-08 | 2021-06-08 | 重庆邮电大学 | CT image segmentation method based on improved AU-Net network
CN112927240B (en)* | 2021-03-08 | 2022-04-05 | 重庆邮电大学 | A CT Image Segmentation Method Based on Improved AU-Net Network
CN113112465A (en)* | 2021-03-31 | 2021-07-13 | 上海深至信息科技有限公司 | System and method for generating carotid intima-media segmentation model
CN113223014A (en)* | 2021-05-08 | 2021-08-06 | 中国科学院自动化研究所 | Brain image analysis system, method and equipment based on data enhancement
CN113223704A (en)* | 2021-05-20 | 2021-08-06 | 吉林大学 | Auxiliary diagnosis method for computed tomography aortic aneurysm based on deep learning
CN113223704B (en)* | 2021-05-20 | 2022-07-26 | 吉林大学 | Auxiliary diagnosis method for computed tomography aortic aneurysm based on deep learning
CN113516671A (en)* | 2021-08-06 | 2021-10-19 | 重庆邮电大学 | A brain tissue segmentation method for infants and young children based on U-net and attention mechanism
CN113516671B (en)* | 2021-08-06 | 2022-07-01 | 重庆邮电大学 | An image segmentation method of infant brain tissue based on U-net and attention mechanism
CN114511738A (en)* | 2022-01-25 | 2022-05-17 | 阿里巴巴(中国)有限公司 | Fundus lesion identification method, device, electronic device and readable storage medium
CN114947807A (en)* | 2022-05-06 | 2022-08-30 | 天津大学 | A multi-task prediction method for brain invasion classification and meningioma grade
CN115222007A (en)* | 2022-05-31 | 2022-10-21 | 复旦大学 | An improved particle swarm parameter optimization method for glioma multi-task integrated network
CN115393293A (en)* | 2022-08-12 | 2022-11-25 | 西南大学 | Segmentation and localization of electron microscope red blood cells based on UNet network and watershed algorithm
CN116645381A (en)* | 2023-06-26 | 2023-08-25 | 海南大学 | Brain tumor MRI image segmentation method, system, electronic equipment and storage medium
CN117726624A (en)* | 2024-02-07 | 2024-03-19 | 北京长木谷医疗科技股份有限公司 | A method and device for intelligent identification and evaluation of adenoid lesions in real-time under video streaming
CN117726624B (en)* | 2024-02-07 | 2024-05-28 | 北京长木谷医疗科技股份有限公司 | A method and device for intelligently identifying and evaluating adenoid lesions in real time under video streaming

Also Published As

Publication Number | Publication Date
CN111047589B (en) | 2022-07-26


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
