CN116758039B - Method for processing prostate cancer in multiparameter magnetic resonance image and related equipment - Google Patents

Method for processing prostate cancer in multiparameter magnetic resonance image and related equipment

Info

Publication number
CN116758039B
CN116758039B (application CN202310749057.0A)
Authority
CN
China
Prior art keywords
magnetic resonance
image sequence
resonance image
prostate cancer
multiparameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310749057.0A
Other languages
Chinese (zh)
Other versions
CN116758039A (en)
Inventor
陶杰
李文豪
魏强
郑博文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202310749057.0A
Publication of CN116758039A
Application granted
Publication of CN116758039B
Legal status: Active (current)
Anticipated expiration


Abstract

A method and related equipment for processing prostate cancer in multiparameter magnetic resonance images. To address the difficulty of extracting and fusing the features of the different image sequences in a multiparameter magnetic resonance image sequence, the method crops, intensity-normalizes and registers the images of the different sequences; extracts feature maps at several levels from each sequence with a convolutional attention module; fuses the extracted feature maps and the segmentation masks with an attention mechanism, so that the preset feature extraction network model automatically learns suitable feature extraction and feature fusion parameters during training; and completes the detection, segmentation and classification tasks simultaneously with a Retina U-Net architecture.

Description

Method for processing prostate cancer in multiparameter magnetic resonance image and related equipment
Technical Field
The invention relates to the field of medical image segmentation, in particular to a method for processing prostate cancer in a multiparameter magnetic resonance image and related equipment.
Background
Prostate cancer (PCa) is a malignancy that occurs in the prostate of men. MRI can provide images with different contrasts (i.e. modalities) and is a non-invasive imaging technique with good soft-tissue contrast. MRI images provide information on the shape, size and location of organs and lesions, and play a key role in disease analysis and diagnosis. The Prostate Imaging-Reporting and Data System (PI-RADS) is a structured reporting scheme for multiparameter prostate MRI used to assess suspected prostate cancer in the untreated prostate.
Compared with a single-mode medical image, the multi-mode medical image can provide more information of a focus area and surrounding areas from multiple layers, and focus characteristics are displayed from different angles, so that the multi-mode medical image is an important means for tumor diagnosis of patients in recent years. However, detection, segmentation and diagnosis of lesions in multi-modal medical images have high technical requirements on doctors, and are time-consuming and labor-consuming. The focus in the multi-mode medical image is automatically detected, segmented and diagnosed by using the deep learning technology, so that the workload of doctors can be reduced, the diagnosis speed is accelerated, and the method is a research hotspot in the field of medical image segmentation in recent years. However, the prior art has difficulty in extracting and fusing the characteristics of the multi-mode medical image and lacks a model capable of simultaneously executing detection, segmentation and diagnosis.
Therefore, a new medical image processing method is needed to perform full-automatic detection, segmentation and classification of lesions, which provides assistance for doctors.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a processing method and related equipment for extracting and fusing the multi-mode medical image characteristics with high efficiency and simultaneously carrying out full-automatic detection, segmentation and classification on prostate cancer in a multi-parameter magnetic resonance image.
In a first aspect, the present invention provides a method for processing prostate cancer in a multiparameter magnetic resonance image, comprising the steps of:
acquiring a multi-parameter magnetic resonance image sequence containing a prostate region, wherein the multi-parameter magnetic resonance image sequence comprises an apparent diffusion coefficient image sequence, a diffusion weighting image sequence and a T2 weighting image sequence;
preprocessing the multi-parameter magnetic resonance image sequence, wherein the preprocessing comprises cutting images of different image sequences in the multi-parameter magnetic resonance image sequence into the same size, performing intensity normalization, and registering the apparent diffusion coefficient image sequence and the diffusion weighting image sequence into the T2 weighting image sequence;
Respectively extracting different levels of feature graphs in the apparent diffusion coefficient image sequence, the diffusion weighted image sequence and the T2 weighted image sequence based on a preset feature extraction network model to respectively obtain an apparent diffusion coefficient feature graph, a diffusion weighted feature graph and a T2 weighted feature graph;
Taking the T2 weighted image sequence as the input of the preset feature extraction network model, and processing to obtain a prostate segmentation mask, a central zone gland segmentation mask and a peripheral zone segmentation mask; the apparent diffusion coefficient feature map, the diffusion weighted feature map, the T2 weighted feature map, the prostate segmentation mask, the central zone gland segmentation mask and the peripheral zone segmentation mask are connected in series to form a series feature map;
Processing the series characteristic images through an attention mechanism to obtain a fusion characteristic image;
and simultaneously carrying out evaluation classification, lesion detection and lesion segmentation on the fusion feature map through a preset detection network architecture to obtain a final detection result.
Preferably, the preset feature extraction network model comprises a convolution attention module, a first semantic segmentation network and a second semantic segmentation network, and the convolution attention module comprises a channel attention module and a space attention module.
Preferably, the convolution attention module satisfies the following relation:
F′ = M_C(F) ⊗ F
F″ = M_S(F′) ⊗ F′
where ⊗ denotes element-wise multiplication, F″ is the final refined output, M_C is the channel attention, M_S is the spatial attention, F is the input of the channel attention, and F′ is the output of the channel attention.
Preferably, the channel attention module satisfies the following relation:
M_C(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F))) = σ(W₁(W₀(F_avg^c)) + W₁(W₀(F_max^c)))
where σ is the sigmoid activation function, W₀ is a C/r × C matrix, W₁ is a C × C/r matrix, AvgPool is the average pooling layer, MaxPool is the max pooling layer, MLP is the multi-layer perceptron, F_avg^c is the result of AvgPool(F), and F_max^c is the result of MaxPool(F).
Preferably, the spatial attention module satisfies the following relation:
M_S(F) = σ(f^{7×7}([AvgPool(F); MaxPool(F)])) = σ(f^{7×7}([F_avg^s; F_max^s]))
where f^{7×7} is a convolution with a kernel size of 7 × 7 and [;] denotes channel-wise concatenation.
Preferably, processing the T2 weighted image sequence as the input of the preset feature extraction network model to obtain the prostate segmentation mask, the central zone gland segmentation mask and the peripheral zone segmentation mask includes:
taking the T2 weighted image sequence as the input of the first semantic segmentation network in the preset feature extraction network model to obtain the prostate segmentation mask;
taking the T2 weighted image sequence and the prostate segmentation mask as the input of the second semantic segmentation network in the preset feature extraction network model to obtain the central zone gland segmentation mask;
calculating the peripheral zone segmentation mask by subtracting the central zone gland segmentation mask from the prostate segmentation mask.
Preferably, the attention mechanism employs ECANet.
Preferably, the preset detection network architecture is Retina U-Net.
In a second aspect, the present invention also provides a computer device comprising a memory, a processor and a processing program stored on the memory and executable on the processor for processing prostate cancer in a multiparameter magnetic resonance image, wherein the processor, when executing the processing program for processing prostate cancer in the multiparameter magnetic resonance image, implements the steps of the processing method for prostate cancer in a multiparameter magnetic resonance image according to any of the embodiments above.
In a third aspect, the present invention also provides a computer readable storage medium, on which a processing program for prostate cancer in a multiparameter magnetic resonance image is stored, which when executed by a processor, implements the steps of the method for processing prostate cancer in a multiparameter magnetic resonance image according to any one of the embodiments above.
Compared with the prior art, the method and related equipment for processing prostate cancer in a multiparameter magnetic resonance image crop, intensity-normalize and register the images of the different sequences in the multiparameter magnetic resonance image sequence; extract feature maps at several levels from each sequence with a convolutional attention module; fuse the extracted feature maps and the segmentation masks with an ECANet attention mechanism, feeding the prostate region, central zone gland and peripheral zone segmentation masks into the feature fusion module, which improves the subsequent classification, detection and segmentation of prostate cancer; realize feature extraction and feature fusion through attention mechanisms, so that the preset feature extraction network model automatically learns suitable feature extraction and fusion parameters during training; and complete the fully automatic detection, segmentation and classification tasks simultaneously with a Retina U-Net architecture, which improves inference speed because the tasks share one architecture and complement each other through the shared information.
Drawings
The present invention will be described in detail with reference to the accompanying drawings. The foregoing and other aspects of the invention will become more apparent and more readily appreciated from the following detailed description taken in conjunction with the accompanying drawings. In the accompanying drawings:
FIG. 1 is a block flow diagram of a method for processing prostate cancer in a multiparameter magnetic resonance image according to an embodiment of the present invention;
FIG. 2 is a flowchart of the technical scheme of a method for processing prostate cancer in a multiparameter magnetic resonance image according to an embodiment of the present invention;
FIG. 3 is a block diagram of the convolutional attention module of a method for processing prostate cancer in a multiparameter magnetic resonance image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the channel attention module of a method for processing prostate cancer in a multiparameter magnetic resonance image according to an embodiment of the present invention;
FIG. 5 is a block diagram of the spatial attention module of a method for processing prostate cancer in a multiparameter magnetic resonance image according to an embodiment of the present invention;
FIG. 6 is an ECANet block diagram of a method for processing prostate cancer in a multiparameter magnetic resonance image according to an embodiment of the present invention;
FIG. 7 is a Retina U-Net block diagram of a method for processing prostate cancer in a multiparameter magnetic resonance image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a computer device for processing prostate cancer in a multiparameter magnetic resonance image according to an embodiment of the present invention.
Detailed Description
The detailed description/examples set forth herein are specific embodiments of the application and are intended to be illustrative and exemplary of the concepts of the application and are not to be construed as limiting the scope of the application. In addition to the embodiments described herein, those skilled in the art will be able to adopt other obvious solutions based on the disclosure of the claims and specification, including any obvious alterations and modifications to the embodiments described herein, all within the scope of the present application.
The following describes in detail the embodiments of the present invention with reference to the drawings.
Example one
Referring to fig. 1-7, the present invention provides a method for processing prostate cancer in a multiparameter magnetic resonance image, comprising the following steps:
S101, acquiring a multi-parameter magnetic resonance image sequence containing a prostate region, wherein the multi-parameter magnetic resonance image sequence comprises an apparent diffusion coefficient image sequence, a diffusion weighting image sequence and a T2 weighting image sequence.
In an embodiment of the invention, the multiparameter magnetic resonance image mpMRI (Multiparametric Magnetic Resonance Imaging) sequence includes an apparent diffusion coefficient ADC (Apparent Diffusion Coefficient) image sequence, a diffusion-weighted DWI (Diffusion Weighted Imaging) image sequence and a T2-weighted T2W (T2 Weighted Image) image sequence. Specifically, the ADC sequence describes the speed and range of molecular diffusion motion in different directions in the diffusion-weighted image sequence; the DWI sequence reflects the diffusion motion of water molecules in tissues and lesions and the degree to which that motion is restricted; and the T2W sequence clearly shows the position and size of a lesion.
S102, preprocessing the multi-parameter magnetic resonance image sequence, wherein the preprocessing comprises cutting images of different image sequences in the multi-parameter magnetic resonance image sequence into the same size, performing intensity normalization, and registering the apparent diffusion coefficient image sequence and the diffusion weighting image sequence into the T2 weighting image sequence;
In the embodiment of the invention, all images in the multiparameter magnetic resonance image sequence are cropped to a periprostatic region of 160 × 160 × 24 voxels with a spacing of (0.5, 0.5, 3) mm, where all image interpolation uses third-order B-spline interpolation; the intensity of each channel of the cropped image is normalized, and the apparent diffusion coefficient image sequence and the diffusion-weighted image sequence are registered to the T2 weighted image sequence. Specifically, non-rigid registration (based on a B-spline transformation) is performed between the spatial gradient of the T2 weighted image sequence and the apparent diffusion coefficient image sequence using the Python library SimpleITK, with Mattes mutual information as the loss function and gradient descent as the optimizer of the B-spline parameters.
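The cropping and per-channel intensity normalization of step S102 can be sketched in a few lines of numpy; the registration step itself relies on SimpleITK and is omitted here. The volume shape and function names below are illustrative, not taken from the patent.

```python
import numpy as np

def crop_center(vol, out_shape):
    """Crop a 3-D volume to out_shape around its center, one axis at a time."""
    slices = []
    for dim, out in zip(vol.shape, out_shape):
        start = max((dim - out) // 2, 0)
        slices.append(slice(start, start + out))
    return vol[tuple(slices)]

def zscore_normalize(vol, eps=1e-8):
    """Intensity normalization of one channel to zero mean and unit variance."""
    return (vol - vol.mean()) / (vol.std() + eps)

# Hypothetical raw T2W volume; the patent crops to a 160 x 160 x 24 voxel region.
raw = np.random.rand(256, 256, 32).astype(np.float32)
patch = zscore_normalize(crop_center(raw, (160, 160, 24)))
```

After this preprocessing every sequence shares the same grid, so the later channel-wise concatenation of feature maps and masks is well defined.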
S103, respectively extracting different levels of feature images in the apparent diffusion coefficient image sequence, the diffusion weighted image sequence and the T2 weighted image sequence based on a preset feature extraction network model to respectively obtain an apparent diffusion coefficient feature image, a diffusion weighted feature image and a T2 weighted feature image;
In the embodiment of the invention, a lightweight convolutional attention module CBAM (Convolutional Block Attention Module) is adopted as the preset feature extraction network model. The convolutional attention module CBAM includes two sub-modules, a channel attention module CAM (Channel Attention Module) and a spatial attention module SAM (Spatial Attention Module), which apply channel and spatial attention mechanisms respectively. The input features first pass through the channel attention module to obtain a weighted result, and then through the spatial attention module to obtain the final weighted result. The overall attention mechanism can be summarized as:
F′ = M_C(F) ⊗ F
F″ = M_S(F′) ⊗ F′
where ⊗ denotes element-wise multiplication, F″ is the final refined output, M_C is the channel attention, M_S is the spatial attention, F is the input of the channel attention, and F′ is the output of the channel attention.
The channel attention module attends to meaningful information in the input features. The input feature map is reduced from a size of C × H × W to C × 1 × 1 through two parallel max pooling and average pooling layers, and then passes through a shared MLP module, in which the channel number is first compressed to 1/r of the original (r is the reduction rate) and then expanded back, yielding two results activated by a ReLU activation function. The two outputs are added element-wise and passed through a sigmoid activation function to obtain the output of the channel attention module, which is multiplied with the original feature map, restoring the C × H × W size. The channel attention module satisfies the following relationship:
M_C(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F))) = σ(W₁(W₀(F_avg^c)) + W₁(W₀(F_max^c)))
where σ is the sigmoid activation function, W₀ is a C/r × C matrix, W₁ is a C × C/r matrix, AvgPool is the average pooling layer, MaxPool is the max pooling layer, MLP is the multi-layer perceptron, F_avg^c is the result of AvgPool(F), and F_max^c is the result of MaxPool(F).
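The channel attention relation above can be written out as a minimal numpy sketch (no learned weights; W0 and W1 are random stand-ins for the shared MLP, and the function name is illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(F, W0, W1):
    """CBAM channel attention with a shared two-layer MLP.

    F  : input feature map, shape (C, H, W)
    W0 : (C//r, C) compression matrix, W1 : (C, C//r) expansion matrix
    Returns the channel-reweighted feature map F' = M_C(F) * F.
    """
    avg = F.mean(axis=(1, 2))                                  # AvgPool(F) -> (C,)
    mx = F.max(axis=(1, 2))                                    # MaxPool(F) -> (C,)
    relu = lambda z: np.maximum(z, 0.0)
    mc = sigmoid(W1 @ relu(W0 @ avg) + W1 @ relu(W0 @ mx))     # M_C(F), shape (C,)
    return F * mc[:, None, None]                               # broadcast over H, W

rng = np.random.default_rng(0)
C, r = 8, 4
F = rng.standard_normal((C, 6, 6))
W0 = rng.standard_normal((C // r, C))
W1 = rng.standard_normal((C, C // r))
Fp = channel_attention(F, W0, W1)
```

Because the sigmoid keeps every channel weight in (0, 1), the output is a per-channel attenuation of the input, matching the element-wise multiplication in the relation.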
The spatial attention module focuses on the position information of the target. The output of the channel attention module is max-pooled and average-pooled along the channel axis to obtain two 1 × H × W feature maps, which are spliced by a Concat operation, reduced to a single-channel feature map by a 7 × 7 convolution, and passed through a sigmoid function to obtain the spatial attention map; finally the result is multiplied with the original feature map, restoring the C × H × W size, to obtain the apparent diffusion coefficient feature map, the diffusion-weighted feature map and the T2-weighted feature map. The spatial attention module satisfies the following relationship:
M_S(F) = σ(f^{7×7}([AvgPool(F); MaxPool(F)])) = σ(f^{7×7}([F_avg^s; F_max^s]))
where f^{7×7} is a convolution with a kernel size of 7 × 7 and [;] denotes channel-wise concatenation.
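A corresponding numpy sketch of the spatial attention relation, with a naive "same"-padded convolution standing in for the learned 7 × 7 convolution (kernel weights are random placeholders, not learned parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D convolution. x: (Cin, H, W), k: (Cin, kh, kw) -> (H, W)."""
    cin, H, W = x.shape
    _, kh, kw = k.shape
    xp = np.pad(x, ((0, 0), (kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[:, i:i + kh, j:j + kw] * k)
    return out

def spatial_attention(F, kernel):
    """CBAM spatial attention: concat channel-wise avg/max maps, 7x7 conv, sigmoid."""
    avg = F.mean(axis=0, keepdims=True)                 # (1, H, W)
    mx = F.max(axis=0, keepdims=True)                   # (1, H, W)
    ms = sigmoid(conv2d_same(np.concatenate([avg, mx]), kernel))  # M_S(F), (H, W)
    return F * ms[None, :, :]                           # broadcast over channels

rng = np.random.default_rng(1)
F = rng.standard_normal((8, 10, 10))
kernel = rng.standard_normal((2, 7, 7)) * 0.1           # stand-in for learned weights
out = spatial_attention(F, kernel)
```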
S104, taking the T2 weighted image sequence as the input of the preset feature extraction network model, and processing to obtain a prostate segmentation mask, a central zone gland segmentation mask and a peripheral zone segmentation mask, wherein the apparent diffusion coefficient feature map, the diffusion weighted feature map, the T2 weighted feature map, the prostate segmentation mask, the central zone gland segmentation mask and the peripheral zone segmentation mask are connected in series to form a series feature map;
In the embodiment of the invention, step S103 extracts feature maps at different levels from the image sequences. The T2 weighted image sequence is used as the input of the pre-trained first semantic segmentation network U-Net in the preset feature extraction network model to obtain a prostate segmentation mask; the T2 weighted image sequence and the prostate segmentation mask are used as the input of the pre-trained second semantic segmentation network U-Net in the preset feature extraction network model to obtain a central zone gland segmentation mask (CG segmentation mask); and the central zone gland segmentation mask is subtracted from the prostate segmentation mask to obtain a peripheral zone segmentation mask (PZ segmentation mask). Finally, the apparent diffusion coefficient feature map, the diffusion-weighted feature map, the T2-weighted feature map, the prostate segmentation mask, the central zone gland segmentation mask and the peripheral zone segmentation mask are connected in series to form a series feature map. Connecting the prostate, central zone gland and peripheral zone segmentation masks in series improves the prostate cancer classification, detection and segmentation capability of the subsequent steps.
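The mask arithmetic and the series connection of step S104 can be illustrated on toy arrays; the mask geometries and the 4-channel feature maps below are invented for the example, and the "series feature map" is simply a channel-wise concatenation:

```python
import numpy as np

# Toy masks: the peripheral-zone mask is the prostate mask minus the central-zone mask.
prostate = np.zeros((8, 8), dtype=np.uint8)
prostate[1:7, 1:7] = 1
central = np.zeros((8, 8), dtype=np.uint8)
central[3:5, 3:5] = 1
peripheral = np.clip(prostate.astype(np.int16) - central, 0, 1).astype(np.uint8)

# Channel-wise series connection of the three feature maps and the three masks.
rng = np.random.default_rng(2)
adc_f, dwi_f, t2w_f = (rng.standard_normal((4, 8, 8)) for _ in range(3))
masks = [m[None].astype(np.float32) for m in (prostate, central, peripheral)]
series = np.concatenate([adc_f, dwi_f, t2w_f] + masks, axis=0)  # (4+4+4+3, 8, 8)
```

The clipping guards against negative values in case the central-zone mask ever leaks outside the prostate mask.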
S105, processing the series feature images through an attention mechanism to obtain a fusion feature image;
In an embodiment of the invention, the attention mechanism employs ECANet (Efficient Channel Attention). ECANet is a channel attention mechanism: the input feature map is globally average-pooled, changing its size from C × H × W to C × 1 × 1; an adaptive one-dimensional convolution kernel size is computed and applied in a one-dimensional convolution to obtain the weight of each channel of the feature map; and the normalized weights are multiplied channel by channel with the original input feature map to generate the weighted fusion feature map.
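A minimal numpy sketch of the ECANet step: global average pooling, an adaptively sized 1-D convolution over the channel descriptor, sigmoid, and channel-wise reweighting. The kernel-size formula follows the common ECA-Net convention k = |log2(C)/γ + b/γ| rounded to the nearest odd integer (γ = 2, b = 1), and the convolution weights are fixed placeholders for what would be learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def eca(F, gamma=2, b=1):
    """ECA-Net sketch: 1-D conv over the pooled channel descriptor.

    F: input feature map, shape (C, H, W). Returns a channel-reweighted map.
    """
    C = F.shape[0]
    t = int(abs((np.log2(C) + b) / gamma))
    k = t if t % 2 else t + 1                      # kernel size must be odd
    y = F.mean(axis=(1, 2))                        # global average pooling -> (C,)
    yp = np.pad(y, k // 2, mode='edge')            # pad so output length stays C
    w = np.full(k, 1.0 / k)                        # illustrative fixed conv weights
    scores = np.array([yp[i:i + k] @ w for i in range(C)])
    return F * sigmoid(scores)[:, None, None]      # channel-by-channel weighting

rng = np.random.default_rng(3)
F = rng.standard_normal((8, 5, 5))
fused = eca(F)
```

Unlike the CBAM channel attention above, ECA avoids the dimensionality-reducing MLP entirely: each channel weight depends only on its k neighboring channel descriptors.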
And S106, carrying out evaluation classification, lesion detection and lesion segmentation on the fusion feature map through a preset detection network architecture to obtain a final detection result.
In an embodiment of the present invention, the assessment classification employs the Prostate Imaging-Reporting and Data System (PI-RADS), a structured reporting scheme for assessing suspected prostate cancer in the untreated prostate. The preset detection network architecture is Retina U-Net, which combines the RetinaNet detector with a U-Net segmentation network. RetinaNet is a simple one-stage detection network based on an FPN. As shown in fig. 7, two subnetworks perform classification and bounding box regression at pyramid levels P3-P6, respectively, where pyramid level Pj denotes the feature map of the j-th decoder level and j increases as resolution decreases. Because medical images contain small objects, the Retina U-Net architecture shifts the pyramid levels on which the subnetworks operate to P2-P5. In addition, Retina U-Net adds two high-resolution pyramid levels to the FPN, creating a final segmentation layer and making the expanded FPN architecture very similar to U-Net. Thus, lesion segmentation is obtained independently of detection, which greatly simplifies the structure.
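The multi-task idea of step S106 — one shared feature map feeding separate classification, box-regression and segmentation heads — can be sketched with stand-in 1 × 1 heads. This is not the Retina U-Net implementation: the head function, the channel counts (5 PI-RADS classes, 4 box deltas, 2 segmentation classes) and the fused-map shape are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def head_1x1(x, out_ch):
    """Stand-in for a 1x1 conv head: pure channel mixing, spatial dims kept."""
    W = rng.standard_normal((out_ch, x.shape[0]))
    return np.tensordot(W, x, axes=1)              # (out_ch, H, W)

fused = rng.standard_normal((15, 32, 32))          # fused feature map from S105
cls_logits = head_1x1(fused, 5)                    # PI-RADS assessment classification
box_deltas = head_1x1(fused, 4)                    # lesion bounding-box regression
seg_logits = head_1x1(fused, 2)                    # lesion / background segmentation
```

All three heads read the same tensor, which is the sense in which the tasks "share one architecture" and benefit from each other's supervision signal during training.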
Compared with the prior art, the method and related equipment for processing prostate cancer in a multiparameter magnetic resonance image crop, intensity-normalize and register the images of the different sequences in the multiparameter magnetic resonance image sequence; extract feature maps at several levels from each sequence with a convolutional attention module; fuse the extracted feature maps and the segmentation masks with an ECANet attention mechanism, feeding the prostate region, central zone gland and peripheral zone segmentation masks into the feature fusion module, which improves the subsequent classification, detection and segmentation of prostate cancer; realize feature extraction and feature fusion through attention mechanisms, so that the preset feature extraction network model automatically learns suitable feature extraction and fusion parameters during training; and complete the fully automatic detection, segmentation and classification tasks simultaneously with a Retina U-Net architecture, which improves inference speed because the tasks share one architecture and complement each other through the shared information.
Example two
Referring to fig. 8, fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present invention, where the computer device 200 includes a memory 202, a processor 201, and a computer program stored in the memory 202 and capable of running on the processor 201.
The processor 201 invokes the computer program stored in the memory 202 to execute the steps in the method for processing prostate cancer in a multiparameter magnetic resonance image provided by the embodiment of the present invention, please refer to fig. 1, specifically including the following steps:
s101, acquiring a multi-parameter magnetic resonance image sequence containing a prostate region, wherein the multi-parameter magnetic resonance image sequence comprises an apparent diffusion coefficient image sequence, a diffusion weighted image sequence and a T2 weighted image sequence;
S102, preprocessing the multi-parameter magnetic resonance image sequence, wherein the preprocessing comprises cutting images in different image sequences in the multi-parameter magnetic resonance image sequence into the same size, performing intensity normalization, and registering the apparent diffusion coefficient image sequence and the diffusion weighting image sequence into the T2 weighting image sequence;
S103, respectively extracting different levels of feature images in the apparent diffusion coefficient image sequence, the diffusion weighted image sequence and the T2 weighted image sequence based on a preset feature extraction network model to respectively obtain an apparent diffusion coefficient feature image, a diffusion weighted feature image and a T2 weighted feature image;
S104, taking the T2 weighted image sequence as the input of the preset feature extraction network model, processing to obtain a prostate segmentation mask, a central zone gland segmentation mask and a peripheral zone segmentation mask, and connecting the apparent diffusion coefficient feature map, the diffusion weighted feature map, the T2 weighted feature map, the prostate segmentation mask, the central zone gland segmentation mask and the peripheral zone segmentation mask in series to form a series feature map;
S105, processing the series feature images through an attention mechanism to obtain a fusion feature image;
And S106, carrying out evaluation classification, lesion detection and lesion segmentation on the fusion feature map through a preset detection network architecture to obtain a final detection result.
The computer device 200 provided in the embodiment of the present invention can implement the steps of the method for processing prostate cancer in a multiparameter magnetic resonance image in the above embodiment and achieve the same technical effects; for details, refer to the description in the above embodiment, which is not repeated here.
Example III
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a processing program of the prostate cancer in the multiparameter magnetic resonance image, and when the processing program of the prostate cancer in the multiparameter magnetic resonance image is executed by a processor, each process and steps in the processing method of the prostate cancer in the multiparameter magnetic resonance image provided by the embodiment of the invention are realized, and the same technical effects can be realized, so that repetition is avoided and no redundant description is provided here.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM) or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
While the embodiments of the present invention have been illustrated and described in connection with the drawings, what is presently considered to be the most practical and preferred embodiments of the invention, it is to be understood that the invention is not limited to the disclosed embodiments, but on the contrary, is intended to cover various equivalent modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

CN202310749057.0A — 2023-06-21 — Method for processing prostate cancer in multiparameter magnetic resonance image and related equipment — Active — CN116758039B (en)

Priority Applications (1)

Application Number — Priority Date — Filing Date — Title
CN202310749057.0A (CN116758039B, en) — 2023-06-21 — Method for processing prostate cancer in multiparameter magnetic resonance image and related equipment

Publications (2)

Publication Number · Publication Date
CN116758039A (en) · 2023-09-15
CN116758039B (en) · 2025-10-17


Citations (2)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
WO2020119679A1 (en)* · 2018-12-14 · 2020-06-18 · Shenzhen Institutes of Advanced Technology · Three-dimensional left atrium segmentation method and apparatus, terminal device, and storage medium
CN114022462A (en)* · 2021-11-10 · 2022-02-08 · East China University of Science and Technology · Method, system, device, processor and computer-readable storage medium for realizing lesion segmentation of multi-parameter nuclear magnetic resonance images


Similar Documents

Publication · Title
Saxena et al. · Predictive modeling of brain tumor: a deep learning approach
Yamuna et al. · Integrating AI for Improved Brain Tumor Detection and Classification
Chanu et al. · Retracted article: computer-aided detection of brain tumor from magnetic resonance images using deep learning network
Kanchanamala et al. · Optimization-enabled hybrid deep learning for brain tumor detection and classification from MRI
Punn et al. · Multi-modality encoded fusion with 3D inception U-net and decoder model for brain tumor segmentation
CN112102266A (en) · Attention mechanism-based cerebral infarction medical image classification model training method
Yue et al. · Retinal vessel segmentation using dense U-net with multiscale inputs
Arif et al. · [Retracted] Automated Detection of Nonmelanoma Skin Cancer Based on Deep Convolutional Neural Network
Raut et al. · Gastrointestinal tract disease segmentation and classification in wireless capsule endoscopy using intelligent deep learning model
Majji et al. · Social bat optimisation dependent deep stacked auto-encoder for skin cancer detection
CN118657800B (en) · Joint segmentation method of multiple lesions in retinal OCT images based on hybrid network
CN113409326B (en) · Image segmentation method and system
Chowdhury et al. · Leveraging deep neural networks to uncover unprecedented levels of precision in the diagnosis of hair and scalp disorders
Naveena et al. · DOTHE based image enhancement and segmentation using U-Net for effective prediction of human skin cancer
Nawaz et al. · MSeg-Net: A Melanoma Mole Segmentation Network Using CornerNet and Fuzzy K-Means Clustering
Mansour et al. · Kidney segmentations using CNN models
Gokapay et al. · Enhanced MRI-based brain tumor segmentation and feature extraction using Berkeley wavelet transform and ETCCNN
CN116758039B (en) · Method for processing prostate cancer in multiparameter magnetic resonance image and related equipment
PR et al. · Automated biomedical image classification using multi-scale dense dilated semi-supervised U-Net with CNN architecture
Butta et al. · Ensemble deep learning approach for early diagnosis of Alzheimer's disease
Naveena et al. · Effective skin cancer prediction using hybrid Xception and XGBoost approach tuned by honey badger optimization
Radhabai et al. · An effective no-reference image quality index prediction with a hybrid Artificial Intelligence approach for denoised MRI images
Neelima et al. · CAHO-DNFN: ME-Net-based segmentation and optimized deep neuro fuzzy network for brain tumour classification with MRI
Sumaiya et al. · Federated Learning Assisted Deep learning methods fostered Skin Cancer Detection: A Survey
CN116758039A (en) · Method for processing prostate cancer in multiparameter magnetic resonance image and related equipment

Legal Events

Date · Code · Title · Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
