CN119205549B - A medical image denoising model training method and device - Google Patents

A medical image denoising model training method and device

Info

Publication number
CN119205549B
CN119205549B (application CN202411308847.6A)
Authority
CN
China
Prior art keywords
image
feature map
hessian
loss
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411308847.6A
Other languages
Chinese (zh)
Other versions
CN119205549A (en)
Inventor
项磊
高婕
张志浩
宫恩浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhitou Medical Technology Development Shanghai Co ltd
Original Assignee
Shenzhitou Medical Technology Development Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhitou Medical Technology Development Shanghai Co ltd
Priority to CN202411308847.6A
Publication of CN119205549A (en)
Application granted
Publication of CN119205549B (en)
Legal status: Active
Anticipated expiration

Abstract


This application discloses a method and device for training a medical image denoising model. The method comprises obtaining a noisy image and a clean image and performing preprocessing to obtain an input image, a label image, and a mask image. The input image is input into an encoder for feature extraction to obtain an original feature map. The Hessian matrix and matrix eigenvalues of the original feature map are calculated. The Hessian response is calculated based on the matrix eigenvalues, and an edge feature map is extracted to obtain a deep Hessian attention feature map. The original feature map and the deep Hessian attention feature map are concatenated and input into a decoder for feature fusion to obtain an output image. The loss between the output image and the label image is calculated based on the mask image, and the weights are updated by backpropagation, thereby obtaining a trained medical image denoising model. This application facilitates tissue stratification and improves the quality of denoised images, making diagnostic results more accurate.

Description

Medical image denoising model training method and device
Technical Field
The application relates to the technical field of medical image processing, in particular to a medical image denoising model training method and device.
Background
Optical coherence tomography (OCT) is a low-coherence optical imaging modality widely used for clinical imaging in ophthalmology, dermatology, cardiology, and the gastrointestinal tract, owing to its non-invasive, non-radiative, high-resolution, real-time imaging characteristics. In ophthalmology in particular, OCT is an effective tool for diagnosing various ocular diseases such as retinal diseases and glaucoma. However, because OCT relies on low-coherence light scattering, inherent speckle noise reduces the signal-to-noise ratio, degrading image quality and hindering accurate diagnosis. Classical algorithms such as filtering and the non-local means method can effectively remove speckle noise from OCT images, but their high computational complexity makes them very time-consuming.
With the popularity of deep learning techniques in image processing, many classical convolutional neural network (CNN) architectures, such as U-Net and ResNet, have been applied to OCT image denoising in an effort to produce high-quality images. However, such methods often fail to make tissue boundary layering in OCT images evident. Other methods use GANs (Generative Adversarial Networks) to remove OCT speckle noise, for example the patents with application numbers 201910515611.2 and 202211149964.3; these suffer from high model training complexity, strong dependence on large amounts of data, and insufficient generalization across different types of OCT images. Still others denoise with three-dimensional convolutional neural networks, for example the patent with application number 202311293845.X; these consume substantial computational resources, require long training times, and may face memory bottlenecks when processing large-scale images, resulting in poor denoising performance.
Disclosure of Invention
Therefore, the application provides a medical image denoising model training method and device, which are used for solving the problems that the existing OCT image denoising method is easy to have unobvious tissue boundary layering and poor denoising performance.
In order to achieve the above object, the present application provides the following technical solutions:
in a first aspect, a medical image denoising model training method includes:
Step 1, acquiring medical images of the same part, wherein the medical images comprise noise images and clean images;
step 2, adjusting the sizes of the noise image and the clean image, and carrying out normalization processing to obtain an input image and a label image;
step 3, preparing the clean image as a mask image;
step 4, inputting the input image into an encoder of a U-Net network for feature extraction to obtain an original feature map;
Step 5, calculating a Hessian matrix of the original feature map, and calculating a matrix feature value of the Hessian matrix;
step 6, calculating Hessian response according to the matrix eigenvalues, and extracting an edge eigenvector to obtain a depth Hessian attention eigenvector;
Step 7, splicing the original feature map and the depth Hessian attention feature map, and inputting the spliced original feature map and the depth Hessian attention feature map into a decoder of the U-Net network for feature fusion to obtain an output image;
And 8, calculating the loss between the output image and the label image according to the mask image, and back-propagating the update weight so as to obtain a trained medical image denoising model.
Preferably, step 3 specifically includes creating an all-zero image of the same size as the clean image, generating a polygonal area within the target tissue area of the all-zero image, and filling it with 1-values to obtain the mask image.
Preferably, in step 4, the U-Net network is a ResUNet network, an Attention U-Net network, or a Mamba-UNet network.
Preferably, in step 5, the Hessian matrix is calculated as:

$$H_i=\begin{bmatrix}\dfrac{\partial^2 E_i}{\partial x^2} & \dfrac{\partial^2 E_i}{\partial x\,\partial y}\\[4pt]\dfrac{\partial^2 E_i}{\partial x\,\partial y} & \dfrac{\partial^2 E_i}{\partial y^2}\end{bmatrix}$$

where $H_i$ is the Hessian matrix, $\partial$ denotes partial differentiation, $x$ is the transverse coordinate, $y$ is the longitudinal coordinate, and $E_i$ is the original feature map; $\partial^2 E_i/\partial x^2$ is the second partial derivative of $E_i$ in the $x$ direction, $\partial^2 E_i/\partial y^2$ is the second partial derivative of $E_i$ in the $y$ direction, and $\partial^2 E_i/\partial x\,\partial y$ is the mixed partial derivative of $E_i$ in the $x$ and $y$ directions.
Preferably, in step 6, the Jerman, Frangi, or Erdt method is used to calculate the Hessian response.
Preferably, in step 7, when the original feature map and the depth Hessian attention feature map are concatenated, the ratio of the original feature map to the depth Hessian attention feature map is 1:2.
Preferably, in step 8, when the loss between the output image and the label image is calculated from the mask image, the loss function is any combination of a mean square error loss, an L1 loss, a PSNR loss, and an SSIM loss.
Preferably, when the loss function is a combination of the L1 loss and the SSIM loss, the loss function is calculated as:

$$f=\alpha\,\mathcal{L}_{SSIM}\!\left(I_{mask}\odot\hat{y},\,I_{mask}\odot S(b)\right)+\beta\,\mathcal{L}_{L1}\!\left(I_{mask}\odot\hat{y},\,I_{mask}\odot S(b)\right)$$

where $\hat{y}$ is the output image, $b$ is the label image, $\mathcal{L}_{SSIM}$ is the SSIM loss, $\mathcal{L}_{L1}$ is the L1 loss, $I_{mask}$ is the mask image, $\alpha$ and $\beta$ are weight coefficients, and $S(\cdot)$ is the sharpening enhancement using a 3×3 convolution kernel.
Preferably, in the step 8, the label image is a sharpened and enhanced label image.
In a second aspect, a medical image denoising model training apparatus includes:
the medical image acquisition module is used for acquiring medical images of the same part, wherein the medical images comprise noise images and clean images;
The data preprocessing module is used for adjusting the sizes of the noise image and the clean image and carrying out normalization processing to obtain an input image and a label image;
a mask image making module, configured to make the clean image into a mask image;
The original feature map extraction module is used for inputting the input image into an encoder of the U-Net network to perform feature extraction to obtain an original feature map;
the calculation module is used for calculating a Hessian matrix of the original feature map and calculating matrix feature values of the Hessian matrix;
the depth Hessian attention feature map extraction module is used for calculating Hessian response according to the matrix feature values and extracting an edge feature map to obtain a depth Hessian attention feature map;
The feature fusion module is used for splicing the original feature image and the depth Hessian attention feature image, inputting the spliced original feature image and the depth Hessian attention feature image into a decoder of the U-Net network for feature fusion, and obtaining an output image;
and the training module is used for calculating the loss between the output image and the label image according to the mask image and back-propagating the update weight so as to obtain a trained medical image denoising model.
Compared with the prior art, the application has at least the following beneficial effects:
The application provides a medical image denoising model training method and device. A noisy image and a clean image of the same body part are obtained and preprocessed to yield an input image, a label image, and a mask image. The input image is fed into the encoder of a U-Net network for feature extraction to obtain an original feature map; the Hessian matrix of the original feature map and its eigenvalues are calculated; the Hessian response is computed from the eigenvalues and an edge feature map is extracted to obtain a depth Hessian attention feature map. The original feature map and the depth Hessian attention feature map are concatenated and fed into the decoder of the U-Net network for feature fusion to obtain an output image. The loss between the output image and the label image is calculated according to the mask image, and the weights are updated by backpropagation, yielding a trained medical image denoising model. By incorporating the depth Hessian attention feature, the application strengthens the U-Net network's attention to structural details: when removing inherent speckle from a medical image and reconstructing its original tissue structure, the network attends more closely to tissue boundary information and texture details. This aids tissue layering and significantly improves the quality of the denoised medical image, making diagnostic results more accurate.
Drawings
In order to more intuitively illustrate the prior art and the application, exemplary drawings are presented below. It should be understood that the specific shapes and configurations shown in the drawings are not limiting conditions for implementing the present application; for example, those skilled in the art may, based on the technical concepts and exemplary drawings disclosed herein, make conventional adjustments or further optimizations, such as adding, removing, or regrouping certain units (components), or modifying specific shapes, positional relationships, connection manners, and dimensional proportions.
FIG. 1 is a flowchart of a training method for a denoising model of a medical image according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a data preprocessing structure according to a first embodiment of the present application;
Fig. 3 is a schematic diagram of a ResUNet network structure based on deep Hessian attention feature improvement according to an embodiment of the present application;
FIG. 4 is a schematic illustration of an improved deep Hessian attention feature supplemental connection configuration provided in accordance with an embodiment of the present application;
Fig. 5 is a schematic diagram of a model training structure according to a first embodiment of the present application.
Detailed Description
The application will be further described in detail by means of specific embodiments with reference to the accompanying drawings.
In the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more. The terms "first," "second," "third," and the like in this disclosure are intended to distinguish between the referenced objects without a special meaning in terms of technical connotation (e.g., should not be construed as emphasis on the degree of importance or order, etc.). The expressions "comprising", "including", "having", etc. also mean "not limited to" (certain units, components, materials, steps, etc.).
The terms such as "upper", "lower", "left", "right", "middle", and the like, as used herein, are generally used for the purpose of facilitating an intuitive understanding with reference to the drawings and are not intended to be an absolute limitation of the positional relationship in actual products.
The application provides a novel medical image denoising method: a medical image denoising method based on a neural network improved with depth Hessian attention features. The main idea is to enhance the U-Net network's attention to structural details by incorporating depth Hessian attention features, which emphasize boundary information in OCT images, thereby injecting visual attention into the network. In addition, to help restore more structural detail, the application introduces a mask loss to improve OCT image quality, especially in clinically significant areas. The mask is a binary image used to mark a region of interest, such as the retinal region in an ophthalmic image. By incorporating the mask into the loss function, the denoising process focuses on these critical areas, enhancing overall image quality and diagnostic accuracy. The method effectively addresses the challenges of maintaining tissue boundary integrity and enhancing texture features in OCT images, thereby significantly improving denoising performance and diagnostic accuracy.
The method also has good application prospects for other imaging modalities such as CT (Computed Tomography). Details of vascular structures in CT images are important information for diagnosing lung diseases, and the clarity of vessels during imaging directly influences diagnostic accuracy. However, noise and artifacts are often present in CT images and can degrade image quality. By introducing deep Hessian attention features, the method enhances the focus on fine structures in CT images, improving the visibility and resolution of these structures, particularly vessel edges and bifurcation points. Combined with the mask loss, the method can ensure that denoising concentrates on key areas such as the vessels in lung images, improving the image quality of those areas and thus diagnostic precision and reliability. This multi-modality image processing method not only improves the quality of OCT images but also has a marked effect on the clear display of pulmonary vascular structures in CT images, and its popularization can play an important role in different medical imaging fields.
Example 1
Referring to fig. 1, the embodiment provides a medical image denoising model training method, which includes:
S1, acquiring medical images of the same part, wherein the medical images comprise a noise image Inoisy and a clean image Iclean;
S2, adjusting the sizes of the noise image and the clean image, and carrying out normalization processing to obtain an input image and a label image;
Referring to fig. 2, the data preprocessing step normalizes the acquired noisy image Inoisy and clean image Iclean, adjusting their value ranges to between 0 and 1. The normalized noisy image Inoisy serves as the input image Iinput, and the normalized clean image Iclean serves as the label image Ilabel.
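As an illustration of this preprocessing step, here is a minimal numpy sketch of the min-max normalization (the function names are ours, not from the patent; the resizing step is omitted):

```python
import numpy as np

def normalize01(img):
    # Min-max normalization to [0, 1]; the epsilon guards against constant images.
    img = img.astype(np.float32)
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo + 1e-8)

def preprocess_pair(noisy, clean):
    # The normalized noisy image becomes the input Iinput,
    # the normalized clean image becomes the label Ilabel.
    return normalize01(noisy), normalize01(clean)
```

In practice both images would first be resized to the network's input resolution before normalization.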
S3, preparing a clean image as a mask image;
Specifically, for each clean image Iclean, this step creates an all-zero image of the same size as Iclean, generates a polygonal area in the target tissue area of that image, and fills it with 1-values to obtain the mask image Imask.
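A minimal sketch of the mask construction; for simplicity the polygon is replaced here by an axis-aligned rectangle, and all names are illustrative rather than taken from the patent:

```python
import numpy as np

def make_mask(shape, top, left, bottom, right):
    # Start from an all-0 image of the same size as the clean image,
    # then fill the target tissue region with 1-values.
    mask = np.zeros(shape, dtype=np.float32)
    mask[top:bottom, left:right] = 1.0
    return mask
```

A real implementation would rasterize an arbitrary polygon over the target tissue area (e.g. with a point-in-polygon fill) instead of a rectangle.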
S4, inputting the input image into an encoder of a U-Net network for feature extraction to obtain an original feature map;
Specifically, the U-Net network may be a ResUNet, Attention U-Net, or Mamba-UNet network; a ResUNet network is preferred. This embodiment is based on a ResUNet network in which the original skip connections are replaced with the improved depth Hessian attention feature supplementary connections, so the network comprises three parts: an encoder, a decoder, and the depth Hessian attention feature supplementary connections, where both the encoder and the decoder have residual structures, as shown in fig. 3.
Referring to fig. 4, assuming the encoder performs n downsampling operations, the feature map extracted from the input image Iinput at the i-th downsampling stage is Ei (i = 1, ..., n); i.e., Ei is the original feature map. The number of encoder downsampling operations can be increased or decreased; for example, four downsampling operations may be used.
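To make the bookkeeping concrete, here is a toy stand-in for the encoder's downsampling pyramid, using 2x2 average pooling in place of the residual convolution blocks (purely illustrative; the actual encoder is a ResUNet):

```python
import numpy as np

def downsample2x(x):
    # 2x2 average pooling halves each spatial dimension.
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    x = x[:h, :w]
    return 0.25 * (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2])

def encoder_features(img, n=4):
    # Returns the feature maps E_1 ... E_n, one per downsampling stage.
    feats, x = [], img
    for _ in range(n):
        x = downsample2x(x)
        feats.append(x)
    return feats
```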
S5, calculating a Hessian matrix of the original feature map, and calculating matrix feature values of the Hessian matrix;
Specifically, the Hessian matrix is calculated as:

$$H_i=\begin{bmatrix}\dfrac{\partial^2 E_i}{\partial x^2} & \dfrac{\partial^2 E_i}{\partial x\,\partial y}\\[4pt]\dfrac{\partial^2 E_i}{\partial x\,\partial y} & \dfrac{\partial^2 E_i}{\partial y^2}\end{bmatrix}$$

where $H_i$ is the Hessian matrix, $\partial$ denotes partial differentiation, $x$ is the transverse coordinate, $y$ is the longitudinal coordinate, and $E_i$ is the original feature map; $\partial^2 E_i/\partial x^2$ is the second partial derivative of $E_i$ in the $x$ direction, $\partial^2 E_i/\partial y^2$ is the second partial derivative of $E_i$ in the $y$ direction, and $\partial^2 E_i/\partial x\,\partial y$ is the mixed partial derivative of $E_i$ in the $x$ and $y$ directions.
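On a discrete feature map, the second partial derivatives can be approximated with finite differences, and the eigenvalues of the symmetric 2x2 Hessian have a closed form. A numpy sketch (our own helper, not the patent's code):

```python
import numpy as np

def hessian_eigenvalues(E):
    # np.gradient returns derivatives along axis 0 (y) and axis 1 (x).
    Ey, Ex = np.gradient(E)
    Eyy, Eyx = np.gradient(Ey)
    Exy, Exx = np.gradient(Ex)
    Exy = 0.5 * (Exy + Eyx)  # symmetrize the mixed partial derivative
    # Closed-form eigenvalues of [[Exx, Exy], [Exy, Eyy]] at every pixel.
    tr = Exx + Eyy
    disc = np.sqrt((Exx - Eyy) ** 2 + 4.0 * Exy ** 2)
    return 0.5 * (tr + disc), 0.5 * (tr - disc)
```

For the quadratic surface E(x, y) = x^2 + y^2, both eigenvalues equal 2 away from the borders, since central differences are exact for quadratics.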
S6, calculating Hessian response according to the matrix eigenvalues, and extracting an edge eigenvector to obtain a depth Hessian attention eigenvector;
Specifically, the step may calculate the Hessian response using the Jerman method, the Frangi method, or the Erdt method.
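As one concrete example of such a response, here is a simplified Frangi-style 2-D vesselness computed from the per-pixel eigenvalues (the parameters beta and c are illustrative defaults; the patent does not fix them):

```python
import numpy as np

def frangi_response(lam1, lam2, beta=0.5, c=15.0):
    # Order eigenvalues so that |l1| <= |l2| at every pixel.
    swap = np.abs(lam1) > np.abs(lam2)
    l1 = np.where(swap, lam2, lam1)
    l2 = np.where(swap, lam1, lam2)
    rb2 = (l1 / (np.abs(l2) + 1e-12)) ** 2   # blob-ness ratio R_B^2
    s2 = l1 ** 2 + l2 ** 2                   # second-order structureness S^2
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1.0 - np.exp(-s2 / (2 * c ** 2)))
    # Bright tubular structures correspond to a negative dominant eigenvalue.
    return np.where(l2 < 0, v, 0.0)
```

The Jerman and Erdt responses differ in how the eigenvalues are combined but follow the same pattern: a per-pixel score derived from the ordered Hessian eigenvalues.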
S7, splicing the original feature map and the depth Hessian attention feature map, and inputting the spliced original feature map and the depth Hessian attention feature map into a decoder of a U-Net network for feature fusion to obtain an output image;
Specifically, this step concatenates the original feature map Ei with the depth Hessian attention feature map (written here as Ai) to form the skip-connection input Fi = concat(Ei, Ai).
The feature map Dn-i produced by the decoder's (n-i)-th upsampling stage is concatenated with Fi and passed to the (n-i+1)-th upsampling stage, i.e., Dn-i+1 = Up(concat(Dn-i, Fi)).
The number of decoder upsampling operations can be increased or decreased accordingly to match the encoder.
When the original feature map and the extracted depth Hessian attention feature map are concatenated as the skip-connection input, the ratio between them can be varied; for example, the ratio of the original feature map to the depth Hessian attention feature map may be 1:2.
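Channel-wise, the 1:2 ratio simply means the attention branch contributes twice as many channels as the original feature map. A minimal numpy sketch (names are ours):

```python
import numpy as np

def splice(orig, hess_att):
    # orig: (C, H, W); hess_att: (2C, H, W), giving the 1:2 channel ratio.
    assert hess_att.shape[0] == 2 * orig.shape[0]
    return np.concatenate([orig, hess_att], axis=0)
```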
And S8, calculating the loss between the output image and the label image according to the mask image, and back-propagating the update weight so as to obtain a trained medical image denoising model.
Specifically, referring to fig. 5, during training, in order to increase the sharpness of the output image, the loss is calculated between the output image and the sharpening-enhanced label image, and the weights are updated by backpropagation. When calculating the loss between the output image and the label image according to the mask image, the loss function may be any combination of the mean square error loss, L1 loss, PSNR loss, and SSIM loss; other regularization or loss terms, such as a contrastive loss or a perceptual loss, can also be introduced to further improve the denoising effect and image quality.
When the loss function is a combination of the L1 loss and the SSIM loss, the loss function f is calculated as:

$$f=\alpha\,\mathcal{L}_{SSIM}\!\left(I_{mask}\odot\hat{y},\,I_{mask}\odot S(b)\right)+\beta\,\mathcal{L}_{L1}\!\left(I_{mask}\odot\hat{y},\,I_{mask}\odot S(b)\right)$$

where $\hat{y}$ is the output image Ioutput, $b$ is the label image Ilabel, $\mathcal{L}_{SSIM}$ is the SSIM loss, $\mathcal{L}_{L1}$ is the L1 loss, $I_{mask}$ is the mask image, and $\alpha$ and $\beta$ are weight coefficients with $\alpha+\beta=1$, $\alpha\ge 0$, $\beta\ge 0$.
The SSIM loss is calculated as:

$$\mathcal{L}_{SSIM}(\hat{y},b)=1-\frac{(2\mu_{\hat{y}}\mu_{b}+c_1)(2\sigma_{\hat{y}b}+c_2)}{(\mu_{\hat{y}}^{2}+\mu_{b}^{2}+c_1)(\sigma_{\hat{y}}^{2}+\sigma_{b}^{2}+c_2)}$$

and the L1 loss as:

$$\mathcal{L}_{L1}(\hat{y},b)=\frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_{i}-b_{i}\right|$$

where $S(\cdot)$ denotes the sharpening enhancement using the 3×3 convolution kernel [[0, -0.5, 0], [-0.5, 3, -0.5], [0, -0.5, 0]]; $\mu_{\hat{y}}$, $\mu_{b}$ and $\sigma_{\hat{y}}$, $\sigma_{b}$ are the means and standard deviations of $\hat{y}$ and $b$, $\sigma_{\hat{y}b}$ is their covariance, $c_1$ and $c_2$ are two constants, $n$ is the number of pixels in $\hat{y}$ and $b$, and $\hat{y}_{i}$ and $b_{i}$ are the $i$-th pixel values of $\hat{y}$ and $b$.

The sharpening convolution kernel [[0, -0.5, 0], [-0.5, 3, -0.5], [0, -0.5, 0]] may be replaced with other sharpening kernels, such as [[0, -1, 0], [-1, 4, -1], [0, -1, 0]].
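A numpy sketch of the masked L1 + SSIM loss with sharpening of the label. A single global SSIM window is used for brevity (practical SSIM is computed over local windows), and all function names are ours, not the patent's:

```python
import numpy as np

SHARPEN = np.array([[0, -0.5, 0], [-0.5, 3, -0.5], [0, -0.5, 0]], dtype=np.float32)

def sharpen(img):
    # 'Same'-size 2-D correlation with the 3x3 sharpening kernel, zero padding.
    # (The kernel is symmetric, so correlation equals convolution.)
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=np.float32)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += SHARPEN[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def ssim_global(a, b, c1=1e-4, c2=9e-4):
    # Global (single-window) SSIM; c1, c2 are the usual small stabilizers.
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def masked_loss(y_hat, label, mask, alpha=0.5, beta=0.5):
    # Compare the output against the sharpened label, restricted by the mask.
    a, b = y_hat * mask, sharpen(label) * mask
    l_ssim = 1.0 - ssim_global(a, b)
    l_l1 = np.abs(a - b).mean()
    return alpha * l_ssim + beta * l_l1
```

When the output exactly matches the sharpened label inside the mask, both terms vanish and the loss is zero, which is the behavior the training objective requires.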
When OCT image denoising is performed with the medical image denoising model trained in this embodiment, Iinput is fed into the trained ResUNet network improved with depth Hessian attention features (i.e., the medical image denoising model) to obtain Ioutput. The sharpening convolution kernel is then applied to enhance Ioutput, the value range of the result is adjusted to between 0 and 1, and the final result is obtained by inverse normalization.
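The inference-time post-processing described above amounts to clipping to [0, 1] followed by inverse min-max normalization; a small sketch with illustrative names:

```python
import numpy as np

def postprocess(y_sharp, lo, hi):
    # Adjust the value range of the sharpened network output to [0, 1] ...
    y = np.clip(y_sharp, 0.0, 1.0)
    # ... then invert the min-max normalization applied during preprocessing,
    # where lo and hi are the original image's minimum and maximum.
    return y * (hi - lo) + lo
```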
According to the medical image denoising model training method of this embodiment, incorporating the depth Hessian attention feature strengthens the U-Net network's attention to structural details. When removing the intrinsic speckle of an OCT image and reconstructing its original tissue structure, the network attends more closely to tissue boundary information and texture details, which aids tissue layering, significantly improves the quality of the denoised OCT image, raises the signal-to-noise ratio, and makes diagnostic results more accurate. The method also applies to other imaging modalities: for CT images, it can enhance the visibility of pulmonary vascular structures, making vessel edges and details more apparent and improving image quality and diagnostic accuracy.
Compared with the problem of long time consumption of the traditional denoising method (such as a filtering method and a non-local mean method), the embodiment utilizes a deep learning model to realize more efficient calculation. Even under the condition of large-scale data processing, the method can still quickly generate the denoising image with high quality, and has higher time efficiency. In addition, by introducing mask loss, the embodiment can concentrate on areas with important clinical significance, such as retina in OCT images and pulmonary vascular structures in CT images, so that the image quality and diagnosis accuracy of the areas are further improved, and the method has important significance for clinically and accurately positioning lesions and diagnosis.
Example two
The embodiment provides a medical image denoising model training apparatus, which comprises:
the medical image acquisition module is used for acquiring medical images of the same part, wherein the medical images comprise noise images and clean images;
The data preprocessing module is used for adjusting the sizes of the noise image and the clean image and carrying out normalization processing to obtain an input image and a label image;
a mask image making module, configured to make the clean image into a mask image;
The original feature map extraction module is used for inputting the input image into an encoder of the U-Net network to perform feature extraction to obtain an original feature map;
the calculation module is used for calculating a Hessian matrix of the original feature map and calculating matrix feature values of the Hessian matrix;
the depth Hessian attention feature map extraction module is used for calculating Hessian response according to the matrix feature values and extracting an edge feature map to obtain a depth Hessian attention feature map;
The feature fusion module is used for splicing the original feature image and the depth Hessian attention feature image, inputting the spliced original feature image and the depth Hessian attention feature image into a decoder of the U-Net network for feature fusion, and obtaining an output image;
and the training module is used for calculating the loss between the output image and the label image according to the mask image and back-propagating the update weight so as to obtain a trained medical image denoising model.
For details of implementation of each module in a medical image denoising model training apparatus, reference may be made to the above definition of a medical image denoising model training method, which is not repeated here.
Any combination of the features of the above embodiments may be used (as long as there is no contradiction between the combinations of the features), and for brevity of description, all of the possible combinations of the features of the above embodiments are not described, and all of the embodiments not explicitly described are also to be considered as being within the scope of the description.

Claims (9)

CN202411308847.6A | 2024-09-19 (priority) | 2024-09-19 (filed) | A medical image denoising model training method and device | Active | CN119205549B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202411308847.6A (CN119205549B, en) | 2024-09-19 | 2024-09-19 | A medical image denoising model training method and device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202411308847.6A (CN119205549B, en) | 2024-09-19 | 2024-09-19 | A medical image denoising model training method and device

Publications (2)

Publication Number | Publication Date
CN119205549A (en) | 2024-12-27
CN119205549B (en) | 2025-08-22

Family

ID=94049563

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202411308847.6A (Active, CN119205549B, en) | A medical image denoising model training method and device | 2024-09-19 | 2024-09-19

Country Status (1)

Country | Link
CN (1) | CN119205549B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2022005336A1 (en)* | 2020-06-29 | 2022-01-06 | Autonomous Non-Profit Organization For Higher Education «Skolkovo Institute Of Science And Technology» | Noise-resilient vasculature localization method with regularized segmentation
CN117765262A (en)* | 2023-12-27 | 2024-03-26 | 电子科技大学 | Magnetic resonance image semantic segmentation method for cerebral artery and vein deformity

Family Cites Families (4)

Publication number | Priority date | Publication date | Assignee | Title
US10713566B2 (en)* | 2016-10-11 | 2020-07-14 | Siemens Aktiengesellschaft | Efficient calculations of negative curvature in a hessian free deep learning framework
CN115797213A (en)* | 2022-12-09 | 2023-03-14 | 武汉大学 | Cyclic iteration raindrop removing method based on regional perception
CN116797790B (en)* | 2023-06-14 | 2025-08-19 | 北京理工大学 | Aortic dissection boundary determination method, aortic dissection boundary determination system, electronic equipment and medium
CN117911423A (en)* | 2023-12-28 | 2024-04-19 | 上海师范大学 | A dual decoder-based edge-enhanced medical image segmentation method

Patent Citations (2)

Publication number | Priority date | Publication date | Assignee | Title
WO2022005336A1 (en)* | 2020-06-29 | 2022-01-06 | Autonomous Non-Profit Organization For Higher Education «Skolkovo Institute Of Science And Technology» | Noise-resilient vasculature localization method with regularized segmentation
CN117765262A (en)* | 2023-12-27 | 2024-03-26 | 电子科技大学 | Magnetic resonance image semantic segmentation method for cerebral artery and vein deformity

Also Published As

Publication number | Publication date
CN119205549A (en) | 2024-12-27

Similar Documents

Publication | Title
CN110827216B (en) | Multi-Generator Generative Adversarial Network Learning Method for Image Denoising
JP2023540910A | Connected Machine Learning Model with Collaborative Training for Lesion Detection
CN112215844A (en) | MRI multimodal image segmentation method and system based on ACU-Net
CN109215035B (en) | Brain MRI hippocampus three-dimensional segmentation method based on deep learning
CN116503607B (en) | CT image segmentation method and system based on deep learning
CN117495882B (en) | Liver tumor CT image segmentation method based on AGCH-Net and multi-scale fusion
CN114332278B (en) | A deep learning-based OCTA image motion correction method
CN119494863B (en) | A cross-modal medical image registration method based on latent space diffusion model
CN114048806A (en) | Alzheimer disease auxiliary diagnosis model classification method based on fine-grained deep learning
CN112562058B (en) | Method for quickly establishing intracranial vascular simulation three-dimensional model based on transfer learning
CN112102259A (en) | Image segmentation algorithm based on boundary guide depth learning
CN115908888A (en) | Vascular interventional instrument tracking method of interventional surgical robot based on DSA (digital radiography) image sequence of Unet
CN117611895A (en) | Thyroid nodule ultrasound image auxiliary diagnosis method fused with medical priori knowledge
CN116433976A (en) | Image processing method, device, equipment and storage medium
CN109919098B (en) | Target object recognition method and device
Bhandari et al. | Soft Attention Mechanism Based Network to Extract Blood Vessels From Retinal Image Modality
Liu et al. | Lesion region inpainting: an approach for pseudo-healthy image synthesis in intracranial infection imaging
CN119131383A (en) | Medical image segmentation method, device, equipment and computer-readable storage medium
CN119205549B (en) | A medical image denoising model training method and device
Xie et al. | CFIFusion: Dual-Branch Complementary Feature Injection Network for Medical Image Fusion
CN112529949A (en) | Method and system for generating DWI image based on T2 image
CN115937113B (en) | Method, equipment and storage medium for identifying multiple types of skin diseases by ultrasonic images
CN115731444A (en) | A medical image fusion method based on artificial intelligence and superpixel segmentation
CN114792296A (en) | A method and system for fusion of nuclear magnetic resonance image and ultrasound image
CN115018860A (en) | A Brain MRI Registration Method Based on Frequency Domain and Image Domain Features

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
