CN111325236A - Ultrasonic image classification method based on convolutional neural network - Google Patents

Ultrasonic image classification method based on convolutional neural network

Info

Publication number
CN111325236A
CN111325236A (application CN202010070699.4A)
Authority
CN
China
Prior art keywords
image
network
generator
data set
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010070699.4A
Other languages
Chinese (zh)
Other versions
CN111325236B (en)
Inventor
金志斌
周雪
程裕家
袁杰
彭成磊
李睿钦
张玮婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University
Priority to CN202010070699.4A, priority-critical patent CN111325236B/en
Publication of CN111325236A/en
Application granted
Publication of CN111325236B/en
Legal status: Active
Anticipated expiration


Abstract

Translated from Chinese

The invention discloses an ultrasound image classification method based on a convolutional neural network. The method includes: delineating a region of interest in the original image and cropping it to obtain a cropped image; augmenting the cropped image by adding Gaussian noise and applying histogram equalization to obtain an augmented data set; training a generative adversarial network on the augmented data set, with validation and testing, to obtain a trained generator; loading the trained generator, inferring images from noise, and assigning labels to the generated images; and expanding the classification data set with the generated images, retraining the convolutional neural network to classify the ultrasound images, and outputting accuracy and recall to evaluate network performance. When classifying ultrasound images, the invention solves the problem of insufficient training data in the neural network and improves the generalization performance of the network.

Figure 202010070699

Description

Translated from Chinese
An Ultrasound Image Classification Method Based on a Convolutional Neural Network

Technical Field

The invention relates to the field of ultrasound image analysis, and in particular to an ultrasound image classification method based on a convolutional neural network.

Background

Image classification research in deep learning usually relies on large-scale data sets to avoid overfitting. When the amount of image data is insufficient, or the number of samples is unevenly distributed across classes, traditional augmentation methods are typically applied, such as repeated cropping, adding Gaussian noise, and grayscale equalization.

Although these traditional augmentation methods can expand an existing data set, they also cause network overfitting, because they can only produce images that are extremely similar to the originals. As the amount of augmented data grows, the data set accumulates more and more near-identical items, which eventually leads to overfitting and poor generalization: the network can only distinguish images within that data set and classifies poorly on new images whose appearance differs.

Summary of the Invention

Purpose of the invention: in deep learning, the amount of image data is often insufficient, or the variety of images is too limited; a good augmentation method can therefore have an outsized, even decisive, effect. At the same time, a single augmentation method may cause the network to overfit, achieving good classification only on the current training set while generalizing poorly. The technical problem addressed by the present invention is to synthesize images with a generative adversarial network and, together with traditional augmentation, expand the classification data set, thereby improving the classification performance of the convolutional neural network on ultrasound images.

To solve the above technical problem, an ultrasound image classification method based on a convolutional neural network is proposed, comprising the following steps:

Step 1: delineate a region of interest in the original image and crop it to obtain a cropped image.

Step 2: augment the cropped image by adding Gaussian noise and applying histogram equalization to obtain an augmented data set.

Step 3: train a generative adversarial network on the augmented data set, with validation and testing, to obtain a trained generator.

Step 4: load the trained generator, infer images from noise, and assign labels to the generated images.

Step 5: expand the classification data set with the generator's images, retrain the convolutional neural network to classify the ultrasound images, and output accuracy and recall to evaluate network performance.

Further, in one implementation, step 1 comprises: selecting an image sub-block containing the target area from the original image and cropping it to obtain a cropped image. The cropped image has a uniform size that contains the target area; the image sub-block containing the target area is the region of interest of the original image.

Further, in one implementation, step 2 comprises:

adding white Gaussian noise to the cropped image, so that the histogram curve of the noisy image conforms to a one-dimensional Gaussian distribution;

performing histogram equalization on the cropped image, so that the pixel values of the mapped image follow a uniform distribution.

Further, in one implementation, step 3 comprises:

Step 3-1: add the images of the data set obtained in step 2 to the real-image data set, and feed the real images into the generative adversarial network; together with the images inferred by the generator, they form the discriminator's input, where real images are labelled true and generated images are labelled false.

Step 3-2: connect the discriminator in series after the generator, input random noise, and pass the resulting generated image to the discriminator, with the generated image's label now set to true; back-propagate the loss, updating only the generator's network parameters while keeping the discriminator's parameters fixed.

Step 3-3: produce a generator weight file from the trained generator's network parameters.

Further, in one implementation, step 4 comprises:

Step 4-1: load the generator's network parameters from step 3 via the generator weight file and run inference.

Step 4-2: generate images with the generator and assign labels to the generated images.

Further, in one implementation, step 5 comprises:

Step 5-1: merge the labelled generated images from step 4 with the original data set to form the training set of the residual classification network.

Step 5-2: the training process of the residual classification network is divided into a training phase and a validation phase. One validation pass is performed after each complete iteration over the data set, and the best-performing network model, i.e. the model with the highest validation accuracy, is tracked through parameter updates and returned at the end of training.

After training, the labelled test data set is fed into the trained network, and the ratio of correctly classified samples to the total number of test samples gives the accuracy of the residual classification network; the higher the accuracy, the better the network performs. The recall is output at the same time: it is computed as the ratio of correctly classified samples to the total number of training samples after the training data set passes through the residual classification network; the higher the recall, the better the network performs.

As can be seen from the above technical solution, an embodiment of the present invention provides an ultrasound image classification method based on a convolutional neural network, comprising: step 1, delineating a region of interest in the original image and cropping it to obtain a cropped image; step 2, augmenting the cropped image by adding Gaussian noise and applying histogram equalization to obtain an augmented data set; step 3, training a generative adversarial network on the augmented data set, with validation and testing, to obtain a trained generator; step 4, loading the trained generator, inferring images from noise, and labelling the generated images; step 5, expanding the classification data set with the generated images, retraining the convolutional neural network to classify the ultrasound images, and outputting accuracy and recall to evaluate network performance.

In the prior art, traditional augmentation methods can expand an existing data set but also cause network overfitting, leading to poor classification on new images whose appearance differs. With the above method, sample images are synthesized by a generative adversarial network, so a large number of training samples can be obtained; this solves the problem of insufficient image training samples and extends the repertoire of data augmentation methods.

Specifically, in the present invention, the generated valid images are added to the classification data set and the residual classification network is used to retrain the classification model, with validation and testing, improving classification accuracy and reliability. Compared with the prior art, the invention thus solves the problem of insufficient training data when deep learning relies only on existing image samples, and avoids the overfitting caused by being restricted to traditional augmentation. Combining the two, i.e. expanding the classification data set with the generator's valid images and retraining the classification model with the residual classification network, raises the trained network's classification accuracy, resolves the limitation of previous studies that classified well only on the existing data set, and improves the network's generalization performance.

Brief Description of the Drawings

To illustrate the technical solution of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain further drawings from these drawings without creative effort.

Figure 1 is a schematic workflow of the generative adversarial network in an ultrasound image classification method based on a convolutional neural network provided by an embodiment of the present invention;

Figure 2 is a schematic of the discriminator's neural network architecture in the same method;

Figure 3 is a schematic of the generator's neural network architecture in the same method;

Figure 4a is an image generated by the generative adversarial network in the same method;

Figure 4b is an original image used by the generative adversarial network in the same method;

Figure 5 is a schematic of the residual classification network module in the same method.

Detailed Description

To make the above objects, features and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and specific embodiments.

An embodiment of the invention discloses an ultrasound image classification method based on a convolutional neural network. The method is applied to grading arthritis in ultrasound images; because the disease affects relatively few patients, the samples available for research are insufficient, which in turn lowers the accuracy of ultrasound image classification.

The method of this embodiment comprises the following steps:

Step 1: delineate the region of interest in the original image and crop it to obtain a cropped image. In this embodiment, drawing software may be used to delineate the region of interest and crop it to a fixed size.

Step 2: augment the cropped image by adding Gaussian noise and applying histogram equalization to obtain an augmented data set.

Step 3: train the generative adversarial network on the augmented data set, with validation and testing, to obtain a trained generator. In this embodiment, the generative adversarial network (GAN) is the combined network formed by cascading a generator and a discriminator.

Step 4: load the trained generator, infer images from noise, and label the generated images. In this step, a large number of images are inferred from noise; each batch of inferred images is added to the original data set to form a new data set, the generalization performance of the network trained on that set is tested, and the amount of inference is increased until the network performs as expected.

Step 5: expand the classification data set with the generator's images, retrain the convolutional neural network to classify the ultrasound images, and output accuracy and recall to evaluate network performance. In this step, the classification data set is the total data set obtained by performing steps 1, 2 and 4. In this embodiment, the ultrasound images are acquired from a hospital with professional equipment.

In the method of this embodiment, step 1 comprises: selecting an image sub-block containing the target area from the original image and cropping it to obtain a cropped image; the cropped image has a uniform size containing the target area, and the sub-block containing the target area is the region of interest of the original image.

Specifically, in this step, the image is cropped to a uniform size containing the target area, and all subsequent processing targets this region of interest to reduce processing time and improve accuracy. In this embodiment, the original images are images of arthritis-affected regions acquired by medical ultrasound imaging equipment; the imaging depth varies with the acquisition device. The original images have a resolution of 1024*768. To remove invalid areas of the original image, reduce the computation and runtime of the generative adversarial network and the residual classification network, and improve classification accuracy and reliability, the original images are cropped to 520*120 images used as training samples, with the target area being the location of the synovium.
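The cropping step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `top`/`left` corner coordinates stand in for the manual delineation performed with drawing software, and the 1024*768 frame is a placeholder for a real ultrasound image.

```python
import numpy as np

def crop_roi(image, top, left, height=120, width=520):
    """Crop a fixed-size region of interest (ROI) from an ultrasound frame.

    The method crops 1024*768 source frames to a uniform 520*120 window
    around the synovium; top/left are hypothetical inputs standing in for
    the manually delineated ROI corner.
    """
    roi = image[top:top + height, left:left + width]
    if roi.shape != (height, width):
        raise ValueError("ROI exceeds image bounds")
    return roi

frame = np.zeros((768, 1024), dtype=np.uint8)  # placeholder ultrasound frame
roi = crop_roi(frame, top=300, left=200)
print(roi.shape)  # (120, 520)
```

Cropping every sample to the same 520*120 size keeps the downstream GAN and classifier input dimensions fixed.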

In the method of this embodiment, step 2 comprises:

adding white Gaussian noise to the cropped image so that the histogram curve of the noisy image conforms to a one-dimensional Gaussian distribution. Specifically, in this embodiment, the Gaussian density used for the noise is:

f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²))

where x is the input, μ is the mean, and σ is the standard deviation;

performing histogram equalization on the cropped image so that the pixel values of the mapped image follow a uniform distribution. Specifically, in this embodiment, the mapping is:

s_k = Σ_{i=0}^{k} n_i / n,  k = 0, 1, …, L − 1

where s_k is the cumulative probability, n is the total number of pixels in the image, L is the number of possible gray levels, and n_i is the number of pixels at the i-th gray level.

Specifically, in this embodiment, white Gaussian noise may be added before histogram equalization, or histogram equalization may be applied before adding the noise. The cropped images obtained in step 1 are augmented by histogram equalization and by adding white Gaussian noise, tripling the number of image samples.
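The two augmentations above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed parameters (noise standard deviation, 8-bit gray levels), not the patent's exact implementation:

```python
import numpy as np

def add_gaussian_noise(img, mu=0.0, sigma=10.0):
    """Additive white Gaussian noise with mean mu and std sigma (assumed values)."""
    noisy = img.astype(np.float64) + np.random.normal(mu, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def equalize_histogram(img, levels=256):
    """Map each gray level through the cumulative distribution s_k = sum_{i<=k} n_i / n."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size              # s_k from the formula above
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]

img = np.tile(np.arange(120, 136, dtype=np.uint8), (8, 1))  # toy low-contrast image
augmented = [img, add_gaussian_noise(img), equalize_histogram(img)]  # 3x the samples
```

Keeping the original alongside both augmented versions is what triples the sample count, as stated above.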

In the method of this embodiment, step 3 comprises:

Step 3-1: add the images of the data set obtained in step 2 to the real-image data set and feed the real images into the generative adversarial network; together with the images inferred by the generator, they form the discriminator's input, with real images labelled true and generated images labelled false. In this embodiment, the real-image data set is the data set obtained through steps 1 and 2, and it is used only in step 3. The generative adversarial network is the combined network formed by cascading a generator and a discriminator. Furthermore, in this embodiment the true/false distinction applies only to training the generative adversarial network; the classification data set does not distinguish real from generated data, so the images produced by the generator trained in step 3, i.e. those labelled false, are also part of the classification data set.

Step 3-2: connect the discriminator in series after the generator, input random noise, and pass the resulting generated image to the discriminator, with the generated image's label now set to true; back-propagate the loss, updating only the generator's network parameters while keeping the discriminator's parameters fixed.

In this embodiment, the discriminator's loss function has two parts: the sum of the error computed on real images and the error computed on generated images. In PyTorch, the loss is computed with BCELoss:

loss_real = criterion(real_out, real_label)

loss_fake = criterion(fake_out, fake_label)

loss_d = loss_real + loss_fake

where loss_real is the discriminator's loss on real images and loss_fake its loss on generated images; real_label and fake_label are the labels of the real and generated images, and real_out and fake_out are the discriminator's outputs on the real and generated images; loss_d is the discriminator's overall loss, obtained by summing the results on real and generated images; criterion denotes the loss computation, essentially a functor.

The generator's loss function combines the real label with the generated images, again computed with BCELoss; in this embodiment, the real label is recorded as 1 in the network:

loss_g = criterion(output, real_label)

where loss_g is the generator's loss, output denotes the generated image as scored by the discriminator, real_label denotes the real label, and criterion denotes the loss computation, essentially a functor.

In addition, as required by the convolutional neural network, both the generator and the discriminator need a suitable optimization algorithm that lets the loss function converge while preventing its value from diverging. In the concrete implementation, the generator and the discriminator use the Adam optimizer for parameter updates, with learning rate 0.0003 to keep an overly large learning rate from causing oscillation.
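One alternating training step with the losses and optimizer settings described above can be sketched in PyTorch as follows. The generator and discriminator here are tiny stand-ins with illustrative dimensions; the patent's actual architectures (Tables 1 and 2) are far larger.

```python
import torch
import torch.nn as nn

# Tiny stand-in networks; dimensions are illustrative, not from the patent.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32), nn.Tanh())
D = nn.Sequential(nn.Linear(32, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=3e-4)   # lr = 0.0003 as in the text
opt_d = torch.optim.Adam(D.parameters(), lr=3e-4)

real = torch.rand(8, 32)            # batch of "real" images (flattened placeholders)
noise = torch.randn(8, 16)
real_label = torch.ones(8, 1)
fake_label = torch.zeros(8, 1)

# Discriminator step: loss_d = loss_real + loss_fake (generated images labelled false).
opt_d.zero_grad()
loss_real = criterion(D(real), real_label)
loss_fake = criterion(D(G(noise).detach()), fake_label)
loss_d = loss_real + loss_fake
loss_d.backward()
opt_d.step()

# Generator step: generated images labelled true; only G's parameters update,
# since only opt_g steps, leaving D unchanged (step 3-2 above).
opt_g.zero_grad()
loss_g = criterion(D(G(noise)), real_label)
loss_g.backward()
opt_g.step()
```

Detaching `G(noise)` in the discriminator step is what keeps the generator's parameters fixed there, mirroring how the generator step leaves the discriminator untouched.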

Step 3-3: produce a generator weight file from the trained generator's network parameters.

In this embodiment, step 3 trains the generative adversarial network on all samples augmented in step 2. The basic workflow of the generative adversarial network is shown in Figure 1, the discriminator's neural network architecture in Figure 2, and the generator's in Figure 3. Using this generator architecture, training on all samples yields a discriminator/generator pair; the discriminator's network parameters are listed in Table 1 and the generator's in Table 2.

Table 1: Discriminator network parameters (table rendered as an image in the original document)

Table 2: Generator network parameters

Layer type               | Output size          | Parameters
Linear-1                 | [-1, 1, 249600]      | 25,209,600
ReLU with BatchNorm2d-2  | [-1, 1, 240, 1040]   | 2
Conv2d-3                 | [-1, 50, 240, 1040]  | 500
ReLU with BatchNorm2d-4  | [-1, 50, 240, 1040]  | 100
Conv2d-5                 | [-1, 25, 240, 1040]  | 11,725
ReLU with BatchNorm2d-6  | [-1, 25, 240, 1040]  | 50
Conv2d-7                 | [-1, 1, 120, 520]    | 226
Tanh-8                   | [-1, 1, 120, 520]    | 0

In the method of this embodiment, step 4 comprises:

Step 4-1: load the generator weight file from step 3 directly into the generator network architecture and run inference.

Step 4-2: generate images with the generator and label them; specifically, in this embodiment, the generated images may be labelled according to disease severity.

In this embodiment, step 4 uses the generator model obtained in step 3 for inference; by feeding in random noise, any number of synthetic images of arthritis-affected regions can be generated iteratively, enlarging the sample count. One set of original and generated images is shown in Figures 4a and 4b.

In the method of this embodiment, step 5 comprises:

Step 5-1: merge the labelled generated images from step 4 with the original data set to form the training set of the residual classification network.

Step 5-2: the training process of the residual classification network is divided into a training phase and a validation phase; one validation pass is performed after each complete iteration over the data set, and the best-performing model, i.e. the one with the highest validation accuracy, is tracked through parameter updates and returned at the end of training. After training, the labelled test data set is fed into the trained network, and the ratio of correctly classified samples to the total number of test samples gives the accuracy of the residual classification network (ResNet); the higher the accuracy, the better the network performs. The recall is output at the same time: it is the ratio of correctly classified samples to the total number of training samples after the training data set passes through the residual classification network; the higher the recall, the better the network performs.
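Both figures defined above reduce to the same computation, a fraction of correctly classified samples, applied to different splits. A minimal sketch with toy predictions (the grade values are illustrative, not from the patent's data):

```python
import numpy as np

def classification_accuracy(pred, labels):
    """Fraction of correctly classified samples.

    The method applies this to the test set for its accuracy figure and,
    as its notion of recall, to the training set.
    """
    pred = np.asarray(pred)
    labels = np.asarray(labels)
    return float(np.mean(pred == labels))

# Toy predictions over the four severity grades 0-3 (hypothetical values).
test_pred = [0, 1, 2, 3, 1, 0]
test_true = [0, 1, 2, 3, 2, 0]
print(classification_accuracy(test_pred, test_true))  # 5 of 6 correct -> 0.8333...
```

Note that this "recall" is overall training-set accuracy rather than the usual per-class recall (true positives over actual positives).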

In this embodiment, in step 5 the original images, the images augmented in step 2, and the images inferred by the generator in step 4 together form the total sample set, which is divided into four levels by disease severity: 0, 1, 2, and 3, where 0 denotes no disease, 1 mild disease, 2 moderate disease, and 3 severe disease. The residual classification network is trained on these samples to obtain a network model, and new ultrasound images of arthritis-affected regions obtained from the hospital are used to verify the performance of the model.

The residual classification network module is shown in Figure 5. The network is built from both the conventional residual module and an improved bottleneck residual module. The conventional residual module stacks two 3×3 convolution modules in sequence, while the improved bottleneck residual module stacks 1×1, 3×3, and 1×1 convolution modules in turn. The 1×1 convolution in the bottleneck module also performs dimensionality reduction, so that the 3×3 convolution operates in a lower-dimensional space, reducing computation and improving efficiency: the first 1×1 convolution reduces the number of channels of the input feature map from 256 to 64, greatly shrinking the cost of the 3×3 convolution, while the final 1×1 convolution restores the channel count from 64 to 256. In the prior art, traditional image augmentation can expand an existing data set, but it also causes network overfitting, which leads to poor classification on new images whose morphology differs. With the method described above, the generative adversarial network produces sample images, so a large number of training samples can be obtained; this solves the problem of insufficient image training samples and at the same time extends the available data augmentation methods.
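The saving achieved by the bottleneck design can be checked directly from the channel counts stated above (256 reduced to 64, a 3×3 convolution at 64 channels, then 64 restored to 256). Counting multiplications per output pixel, the bottleneck module is roughly 17 times cheaper than two stacked 3×3 convolutions at 256 channels:

```python
# Multiplications per output pixel, ignoring biases, for a block whose
# input and output both have 256 channels (as in the text above).
conventional = 2 * (3 * 3 * 256 * 256)        # two stacked 3x3 convolutions
bottleneck = (1 * 1 * 256 * 64                # 1x1 conv: reduce 256 -> 64
              + 3 * 3 * 64 * 64               # 3x3 conv in the 64-dim space
              + 1 * 1 * 64 * 256)             # 1x1 conv: restore 64 -> 256
print(conventional, bottleneck)               # 1179648 69632
print(round(conventional / bottleneck, 1))    # about 16.9x fewer multiplies
```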

Specifically, in the present invention, the generated valid images are added to the classification data set, and the residual classification network is used to retrain the classification model and to verify and test it, which improves classification accuracy and reliability. Compared with the prior art, the invention therefore solves the problem that the training data available for deep learning is insufficient when only existing image samples are used, and avoids the network overfitting caused by being limited to traditional augmentation methods. At the same time, by adding the valid images produced by the generator to the classification data set and retraining the classification model with the residual classification network, the combination of the two raises the classification accuracy of the trained network, overcomes the limitation of previous research that classified well only on existing data sets, and improves the generalization performance of the network.

The present invention proposes a method for improving the classification performance of a neural network based on a generative adversarial network. It should be noted that the type of ultrasound equipment used does not limit this patent, nor do the resolution or the content of the collected ultrasound images. It should also be noted that those skilled in the art can make improvements and refinements without departing from the principles of the invention, and these should likewise be regarded as falling within the protection scope of the invention. In addition, any component not specified in this embodiment can be implemented with existing technologies.

Claims (6)

1. An ultrasound image classification method based on a convolutional neural network, characterized by comprising the following steps:
step 1, delineating a region of interest in an original image and cropping it to obtain a cropped image;
step 2, performing data augmentation on the cropped image by adding Gaussian noise and by histogram equalization to obtain an augmented data set;
step 3, training a generative adversarial network with the augmented data set, and verifying and testing it to obtain a trained generator;
step 4, loading the trained generator, inferring images from noise, and assigning a label to each generated image;
and step 5, adding the images generated by the generator to the classification data set, retraining the convolutional neural network to classify the ultrasound images, outputting the accuracy and the recall rate, and evaluating the network performance.
2. The ultrasound image classification method based on the convolutional neural network as claimed in claim 1, wherein said step 1 comprises: selecting image sub-blocks containing a target area from the original image and cropping them to obtain cropped images of uniform size that contain the target area, the image sub-blocks containing the target area being the region of interest of the original image.
3. The ultrasound image classification method based on the convolutional neural network as claimed in claim 1, wherein said step 2 comprises:
adding Gaussian white noise to the cropped image so that the histogram of the noisy image follows a one-dimensional Gaussian distribution;
and performing histogram equalization on the cropped image so that the pixel values of the mapped image follow a uniform distribution.
4. The ultrasound image classification method based on the convolutional neural network as claimed in claim 1, wherein said step 3 comprises:
step 3-1, adding the images of the data set obtained in step 2 to a real-image data set, inputting the real images of the real-image data set into the generative adversarial network, and using them together with the generated images inferred by the generator as the input images of the discriminator, wherein the label of a real image is true and the label of a generated image is false;
step 3-2, connecting the discriminator in series after the generator, inputting random noise, and feeding the generated image into the discriminator after it passes through the generator, with the label of the generated image now set to true; the loss function value is backpropagated, and only the network parameters of the generator are updated while those of the discriminator remain unchanged;
and step 3-3, generating a generator weight file from the trained network parameters of the generator.
5. The ultrasound image classification method based on the convolutional neural network as claimed in claim 1, wherein said step 4 comprises:
step 4-1, loading the generator weight file produced in step 3 directly into the generator for inference;
and step 4-2, generating images with the generator and assigning a label to each image generated by the generator.
6. The ultrasound image classification method based on the convolutional neural network as claimed in claim 1, wherein said step 5 comprises:
step 5-1, merging the labeled generated images of step 4 with the original data set to form the training set of a residual classification network;
step 5-2, dividing the training of the residual classification network into a training phase and a validation phase, validating once after each complete iteration over the data set, and tracking the best-performing network model, namely the model with the highest validation accuracy, which is returned when training ends;
after training, feeding the labeled test data set into the trained network and computing the proportion of correctly classified samples among all samples of the test data set to obtain the accuracy of the residual classification network, wherein a higher accuracy indicates better network performance;
meanwhile, outputting the recall rate, namely the proportion of correctly classified samples among all samples of the training data set after the training data set passes through the residual classification network, wherein a higher recall rate indicates better network performance.
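The alternating update of claim 4 can be sketched with a toy scalar GAN: real samples are labelled true and generated samples false when the discriminator is updated (step 3-1), and the generated samples are then relabelled true while only the generator parameters are updated and the discriminator is frozen (step 3-2). Everything here (scalar samples standing in for ultrasound images, the N(4, 1) real distribution, the learning rate) is an illustrative assumption, not the patent's actual network.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

w, b = 0.5, 0.0          # discriminator parameters
a, c = 1.0, 0.0          # generator parameters
lr = 0.05

def D(x): return sigmoid(w * x + b)   # discriminator: P(sample is real)
def G(z): return a * z + c            # generator: noise -> fake sample

for step in range(200):
    real = rng.normal(4.0, 1.0, 16)   # "real images"
    z = rng.standard_normal(16)

    # Step 3-1: real labelled true (1), generated labelled false (0);
    # only the discriminator parameters w, b are updated.
    fake = G(z)
    grad_real = D(real) - 1.0         # d(BCE)/d(logit) for label 1
    grad_fake = D(fake) - 0.0         # d(BCE)/d(logit) for label 0
    gw = (grad_real * real).mean() + (grad_fake * fake).mean()
    gb = grad_real.mean() + grad_fake.mean()
    w -= lr * gw; b -= lr * gb

    # Step 3-2: the generated sample's label is now set to true (1); the
    # loss is backpropagated, only a, c are updated, and w, b stay fixed.
    z = rng.standard_normal(16)
    fake = G(z)
    grad_logit = D(fake) - 1.0        # pretend the fakes are real
    ga = (grad_logit * w * z).mean()  # chain rule through the generator
    gc = (grad_logit * w).mean()
    a -= lr * ga; c -= lr * gc

print("generator offset:", round(c, 2))
```

After a few hundred alternating steps the generator's offset c should drift toward the real data mean, which is the qualitative behaviour the training procedure of step 3 relies on.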
CN202010070699.4A | Priority 2020-01-21 | Filed 2020-01-21 | Ultrasonic image classification method based on convolutional neural network | Active | Granted as CN111325236B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010070699.4A | 2020-01-21 | 2020-01-21 | Ultrasonic image classification method based on convolutional neural network

Publications (2)

Publication Number | Publication Date
CN111325236A (en) | 2020-06-23
CN111325236B (en) | 2023-04-18

Family

ID=71173236

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010070699.4A (Active, granted as CN111325236B) | Ultrasonic image classification method based on convolutional neural network | 2020-01-21 | 2020-01-21

Country Status (1)

Country | Link
CN | CN111325236B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20190012768A1 (en) * | 2015-12-14 | 2019-01-10 | Motion Metrics International Corp. | Method and apparatus for identifying fragmented material portions within an image
CN109614979A (en) * | 2018-10-11 | 2019-04-12 | 北京大学 | A data augmentation method and image classification method based on selection and generation
US20190197368A1 (en) * | 2017-12-21 | 2019-06-27 | International Business Machines Corporation | Adapting a Generative Adversarial Network to New Data Sources for Image Classification
CN110516561A (en) * | 2019-08-05 | 2019-11-29 | 西安电子科技大学 | SAR image target recognition method based on DCGAN and CNN

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111861924B (en) * | 2020-07-23 | 2023-09-22 | 成都信息工程大学 | A cardiac magnetic resonance image data enhancement method based on evolutionary GAN
CN111858351A (en) * | 2020-07-23 | 2020-10-30 | 深圳慕智科技有限公司 | Deep learning inference engine test method based on differential evaluation
CN111861924A (en) * | 2020-07-23 | 2020-10-30 | 成都信息工程大学 | A data enhancement method for cardiac magnetic resonance images based on evolutionary GAN
CN111860664A (en) * | 2020-07-24 | 2020-10-30 | 大连东软教育科技集团有限公司 | Ultrasonic plane wave composite imaging method, device and storage medium
CN111860664B (en) * | 2020-07-24 | 2024-04-26 | 东软教育科技集团有限公司 | Ultrasonic plane wave composite imaging method, device and storage medium
CN112336357A (en) * | 2020-11-06 | 2021-02-09 | 山西三友和智慧信息技术股份有限公司 | RNN-CNN-based EMG signal classification system and method
CN112396110A (en) * | 2020-11-20 | 2021-02-23 | 南京大学 | Method for generating anti-cascade network augmented image
WO2022105308A1 (en) * | 2020-11-20 | 2022-05-27 | 南京大学 | Method for augmenting image on the basis of generative adversarial cascaded network
CN112396110B (en) * | 2020-11-20 | 2024-02-02 | 南京大学 | Method for generating augmented image of countermeasure cascade network
CN112507881A (en) * | 2020-12-09 | 2021-03-16 | 山西三友和智慧信息技术股份有限公司 | sEMG signal classification method and system based on time convolution neural network
CN112699885A (en) * | 2020-12-21 | 2021-04-23 | 杭州反重力智能科技有限公司 | Semantic segmentation training data augmentation method and system based on antagonism generation network GAN
CN113361443A (en) * | 2021-06-21 | 2021-09-07 | 广东电网有限责任公司 | Method and system for power transmission line image sample counterstudy augmentation
CN114201632A (en) * | 2022-02-18 | 2022-03-18 | 南京航空航天大学 | A method for augmenting noisy labeled datasets for multi-label target detection tasks
CN115019117A (en) * | 2022-03-30 | 2022-09-06 | 什维新智医疗科技(上海)有限公司 | An Ultrasound Image Dataset Expansion Method Based on Neural Network
CN114529484B (en) * | 2022-04-25 | 2022-07-12 | 征图新视(江苏)科技股份有限公司 | Deep learning sample enhancement method for direct current component change in imaging
CN114529484A (en) * | 2022-04-25 | 2022-05-24 | 征图新视(江苏)科技股份有限公司 | Deep learning sample enhancement method for direct current component change in imaging
CN115019100A (en) * | 2022-06-14 | 2022-09-06 | 新乡医学院 | A method and system for automatic classification of biological tissue images based on generative adversarial networks
CN115349834A (en) * | 2022-10-18 | 2022-11-18 | 合肥心之声健康科技有限公司 | Electrocardiogram screening method and system for asymptomatic severe coronary artery stenosis

Also Published As

Publication number | Publication date
CN111325236B (en) | 2023-04-18

Similar Documents

Publication | Title
CN111325236B (en) | Ultrasonic image classification method based on convolutional neural network
WO2022105308A1 (en) | Method for augmenting image on the basis of generative adversarial cascaded network
CN109035149B (en) | A deep learning-based motion blurring method for license plate images
CN118711000B (en) | Bearing surface defect detection method and system based on improved YOLOv10
CN116416441A (en) | Hyperspectral image feature extraction method based on multi-level variational autoencoder
CN115565019B (en) | A ground object classification method for single-channel high-resolution SAR images based on deep self-supervised generative adversarial model
CN116306813B (en) | A method based on YOLOX lightweight and network optimization
CN112686822B (en) | An Image Completion Method Based on Stacked Generative Adversarial Networks
CN112541555B (en) | A training method for a classifier model based on deep learning
Jin et al. | Defect identification of adhesive structure based on DCGAN and YOLOv5
CN118709022A (en) | A multimodal content detection method and system based on multi-head attention mechanism
CN116051382B (en) | A data enhancement method based on deep reinforcement learning generative adversarial neural network and super-resolution reconstruction
CN118429308A (en) | No-reference image quality assessment method based on distortion sensitivity
CN118470525A (en) | Leafy vegetable harvesting method, system, device and storage medium based on deep learning
CN117314751A (en) | Remote sensing image super-resolution reconstruction method based on generation type countermeasure network
CN118396787A (en) | Patent value prediction method, system, electronic equipment and storage medium based on machine learning model
CN114841887B (en) | A method for image restoration quality evaluation based on multi-level difference learning
CN111160536B (en) | Convolutional Embedding Representation Reasoning Method Based on Fragmented Knowledge
CN117372720B (en) | Unsupervised anomaly detection method based on multi-feature cross mask repair
CN107797149A (en) | A kind of ship classification method and device
CN117765041A (en) | DSA image generation method based on registration enhancement and optimal transmission GAN
Li et al. | P2GCN: Pixel-patch mutual enhancement graph convolutional network for sonar image super-resolution
CN117237782A (en) | A training method and system for image processing models
CN114004295B (en) | A small sample image data expansion method based on adversarial enhancement
CN118470368A (en) | Transduction type small sample image classification method combining feature fusion and feature matching

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
