

TECHNICAL FIELD
The present invention relates to the technical field of medical image retrieval and compression, together with single-chip microcomputer (MCU) technology and 5G network communication technology. It can be applied to the retrieval and compression of medical images and enables the sharing of medical resources among different hospitals.
BACKGROUND
With the rapid development of medical imaging technology, hospitals generate large volumes of medical image data every day, and retrieving and compressing this massive data effectively and quickly has become an urgent problem. Retrieval over medical images can assist physicians in diagnosis and improve their efficiency to a certain extent; compressing medical images effectively reduces the storage space they require and facilitates their transmission, which can be applied to medical information sharing and telemedicine.
In 2014, Xia et al. proposed a hashing algorithm based on convolutional neural networks (Convolutional Neural Network Hashing, CNNH), an early attempt to combine CNNs with hashing (see Xia R, Pan Y, Lai H, et al. Supervised hashing for image retrieval via image representation learning // Proceedings of the 23rd International Joint Conference on Artificial Intelligence. Quebec City, Canada, 2014: 2156-2162.). In 2016, Gao Huang of Cornell University, Zhuang Liu of Tsinghua University and Laurens van der Maaten of Facebook AI Research proposed the densely connected convolutional network (Dense Convolutional Network, DenseNet) (see Huang G, Liu Z, Weinberger K Q, et al. Densely connected convolutional networks [EB/OL]. arXiv preprint arXiv:1608.06993, 2016.). The present invention adopts an image retrieval method based on DenseNet and supervised hashing. Its basic principle is as follows: first, a trained and optimized DenseNet model extracts the high-level semantic features of an image; then an improved supervised hashing scheme encodes the extracted features into hash codes, on which retrieval is carried out.
In 1996, Said et al. proposed the SPIHT image compression method (see Said A, Pearlman W A. A New, Fast, and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees [J]. IEEE Transactions on Circuits and Systems for Video Technology, 1996, 6(3): 243-250.), but this method loses high-frequency information such as texture and contours. Considering the importance of high-frequency information for medical image diagnosis, the present invention adopts an image compression method that combines Canny edge detection (see J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, 8(6): 679-698.) with SPIHT. Its basic principle is as follows: first, Canny edge detection is applied to the image, and the extracted edge image is Huffman-encoded and decoded to obtain an edge reconstruction; second, the image is encoded with the SPIHT algorithm, the resulting code stream is Huffman-encoded and decoded, and a reconstructed image is obtained after SPIHT decoding and the inverse wavelet transform; finally, the two reconstructed images are added together to restore the original image. This approach preserves the high-frequency information of the reconstructed image well, but part of that high-frequency information is redundant, which degrades the visual quality of the reconstruction.
SUMMARY OF THE INVENTION
The purpose of the present invention is to overcome the shortcomings of the prior art and to accomplish medical image retrieval, compression and transmission by means of single-chip microcomputer technology and 5G network communication technology, thereby realizing resource sharing between different hospitals.
A medical image retrieval and compression method of the present invention comprises the following steps:
Step 101, retrieving medical images:
Step 1: preprocess each medical image in an image data set composed of multiple medical images, as follows:
First, convert the medical images from DICOM format to jpg format by a computer program, and convert the image resolution to 224x224.
Second, convert the single-channel medical images into three-channel images with the computer vision library OpenCV.
Step 2: process the preprocessed image data set to build an image feature library, as follows:
First, build the DenseNet model in the Spyder editor under Python 3.6.
Second, divide the preprocessed image data set into a training set and a test set, with 70% of the three-channel images used as the training set; normalize the inputs to the DenseNet model with the BN algorithm before feeding them to the network; train and optimize the DenseNet model on the training set and test it on the test set. During training, the loss function of the DenseNet model is optimized with the RMSProp algorithm.
Third, input each image of the data set into the trained and optimized DenseNet model, and take the output of the last pooling layer of the DenseNet model as the feature of that image, thereby building the image feature library of the data set.
Step 3: reduce the dimensionality of the data in the image feature library with the KPCA projection method.
Step 4: apply KSH encoding to the KPCA-projected feature data of each image to obtain a hash code library of the image features.
Step 5: input an image to be retrieved into the optimized DenseNet model, take the output of the last pooling layer of the DenseNet model as the feature of the image to be retrieved, reduce the dimensionality of this feature with the KPCA projection method, and apply KSH encoding to the reduced feature to obtain the hash code of the image to be retrieved; compare the hash code of the image to be retrieved with those in the hash code library using the Hamming distance as the similarity measure, and arrange the images in the hash code library whose Hamming distance to the image to be retrieved falls within a set range in order of increasing distance, saving them as the retrieval result.
Step 201, compressing each image obtained in step 101, comprising the following steps:
Step 1: apply Canny edge detection to each retrieved image to extract its high-frequency information.
Step 2: apply Huffman encoding to the high-frequency information extracted by Canny edge detection, then apply Huffman decoding to the resulting code stream to obtain the edge-reconstructed image.
Step 3: apply a 5-level wavelet decomposition to the retrieved image, encode the wavelet coefficients with SPIHT, and apply Huffman encoding to the resulting code stream to obtain an optimized compressed code stream; then pass the optimized compressed code stream through Huffman decoding, SPIHT decoding and the inverse wavelet transform in turn to obtain a reconstructed image in which some high-frequency information is lost.
Step 4: add the edge-reconstructed image obtained in step 2 of step 201 to the reconstructed image with high-frequency loss obtained in step 3, thereby restoring the original image.
Step 301, importing the programs of step 101 and step 201 into the RAM of a single-chip microcomputer; a 5G network communication chip is inserted into the microcomputer so that transmission can be carried out over the 5G network, and the microcomputer has a USB interface so that it can be plugged into a hospital computer, thereby realizing the retrieval and compression of medical images and resource sharing among different hospitals.
The beneficial effects of the present invention are as follows:
(1) Retrieval and compression of medical images are realized. Retrieval over a large number of medical images effectively assists physicians in diagnosis and thus improves their efficiency. The medical image compression method of the present invention preserves the high-frequency information of medical images well, ensuring that important diagnostic information is not lost.
(2) The medical image retrieval and compression software is packaged on a single-chip microcomputer and transmitted using 5G network communication technology, realizing the transmission of medical image information between different hospitals and thus the sharing of medical resources.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a flow chart of medical image retrieval;
Figure 2 is a flow chart of image compression.
DETAILED DESCRIPTION
The present invention is described in detail below in conjunction with the accompanying drawings and specific embodiments.
As shown in the accompanying drawings, a medical image retrieval and compression method of the present invention comprises the following steps:
Step 101, retrieving medical images:
Step 1: in order to make the medical images meet the input requirements of the DenseNet model, each medical image in an image data set composed of multiple medical images is first preprocessed, as follows:
First, convert the medical images from DICOM format to jpg format by a computer program; since DICOM medical images mostly have a resolution of 512x512, the resolution is converted to 224x224 to meet the input requirements of the model.
Second, convert the single-channel medical images into three-channel images with the computer vision library OpenCV.
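For illustration, a minimal preprocessing sketch is given below. The patent only states that a "computer program" performs the format conversion; reading DICOM files with pydicom is an assumption, while OpenCV is named above.

```python
# Preprocessing sketch (assumed libraries: pydicom, opencv-python, numpy).
import cv2
import numpy as np
import pydicom

def preprocess_dicom(path, out_size=(224, 224)):
    ds = pydicom.dcmread(path)                       # read the DICOM file
    img = ds.pixel_array.astype(np.float32)          # raw single-channel pixel data
    # Normalize to 0-255 so the image can be saved as an 8-bit jpg.
    img = (img - img.min()) / max(img.max() - img.min(), 1e-8) * 255.0
    img = cv2.resize(img.astype(np.uint8), out_size) # 512x512 -> 224x224
    return cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)     # single channel -> three channels

# Example: cv2.imwrite("slice_0001.jpg", preprocess_dicom("slice_0001.dcm"))
```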
Step 2: process the preprocessed image data set to build an image feature library, as follows:
First, build the DenseNet model in the Spyder editor under Python 3.6.
Second, divide the preprocessed image data set into a training set and a test set, with 70% of the three-channel images used as the training set; normalize the inputs to the DenseNet model with the BN algorithm (see Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift [C]. Proceedings of the 32nd International Conference on Machine Learning, Piscataway, NJ: IEEE, 2015: 148-156.) before feeding them to the network; train and optimize the DenseNet model on the training set and test it on the test set. During training, the loss function of the DenseNet model is optimized with the RMSProp algorithm (see Tieleman T, Hinton G. RMSProp: Divide the gradient by a running average of its recent magnitude [R]. COURSERA: Neural Networks for Machine Learning, 2012.), which reduces the large oscillations of the DenseNet model during parameter updates and speeds up its convergence, thereby optimizing the model and increasing its accuracy and robustness.
Using the BN algorithm in this step effectively counteracts the impact of shifts and scale growth in the input data.
As one embodiment of the present invention, the DenseNet model is trained with the following parameter settings: learning rate 1e-6, batch size (batch_size) 32, and number of epochs 70.
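A training sketch under these settings follows. The patent does not name a deep-learning framework; TensorFlow/Keras and the categorical cross-entropy loss are assumptions, while the BN normalization of the input, the RMSProp optimizer and the hyperparameters (learning rate 1e-6, batch size 32, 70 epochs) come from the description above.

```python
# Minimal DenseNet training sketch, assuming TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_densenet(num_classes, input_shape=(224, 224, 3)):
    inputs = layers.Input(shape=input_shape)
    x = layers.BatchNormalization()(inputs)              # BN normalization of the network input
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights=None, pooling="avg")  # last pooling layer -> feature vector
    x = base(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-6),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_densenet(num_classes=3)
# model.fit(x_train, y_train, batch_size=32, epochs=70,
#           validation_data=(x_test, y_test))
```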
Third, input each image of the data set into the trained and optimized DenseNet model, and take the output of the last pooling layer of the DenseNet model as the feature of that image, thereby building the image feature library of the data set.
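The feature-extraction step can be sketched as follows, reusing the model from the training sketch above. The nested sub-model name "densenet121" is the Keras default and is an assumption; the sketch feeds the preprocessed images directly through the DenseNet sub-model, whose global-average-pooling output serves as the image feature.

```python
# Feature-extraction sketch: take the last pooling layer's output as the feature.
import numpy as np

def extract_features(trained_model, images, batch_size=32):
    densenet = trained_model.get_layer("densenet121")   # sub-model ending at the last pooling layer
    feats = densenet.predict(images, batch_size=batch_size)
    return np.asarray(feats)                            # shape (N, 1024) for DenseNet121

# feature_library = extract_features(model, dataset_images)
```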
Step 3: reduce the dimensionality of the data in the image feature library with the KPCA projection method; KPCA projection fully exploits the nonlinear structure contained in the data (see H. Hotelling. Analysis of a complex of statistical variables into principal components [J]. Journal of Educational Psychology, 1933, 24(6): 417.) and reduces the projection error.
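A KPCA dimensionality-reduction sketch is shown below; scikit-learn, the RBF kernel and the 128-dimensional target are assumptions, since the patent does not fix these choices.

```python
# KPCA dimensionality-reduction sketch (assumed: scikit-learn, RBF kernel, 128 dims).
from sklearn.decomposition import KernelPCA

def reduce_features(feature_library, n_components=128):
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=1e-3)
    reduced = kpca.fit_transform(feature_library)   # e.g. (N, 1024) -> (N, 128)
    return kpca, reduced

# kpca, reduced_library = reduce_features(feature_library)
# reduced_query = kpca.transform(query_feature.reshape(1, -1))
```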
Step 4: apply KSH encoding to the KPCA-projected feature data of each image (see Liu Wei, Wang Jun, Ji Rongrong, et al. Supervised hashing with kernels [C] // Proc of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE Press, 2012: 2074-2081.) to obtain a hash code library of the image features.
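Only the KSH encoding step is sketched below. KSH first learns, from labeled training pairs, a set of anchor points and a projection matrix (Liu et al., 2012); that optimization is omitted here, and the anchors, projection matrix A and bias are assumed to have been learned already.

```python
# KSH encoding sketch: hash bits are the signs of kernelized projections.
import numpy as np

def rbf_kernel(X, anchors, gamma=1e-3):
    # kappa(x, anchor) = exp(-gamma * ||x - anchor||^2)
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def ksh_encode(X, anchors, A, bias):
    K = rbf_kernel(X, anchors)          # (N, m) kernel features
    H = np.sign(K @ A - bias)           # (N, r) values in {-1, +1}
    return (H > 0).astype(np.uint8)     # store as {0, 1} bits

# code_library = ksh_encode(reduced_library, anchors, A, bias)
```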
Step 5: input an image to be retrieved into the optimized DenseNet model, take the output of the last pooling layer of the DenseNet model as the feature of the image to be retrieved, reduce the dimensionality of this feature with the KPCA projection method, and apply KSH encoding to the reduced feature to obtain the hash code of the image to be retrieved; compare the hash code of the image to be retrieved with those in the hash code library using the Hamming distance as the similarity measure, and arrange the images in the hash code library whose Hamming distance to the image to be retrieved falls within a set range (taking about 20 images is usually sufficient) in order of increasing distance, saving them as the retrieval result.
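The Hamming-distance comparison and ranking can be sketched as follows, with hash codes stored as 0/1 bit vectors; the choice of returning the 20 closest matches follows the note above.

```python
# Retrieval sketch: rank library codes by Hamming distance to the query code.
import numpy as np

def hamming_search(query_code, code_library, top_k=20):
    # query_code: (r,) array of {0,1}; code_library: (N, r) array of {0,1}
    dists = np.count_nonzero(code_library != query_code, axis=1)
    order = np.argsort(dists, kind="stable")    # increasing Hamming distance
    return order[:top_k], dists[order[:top_k]]

# idx, d = hamming_search(query_code, code_library)
```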
Step 201, compressing each image obtained in step 101, comprising the following steps:
Step 1: apply Canny edge detection to each retrieved image (see J. Canny. A computational approach to edge detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, 8(6): 679-698.) to extract its high-frequency information.
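A Canny edge-detection sketch with OpenCV is given below; the two hysteresis thresholds are illustrative, as the patent does not specify them.

```python
# Canny edge-detection sketch: extract the high-frequency (edge) information.
import cv2

def extract_edges(gray_image, low=50, high=150):
    return cv2.Canny(gray_image, low, high)   # binary edge map

# edge_image = extract_edges(retrieved_gray_image)
```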
Step 2: apply Huffman encoding to the high-frequency information extracted by Canny edge detection, then apply Huffman decoding to the resulting code stream to obtain the edge-reconstructed image.
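A compact Huffman encoder/decoder sketch for the flattened edge image follows; this is the standard heap-based textbook construction and is not presented as the patent's exact implementation.

```python
# Huffman coding sketch for a flattened image (list of pixel values).
import heapq
from collections import Counter

def build_codes(symbols):
    freq = Counter(symbols)
    heap = [[f, i, s, None, None] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    nid = len(heap)
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, [a[0] + b[0], nid, None, a, b])   # internal node
        nid += 1
    codes = {}
    def walk(node, prefix):
        if node[2] is not None:              # leaf: assign its bit string
            codes[node[2]] = prefix or "0"
            return
        walk(node[3], prefix + "0")
        walk(node[4], prefix + "1")
    walk(heap[0], "")
    return codes

def huffman_encode(symbols, codes):
    return "".join(codes[s] for s in symbols)

def huffman_decode(bits, codes):
    inverse = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:
            out.append(inverse[cur])
            cur = ""
    return out

# codes = build_codes(edge_image.flatten().tolist())
# stream = huffman_encode(edge_image.flatten().tolist(), codes)
# recovered = huffman_decode(stream, codes)   # reshape to the image size afterwards
```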
Step 3: apply a 5-level wavelet decomposition to the retrieved image and encode the wavelet coefficients with SPIHT (see Said A, Pearlman W A. A New, Fast, and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees [J]. IEEE Transactions on Circuits and Systems for Video Technology, 1996, 6(3): 243-250.); after the wavelet coefficients have been SPIHT-encoded, the code stream is further Huffman-encoded to optimize its 0/1 representation, yielding an optimized compressed code stream; the optimized compressed code stream is then passed through Huffman decoding, SPIHT decoding and the inverse wavelet transform in turn to obtain a reconstructed image in which some high-frequency information is lost.
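The 5-level wavelet decomposition that feeds SPIHT can be sketched with PyWavelets (an assumed dependency; the biorthogonal wavelet family is illustrative). SPIHT itself is a longer algorithm and is not reproduced here; only the transform and its inverse are shown.

```python
# 5-level 2-D wavelet decomposition / reconstruction sketch (assumed: PyWavelets).
import pywt

def wavelet_decompose(image, wavelet="bior4.4", levels=5):
    return pywt.wavedec2(image, wavelet=wavelet, level=levels)

def wavelet_reconstruct(coeffs, wavelet="bior4.4"):
    return pywt.waverec2(coeffs, wavelet=wavelet)

# coeffs = wavelet_decompose(gray_image)     # input to SPIHT encoding
# recon = wavelet_reconstruct(coeffs)        # inverse transform after SPIHT decoding
```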
Step 4: add the edge-reconstructed image obtained in step 2 of step 201 to the reconstructed image with high-frequency loss obtained in step 3, thereby restoring the original image.
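The fusion step amounts to a pixel-wise addition followed by clipping back to the 8-bit range, for example:

```python
# Fusion sketch: combine the SPIHT reconstruction with the edge reconstruction.
import numpy as np

def fuse(reconstruction, edge_reconstruction):
    fused = reconstruction.astype(np.float32) + edge_reconstruction.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```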
Step 301, importing the programs of step 101 and step 201 into the RAM of a single-chip microcomputer; a 5G network communication chip is inserted into the microcomputer so that transmission can be carried out over the 5G network, and the microcomputer has a USB interface so that it can be plugged into a hospital computer, thereby realizing the retrieval and compression of medical images and resource sharing among different hospitals.