CN109493359A - Skin lesion image segmentation method based on a deep network - Google Patents

Skin lesion image segmentation method based on a deep network
Download PDF

Info

Publication number
CN109493359A
Authority
CN
China
Prior art keywords
picture
segmentation
skin
convolutional neural
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811393429.6A
Other languages
Chinese (zh)
Inventor
杨猛
罗文锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN201811393429.6A
Publication of CN109493359A (en)
Legal status: Pending

Links

Classifications

Landscapes

Abstract

The present invention relates to the field of artificial intelligence, and more specifically to a skin lesion image segmentation method based on a deep network. The invention performs the segmentation task without hand-crafted skin-picture features; instead, it uses the training data to learn deep convolutional features suited to the segmentation task. The preprocessing of the invention is very simple, consisting only of normalizing the picture pixel values. Moreover, whereas TDLS and Jafari use filter-based preprocessing to cope with large variations in illumination and contrast, the present invention enriches the training data through data augmentation and lets the model learn the optimal feature representation for segmentation on its own. The invention surpasses existing methods on the true-positive-rate metric, and its runtime on both GPU and CPU is far below that of existing models, enabling real-time skin image segmentation. The invention also uses a fully connected conditional random field as a post-processing method, which effectively exploits low-level texture and color features to sharpen the segmentation of edge regions.

Description

Skin lesion image segmentation method based on a deep network
Technical field
The present invention relates to the field of artificial intelligence, and more particularly to a skin lesion image segmentation method based on a deep network.
Background technique
Current skin image segmentation methods can be divided into two major classes according to the type of image used: methods based on dermoscopy images and methods based on images taken with ordinary cameras. For the segmentation of dermoscopy images, many studies have achieved good results. However, acquiring dermoscopy images is relatively complex and costly, which has become a bottleneck for the related techniques. Current segmentation techniques therefore tend to focus on skin pictures taken with ordinary cameras. As the photographic capabilities of mobile phones and other mobile devices improve, high-definition skin pictures are easy to obtain. Since these ordinary skin pictures are strongly affected by illumination, shooting angle and other factors and vary greatly, they also place higher demands on segmentation techniques.
There are many research results for skin pictures taken with ordinary cameras. For example, Jeffrey proposed in 2012 the TDLS segmentation method, which exploits the texture saliency of skin pictures, and Jafari et al. proposed in 2016 a segmentation model based on convolutional neural networks. However, the TDLS method relies on hand-crafted features that cannot effectively target the current segmentation task, resulting in low segmentation accuracy; moreover, the method is inefficient, needing nearly a minute to produce a complete segmentation of a single skin picture, which makes for a poor user experience. Building on this, Jafari proposed a segmentation method based on deep convolutional networks, which automatically learns the required segmentation features from training samples and effectively improves segmentation performance. However, because this method extracts a fixed window around each pixel position and feeds it through the network to obtain that pixel's result, the total segmentation time is approximately: number of picture pixels × network runtime. Even with batched inputs, the speed is only slightly improved. The runtime of Jafari's method on a GPU is substantially improved, but its runtime on a CPU remains unsatisfactory and cannot achieve real-time segmentation. In addition, methods based on deep convolutional networks have an intrinsic problem: the output segmentation is coarse and cannot fully preserve the edge information of the original picture.
Summary of the invention
The present invention performs the segmentation task without hand-crafted skin-picture features; instead, it uses the training data to learn deep convolutional features suited to the segmentation task. The preprocessing of the invention is very simple, consisting only of normalizing the picture pixel values. Moreover, whereas TDLS and Jafari use filter-based preprocessing to cope with large variations in illumination and contrast, the present invention enriches the training data through data augmentation and lets the model learn the optimal feature representation for segmentation on its own. The invention surpasses existing methods on the true-positive-rate metric, and its runtime on both GPU and CPU is far below that of existing models, enabling real-time skin image segmentation. The invention also uses a fully connected conditional random field as a post-processing method, which effectively exploits low-level texture and color features to sharpen the segmentation of edge regions.
To achieve the above objective, the technical solution adopted is as follows:
A skin lesion image segmentation method based on a deep network, comprising the following steps:
Step S1: enhancing and preprocessing the test images;
Step S2: inputting the preprocessed test images into a convolutional neural network for training to obtain preliminary segmentation results and probability outputs, and adjusting the parameters of the convolutional neural network according to the preliminary segmentation results and probability outputs;
Step S3: enhancing and preprocessing the training images;
Step S4: inputting the preprocessed training images into the trained convolutional neural network to obtain preliminary segmentation results and probability outputs;
Step S5: iteratively refining the segmentation results and probability outputs in a fully connected conditional random field to obtain the final segmentation result.
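The five steps above can be sketched end to end. The following is a minimal Python sketch, not the patented implementation: `forward` and `crf_refine` are hypothetical stand-ins for the trained fully convolutional network (steps S2/S4) and the fully connected CRF post-processing (step S5).

```python
import numpy as np

def preprocess(img):
    """Steps S1/S3: normalize pixel values to zero mean, unit variance."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)

def segment(img, forward, crf_refine):
    """Pipeline: preprocess -> network forward pass -> CRF refinement."""
    x = preprocess(img)
    prob = forward(x)             # per-pixel foreground probability
    return crf_refine(prob, img)  # refined binary mask

# Toy stand-ins so the sketch runs: a sigmoid "network" and a thresholding "CRF".
rng = np.random.default_rng(0)
img = rng.random((8, 8))
mask = segment(img,
               forward=lambda x: 1.0 / (1.0 + np.exp(-x)),
               crf_refine=lambda p, im: (p > 0.5).astype(np.uint8))
```

The stand-ins only illustrate the data flow; a real system would substitute the trained network's forward pass and a dense-CRF inference routine.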
Preferably, step S1 specifically includes the following steps:
Step S101: cropping a tight rectangle (box 1) in the picture that just encloses the damaged skin region;
Step S102: randomly cropping a second rectangle (box 2) that contains box 1;
Step S103: rescaling the randomly cropped picture to a fixed picture size;
Step S104: after rescaling, introducing random noise into the picture, including randomly changing its brightness and contrast;
Step S105: normalizing the picture pixel values so that the processed picture has mean 0 and variance 1.
Preferably, the fixed picture size in step S103 is 224 × 224.
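Steps S101 to S105 can be illustrated with a short NumPy-only sketch. The jitter ranges and the nearest-neighbour resize are illustrative assumptions (the patent does not specify them), and a real implementation would crop the ground-truth mask with the same rectangle as the image.

```python
import numpy as np

rng = np.random.default_rng(0)

def tight_bbox(mask):
    """S101: smallest rectangle that just encloses the lesion (mask > 0)."""
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def random_enclosing_crop(img, mask):
    """S102: random larger rectangle that still contains the tight box."""
    y0, y1, x0, x1 = tight_bbox(mask)
    h, w = img.shape[:2]
    top, left = rng.integers(0, y0 + 1), rng.integers(0, x0 + 1)
    bottom, right = rng.integers(y1, h + 1), rng.integers(x1, w + 1)
    return img[top:bottom, left:right]

def resize_nn(img, size=224):
    """S103: rescale to a fixed size (nearest neighbour, to avoid extra deps)."""
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[ys][:, xs]

def photometric_jitter(img):
    """S104: random brightness/contrast noise (illustrative ranges)."""
    return img * rng.uniform(0.8, 1.2) + rng.uniform(-0.1, 0.1)

def normalize(img):
    """S105: zero mean, unit variance."""
    return (img - img.mean()) / (img.std() + 1e-8)

img = rng.random((64, 80))
mask = np.zeros((64, 80)); mask[20:40, 30:50] = 1
out = normalize(photometric_jitter(resize_nn(random_enclosing_crop(img, mask))))
```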
Preferably, step S2 specifically includes the following steps:
Step S201: the energy function of the conditional random field is defined as follows:

E(y) = Σ_i ψ_u(y_i) + Σ_{i<j} ψ_p(y_i, y_j)

Here y is the prediction of the fully convolutional neural network and the subscript i indexes pixel positions. The first term of the energy function is the unary potential ψ_u(y_i) = −log P(y_i), where P(y_i) is the probability with which the network predicts label y_i at pixel position i.
Step S202: the second term of the energy function is defined as:

ψ_p(y_i, y_j) = μ(y_i, y_j) Σ_m ω^(m) κ^(m)(f_i, f_j)

where μ is the label compatibility function, f_i and f_j are the picture features at pixel positions i and j, and κ^(m) is the m-th kernel function with weight ω^(m).
Step S203: the following two kernel functions (the standard appearance and smoothness kernels, with bandwidth parameters θ_α, θ_β, θ_γ) are used:

κ^(1)(f_i, f_j) = exp(−‖p_i − p_j‖² / 2θ_α² − ‖I_i − I_j‖² / 2θ_β²)
κ^(2)(f_i, f_j) = exp(−‖p_i − p_j‖² / 2θ_γ²)

where μ(y_i, y_j) = [y_i ≠ y_j], and the feature input of the kernels consists of pixel positions and RGB color information, i.e. p_i, p_j, I_i, I_j in the formulas.
Preferably, in step S2 the convolutional neural network is trained with a two-class cross-entropy loss function.
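The two-class cross-entropy loss mentioned here can be written out directly. This is a generic NumPy sketch of that loss, not code from the patent.

```python
import numpy as np

def bce_loss(prob, target):
    """Per-pixel two-class cross-entropy, averaged over the image.
    prob: predicted foreground probability; target: binary ground-truth mask."""
    eps = 1e-12
    p = np.clip(prob, eps, 1 - eps)   # avoid log(0)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

# A confident correct prediction costs much less than an uncertain one.
good = bce_loss(np.array([0.99, 0.01]), np.array([1.0, 0.0]))
unsure = bce_loss(np.array([0.5, 0.5]), np.array([1.0, 0.0]))
```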
Compared with the prior art, the beneficial effects of the present invention are:
1. The present invention proposes an effective data augmentation scheme. Existing data augmentation crops windows at random, so some cropped pictures cannot guarantee that the damaged skin region remains intact. In contrast, the augmentation scheme of the invention first computes a tight damaged-skin region and then crops a rectangle containing the entire region, which effectively guarantees the integrity of the damaged skin and keeps the training and test data distributions consistent.
2. The present invention learns a fully convolutional neural network on the skin picture set, so the results for all pixel positions are obtained in a single forward pass of the network. Compared with window-based models, the fully convolutional network of the invention avoids repeated computation of convolutional features, greatly reducing runtime on both CPU and GPU and enabling real-time segmentation.
3. The segmentation performance of the invention is good.
4. The present invention uses a fully connected conditional random field as a post-processing method, which can sharpen the segmentation of edge regions. Neither window-based models nor fully convolutional networks take low-level image characteristics into account, so their segmentation results fail to preserve low-level structure (such as texture and color). As a graphical model, the fully connected conditional random field can make full use of this information to sharpen the segmentation at the edges of the damaged skin and to remove small erroneously segmented regions.
Detailed description of the invention
Fig. 1 is flow chart of the invention.
Fig. 2 is influence of the data enhancing to segmentation result.
Fig. 3 is the segmentation result of different dividing methods.
Fig. 4 is that the time efficiency of different dividing methods compares.
Specific embodiment
The attached figures are only for illustrative purposes and are not to be construed as limiting the patent.
Below in conjunction with drawings and examples, the present invention is further elaborated.
Embodiment 1
As shown in Fig. 1, a skin lesion image segmentation method based on a deep network comprises the following steps:
Step S1: enhancing and preprocessing the test images;
Step S2: inputting the preprocessed test images into a convolutional neural network for training to obtain preliminary segmentation results and probability outputs, and adjusting the parameters of the convolutional neural network according to the preliminary segmentation results and probability outputs;
Step S3: enhancing and preprocessing the training images;
Step S4: inputting the preprocessed training images into the trained convolutional neural network to obtain preliminary segmentation results and probability outputs;
Step S5: iteratively refining the segmentation results and probability outputs in a fully connected conditional random field to obtain the final segmentation result.
Preferably, step S1 specifically includes the following steps:
Step S101: cropping a tight rectangle (box 1) in the picture that just encloses the damaged skin region;
Step S102: randomly cropping a second rectangle (box 2) that contains box 1;
Step S103: rescaling the randomly cropped picture to a fixed picture size;
Step S104: after rescaling, introducing random noise into the picture, including randomly changing its brightness and contrast;
Step S105: normalizing the picture pixel values so that the processed picture has mean 0 and variance 1.
Preferably, the fixed picture size in step S103 is 224 × 224.
Preferably, step S2 specifically includes the following steps:
Step S201: the energy function of the conditional random field is defined as follows:

E(y) = Σ_i ψ_u(y_i) + Σ_{i<j} ψ_p(y_i, y_j)

Here y is the prediction of the fully convolutional neural network and the subscript i indexes pixel positions. The first term of the energy function is the unary potential ψ_u(y_i) = −log P(y_i), where P(y_i) is the probability with which the network predicts label y_i at pixel position i.
Step S202: the second term of the energy function is defined as:

ψ_p(y_i, y_j) = μ(y_i, y_j) Σ_m ω^(m) κ^(m)(f_i, f_j)

where μ is the label compatibility function, f_i and f_j are the picture features at pixel positions i and j, and κ^(m) is the m-th kernel function with weight ω^(m).
Step S203: the following two kernel functions (the standard appearance and smoothness kernels, with bandwidth parameters θ_α, θ_β, θ_γ) are used:

κ^(1)(f_i, f_j) = exp(−‖p_i − p_j‖² / 2θ_α² − ‖I_i − I_j‖² / 2θ_β²)
κ^(2)(f_i, f_j) = exp(−‖p_i − p_j‖² / 2θ_γ²)

where μ(y_i, y_j) = [y_i ≠ y_j], and the feature input of the kernels consists of pixel positions and RGB color information, i.e. p_i, p_j, I_i, I_j in the formulas.
Preferably, in step S2 the convolutional neural network is trained with a two-class cross-entropy loss function.
Embodiment 2
This embodiment compares the segmentation results and running speed of the present invention with the existing TDLS and Jafari methods.
For a fair comparison, this embodiment uses an identical experimental setup. The training stage of every model uses 126 pictures from the DermQuest database as training data, comprising 66 melanoma pictures and 60 non-melanoma pictures. Since the data are limited, a cross-validation scheme is adopted: the training data are randomly divided into 4 equal parts, 3 parts are used in turn for model training with the remaining part as the evaluation set, and the average of the 4 experimental results is taken. For evaluation metrics, true positive rate, true negative rate and accuracy are used.
Before the comparison, an experiment is first run to verify the necessity of the data augmentation module; the results are shown in Fig. 2. In the data-augmentation column, × indicates that no data augmentation was used and √ indicates that the proposed data augmentation was used. Data augmentation clearly affects the true positive rate, improving it by more than 12 percentage points.
Fig. 3 gives the segmentation results of the different methods. The segmentation result of the invention is higher than those of the TDLS and Jafari methods on the true-positive-rate metric.
Fig. 4 compares the runtimes of the different segmentation methods. To evaluate the model runtimes accurately, the different methods are run on the same machine. Since the Jafari method is more accurate than TDLS, only the proposed method and Jafari are compared here. Each method is run 10 times, and the mean of these 10 runtimes is taken as the method's runtime. To obtain the segmentation of a 400×600 picture, the Jafari method must loop more than 1800 times (with a batch size of 128) to obtain a result for every pixel position, whereas the present invention, thanks to its fully convolutional neural network, obtains the result for the whole picture in a single network pass. Consequently, on both CPU and GPU the present invention is significantly faster than the Jafari method.
Obviously, the above embodiments are merely examples given to clearly illustrate the present invention and are not a limitation of its implementations. For those of ordinary skill in the art, other variations or changes in different forms can be made on the basis of the above description. It is neither necessary nor possible to exhaust all implementations. Any modifications, equivalent replacements and improvements made within the spirit and principles of the invention shall be included within the protection scope of the claims of the present invention.

Claims (5)

Translated from Chinese

1. A skin lesion image segmentation method based on a deep network, characterized by comprising the following steps:
Step S1: enhancing and preprocessing the test images;
Step S2: inputting the preprocessed test images into a convolutional neural network for training to obtain preliminary segmentation results and probability outputs, and adjusting the parameters of the convolutional neural network according to the preliminary segmentation results and probability outputs;
Step S3: enhancing and preprocessing the training images;
Step S4: inputting the preprocessed training images into the trained convolutional neural network to obtain preliminary segmentation results and probability outputs;
Step S5: iteratively refining the segmentation results and probability outputs in a fully connected conditional random field to obtain the final segmentation result.
2. The method according to claim 1, characterized in that step S1 specifically comprises:
Step S101: cropping a tight rectangle (box 1) that just encloses the damaged skin region in the picture;
Step S102: randomly cropping a second rectangle (box 2) that contains box 1;
Step S103: rescaling the randomly cropped picture to a fixed picture size;
Step S104: after rescaling, introducing random noise into the picture, including randomly changing its brightness and contrast;
Step S105: normalizing the picture pixel values so that the processed picture has mean 0 and variance 1.
3. The method according to claim 2, characterized in that the fixed picture size in step S103 is 224 × 224.
4. The method according to claim 2, characterized in that step S2 specifically comprises:
Step S201: defining the energy function of the conditional random field as
E(y) = Σ_i ψ_u(y_i) + Σ_{i<j} ψ_p(y_i, y_j),
where y is the prediction of the fully convolutional neural network, the subscript i indexes pixel positions, and the first term is the unary potential ψ_u(y_i) = −log P(y_i), with P(y_i) the probability that the network assigns label y_i to pixel i;
Step S202: defining the second term of the energy function as
ψ_p(y_i, y_j) = μ(y_i, y_j) Σ_m ω^(m) κ^(m)(f_i, f_j),
where μ is the label compatibility function, f_i and f_j are the picture features at pixel positions i and j, and κ^(m) is the m-th kernel function with weight ω^(m);
Step S203: using the following two kernel functions:
κ^(1)(f_i, f_j) = exp(−‖p_i − p_j‖² / 2θ_α² − ‖I_i − I_j‖² / 2θ_β²),
κ^(2)(f_i, f_j) = exp(−‖p_i − p_j‖² / 2θ_γ²),
where μ(y_i, y_j) = [y_i ≠ y_j], and the feature input of the kernels consists of pixel positions and RGB color information, i.e. p_i, p_j, I_i, I_j in the formulas.
5. The method according to claim 1, characterized in that in step S2 the convolutional neural network is trained with a two-class cross-entropy loss function.
CN201811393429.6A | Priority: 2018-11-21 | Filed: 2018-11-21 | Skin lesion image segmentation method based on a deep network | Pending | CN109493359A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811393429.6A | 2018-11-21 | 2018-11-21 | CN109493359A (en): Skin lesion image segmentation method based on a deep network

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201811393429.6A | 2018-11-21 | 2018-11-21 | CN109493359A (en): Skin lesion image segmentation method based on a deep network

Publications (1)

Publication Number | Publication Date
CN109493359A (en) | 2019-03-19

Family

ID=65697278

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201811393429.6A (Pending, CN109493359A (en)) | Skin lesion image segmentation method based on a deep network | 2018-11-21 | 2018-11-21

Country Status (1)

Country | Link
CN | CN109493359A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114757951A (en) * | 2022-06-15 | 2022-07-15 | 深圳瀚维智能医疗科技有限公司 | Sign data fusion method, data fusion equipment and readable storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20170039704A1 (en) * | 2015-06-17 | 2017-02-09 | Stoecker & Associates, LLC | Detection of Borders of Benign and Malignant Lesions Including Melanoma and Basal Cell Carcinoma Using a Geodesic Active Contour (GAC) Technique
CN107203999A (en) * | 2017-04-28 | 2017-09-26 | 北京航空航天大学 | Automatic dermoscopy image segmentation method based on a fully convolutional neural network
US20180061046A1 (en) * | 2016-08-31 | 2018-03-01 | International Business Machines Corporation | Skin lesion segmentation using deep convolution networks guided by local unsupervised learning
CN107767380A (en) * | 2017-12-06 | 2018-03-06 | 电子科技大学 | High-resolution composite-field dermoscopy image segmentation method based on global dilated convolution
CN107862695A (en) * | 2017-12-06 | 2018-03-30 | 电子科技大学 | Improved image segmentation training method based on fully convolutional neural networks
CN107958271A (en) * | 2017-12-06 | 2018-04-24 | 电子科技大学 | Skin lesion deep-learning recognition system with multi-scale features based on dilated convolution
US20180122072A1 (en) * | 2016-02-19 | 2018-05-03 | International Business Machines Corporation | Structure-preserving composite model for skin lesion segmentation
CN108062756A (en) * | 2018-01-29 | 2018-05-22 | 重庆理工大学 | Image semantic segmentation method based on deep fully convolutional networks and conditional random fields
CN108256527A (en) * | 2018-01-23 | 2018-07-06 | 深圳市唯特视科技有限公司 | Multi-class semantic segmentation method for skin lesions based on an end-to-end fully convolutional network
CN108510502A (en) * | 2018-03-08 | 2018-09-07 | 华南理工大学 | Melanoma image tissue segmentation method and system based on deep neural networks
CN108830853A (en) * | 2018-07-20 | 2018-11-16 | 东北大学 | Melanoma aided diagnosis method based on artificial intelligence

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
He, Xinzi et al.: "Skin Lesion Segmentation via Deep RefineNet", Lecture Notes in Computer Science *
Meng Yang et al.: "Fast Skin Lesion Segmentation via Fully Convolutional Network with Residual Architecture and CRF", 2018 24th International Conference on Pattern Recognition (ICPR) *
Yuan Yading et al.: "Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance", IEEE Transactions on Medical Imaging *


Similar Documents

Publication | Title
CN111401516B (en) — Searching method for neural network channel parameters and related equipment
Isola et al. — Crisp boundary detection using pointwise mutual information
US10186040B2 (en) — Systems and methods for detection of significant and attractive components in digital images
CN111311520B (en) — Image processing method, device, terminal and storage medium
CN112508094A (en) — Junk picture identification method, device and equipment
KR20180065889A (en) — Method and apparatus for detecting target
Zhang et al. — Learning of structured graph dictionaries
CN110874604A (en) — Model training method and terminal equipment
EP3932086B1 (en) — Scalable architecture for automatic generation of content distribution images
CN106204482B (en) — Mixed-noise removal method based on weighted sparsity
CN109472193A (en) — Face detection method and device
CN116258651B (en) — Image processing method and related device
CN111583259B (en) — Document image quality evaluation method
CN111090778A (en) — Picture generation method, device, equipment and storage medium
CN114092827B (en) — Method for generating an image dataset
CN108319672A (en) — Mobile terminal malicious information filtering method and system based on cloud computing
CN109348287A (en) — Video summary generation method, device, storage medium and electronic device
Wang et al. — Detection of glands and villi by collaboration of domain knowledge and deep learning
CN113538304A (en) — Training method and device of image enhancement model, and image enhancement method and device
CN109493359A (en) — Skin lesion image segmentation method based on a deep network
Chen et al. — Enhancement of edge-based surveillance videos based on bilateral filtering
CN110490876B (en) — Image segmentation method based on a lightweight neural network
CN107315985B (en) — Iris recognition method and terminal
CN115439863A (en) — Deep-learning-based ancient seal character recognition method and system
CN114004974A (en) — Method and device for optimizing images captured in low-light environments

Legal Events

Code | Title
PB01 — Publication
SE01 — Entry into force of request for substantive examination
RJ01 — Rejection of invention patent application after publication (application publication date: 2019-03-19)
