CN109509187B - An Efficient Inspection Algorithm for Small Defects in Large Resolution Cloth Images - Google Patents

An Efficient Inspection Algorithm for Small Defects in Large Resolution Cloth Images

Info

Publication number
CN109509187B
Authority
CN
China
Prior art keywords
image
feature
model
training
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811336000.3A
Other languages
Chinese (zh)
Other versions
CN109509187A (en)
Inventor
陈楚城 (Chen Chucheng)
戴宪华 (Dai Xianhua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN201811336000.3A
Publication of CN109509187A
Application granted
Publication of CN109509187B
Expired - Fee Related (current legal status)
Anticipated expiration


Abstract

Translated from Chinese

The present invention relates to an efficient inspection algorithm for small defects in large-resolution cloth images, comprising: (1) collecting images with a camera and annotating them with the labelImg tool; (2) splitting the processed images into a training set and a test set, the training set being used to train the inspection model and the test set to evaluate its performance; (3) feeding the training-set images, together with their category and position information, into the improved se-resnext101 inspection model and training it; (4) processing the test-set images with the trained model to obtain the approximate position and category of each defect. The method processes multi-scale feature maps, and therefore multi-scale image blocks, from a single-resolution input image, so it can accommodate defects of many different sizes and greatly improves detection accuracy and speed; at the same time, the algorithm obtains the approximate defect positions and handles images containing several kinds of defects within an image-classification framework.

Description

Translated from Chinese
An Efficient Inspection Algorithm for Small Defects in Large Resolution Cloth Images

Technical Field

The invention relates to the field of image classification, and in particular to an efficient inspection algorithm for small defects in large-resolution cloth images.

Background

Earlier image classification relied mainly on traditional machine-learning methods, which usually fall into two families: feature-extraction-based methods and template-matching-based methods. The feature-extraction-based methods mainly include statistics-based, spectrum-based, texture-model-based, learning-based and structure-based methods. All of these methods require hand-crafted features and do not generalize well.

With the development of deep learning, and in particular the application of convolutional neural networks to image classification, image detection and image segmentation, results have been achieved that traditional algorithms cannot match. However, deep-learning-based classification algorithms only work well when the target region occupies a large fraction of the image area in the training data. If the image has a high resolution but the classification target is small, possibly less than 1% of the total image area, the accuracy of directly applying a conventional deep-learning classifier is very low. Moreover, if an image contains targets of several categories, such a classifier cannot provide the category information of each target.

Summary of the Invention

To overcome the shortcomings of the prior art, the present invention proposes an efficient inspection algorithm for small defects in large-resolution cloth images. The method processes multi-scale feature maps, and therefore multi-scale image blocks, from a single-resolution input image, so it can effectively handle defects of many different sizes and greatly improve detection accuracy and speed; at the same time, the method obtains the approximate defect regions within an image-classification framework and handles images containing several kinds of defects.

To achieve the above objectives, the specific steps of the proposed method are as follows:

(1) Image acquisition. Cloth images are captured with a camera at a resolution of 2560*1920, the corresponding data set is collected and the images are renamed, e.g. 1.jpg, 2.jpg, 3.jpg, …, M.jpg. The images are then scaled to 1024*768 and annotated with the labelImg tool to obtain the defect labels. A defect label contains the coordinates (x1, y1) of the defect's upper-left corner in the image, the coordinates (x2, y2) of its lower-right corner, and the defect category defectN, where N is a number such as 1, 2, 3, …. In particular, if a captured image contains no defect, it is not processed with labelImg and only its category information norm is recorded;

(2) Image division. The images are divided into a training set and a test set that share no images; the training set is used to train the inspection model and the test set is used to evaluate its performance;

(3) Image preprocessing, including random up-down flipping, random left-right flipping and random illumination changes, all of which are applied to the training set only. In particular, when a random up-down or left-right flip is applied, the defect coordinates must be transformed accordingly, as sketched in the example below;
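For illustration only, the following is a minimal sketch of the coordinate bookkeeping required when flipping an annotated training image. It is not code from the patent; the helper name random_flip_with_boxes and the per-box layout [x1, y1, x2, y2, label] are assumptions.

```python
import random
import numpy as np

def random_flip_with_boxes(image, boxes, p=0.5):
    """Randomly flip an image left-right and/or up-down and keep the
    defect boxes consistent with the flipped image.

    image : H x W x C numpy array
    boxes : list of [x1, y1, x2, y2, label] in pixel coordinates
    """
    h, w = image.shape[:2]
    out_boxes = [list(b) for b in boxes]

    if random.random() < p:                      # left-right flip
        image = np.fliplr(image).copy()
        for b in out_boxes:
            b[0], b[2] = w - 1 - b[2], w - 1 - b[0]

    if random.random() < p:                      # up-down flip
        image = np.flipud(image).copy()
        for b in out_boxes:
            b[1], b[3] = h - 1 - b[3], h - 1 - b[1]

    return image, out_boxes
```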

(4) Training the inspection model. The preprocessed training-set images and their label information are fed into the inspection model for training. The inspection model is an improved version of se-resnext101 that produces multi-scale feature maps from a single-resolution input image. A forward pass of the model yields the category probability value of every feature point on every feature map, the classification loss is computed with the Focal Loss function, and the model is trained by back-propagation with a momentum gradient descent algorithm;

(5) Cloth image inspection. The test-set images are fed into the trained inspection model to extract features and obtain the category probability value of every feature point on the multi-scale feature maps. If all feature points in two or more of the three feature maps are classified as norm, the image is classified as norm; otherwise the image is considered to contain defects. For an image judged defective, each feature point corresponds to an image block of the original image: the predicted category of each feature point is converted into pixel values for its image block to produce a heat map, the heat maps of the different feature maps are superimposed to obtain the final heat map, the approximate defect positions are read from the final heat map, and the category of a defect is obtained by averaging the probabilities of the image blocks near it. In particular, the algorithm can handle images containing multiple defects and return the category and approximate position of each one.

The training in step (4) comprises the following steps: training based on the improved se-resnext101 model, transfer learning, two-stage learning-rate adjustment, feature extraction by the convolutional network, adaptive adjustment of feature weights, multi-scale image-block processing, computation of the Focal Loss, and back-propagation training of the model with a momentum gradient descent algorithm.

As shown in Fig. 1 and Fig. 2, step (4) is specifically:

(4.1) The final global pooling layer of the original se-resnext101 model is replaced by 3 feature-block pooling modules, each consisting of a feature-block global pooling layer and a feature-block max pooling layer in parallel. The modules themselves are also in parallel; the pooling windows within one module share the same size, but the size differs between modules. In addition, the final fully connected layer is replaced by a single 1*1 convolution with stride 1. This improved se-resnext101 model is used as the inspection model (see the sketch below);
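A minimal PyTorch sketch of this head replacement is shown below. It is an interpretation of the description rather than the patent's own code: how the block average pool and block max pool inside one module are merged (summed here), the sharing of a single 1*1 classifier across the three branches, and the pooling window sizes (14, 10 and 6 with stride 2, chosen to reproduce the 10*6, 12*8 and 14*10 feature maps stated in the embodiment) are all assumptions.

```python
import torch.nn as nn

class FeatureBlockPooling(nn.Module):
    """One of the three parallel modules: a block average pool and a block
    max pool applied in parallel to the backbone feature map."""

    def __init__(self, kernel_size, stride=2):
        super().__init__()
        self.avg = nn.AvgPool2d(kernel_size, stride)
        self.max = nn.MaxPool2d(kernel_size, stride)

    def forward(self, x):
        return self.avg(x) + self.max(x)   # merging by summation is an assumption

class InspectionHead(nn.Module):
    """Replaces the global pooling + fully connected head of se-resnext101."""

    def __init__(self, in_channels, num_classes, kernel_sizes=(14, 10, 6)):
        super().__init__()
        self.branches = nn.ModuleList(
            [FeatureBlockPooling(k) for k in kernel_sizes])
        # 1*1 convolution with stride 1 in place of the fully connected layer
        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1, stride=1)

    def forward(self, x):
        # one per-feature-point class probability map per pooled branch
        return [self.classifier(b(x)).softmax(dim=1) for b in self.branches]
```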

(4.2) The improved se-resnext101 model, i.e. the inspection model, is initialized with the weights obtained by training se-resnext101 on the ImageNet image set; only the weights other than all bias weights, the final global pooling layer, the final fully connected layer and the softmax layer are kept;
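A sketch of this selective initialization is given below, assuming PyTorch-style state dictionaries; the prefix names used for the old head ("fc.", "last_linear.") are assumptions about the pretrained checkpoint's naming.

```python
def load_pretrained_backbone(model, pretrained_state_dict):
    """Copy ImageNet-pretrained se-resnext101 weights into the improved model,
    skipping all bias terms and the old head (global pooling / fc / softmax)."""
    own_state = model.state_dict()
    kept = {}
    for name, tensor in pretrained_state_dict.items():
        if name.endswith(".bias"):                       # drop all bias weights
            continue
        if name.startswith(("fc.", "last_linear.")):     # drop the old head
            continue
        if name in own_state and own_state[name].shape == tensor.shape:
            kept[name] = tensor
    own_state.update(kept)
    model.load_state_dict(own_state)
    return model
```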

(4.3) A two-stage learning-rate schedule is used during training: in the initial stage, only the last three layers of the model, including the feature-block pooling modules, are trained at a chosen learning rate while the weights of all other layers are kept fixed; after a number of epochs (each epoch traverses every image in the training set), the last three layers are trained with a larger learning rate and the remaining layers with a smaller one, and the learning rates are decayed according to a fixed rule;

(4.4) The training images are fed into the improved se-resnext101 model and features are extracted by convolution operations, enlarging the receptive field of the feature maps. The squeeze-and-excitation sub-modules contained in the original se-resnext101 model let the network adaptively re-weight features, highlighting effective features and suppressing ineffective ones, so that both the feature-space and the feature-channel dimension are improved;
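For reference, a squeeze-and-excitation sub-module of the kind contained in se-resnext101 can be sketched as follows; this is the generic SE block from the SENet literature (the reduction ratio of 16 is the commonly used value), not code taken from the patent.

```python
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel re-weighting: squeeze by global average pooling, then excite
    through a two-layer bottleneck with sigmoid gating."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w          # scale each channel by its learned weight
```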

(4.5) The feature map output by the last convolutional layer is processed by the 3 parallel feature-block pooling sub-modules shown in Fig. 3; the pooling windows within one module share the same size, while different modules use different sizes, yielding 3 feature maps of different sizes. A 1*1 convolution with stride 1 is applied to each of these feature maps, and softmax is then used to compute the category probability value of every feature point on the 3 feature maps;

(4.6) Because of the receptive field, each feature point on a feature map corresponds to an image block of the original image. From the positions of the defects in the image, the true category of every image block, and hence of the corresponding feature point, can be determined. The classification loss is computed with the Focal Loss function from the predicted category probability values and the true category information, and finally the inspection model parameters are updated by back-propagation with a momentum gradient descent algorithm.
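A standard multi-class Focal Loss, written per feature point, is sketched below; the patent writes the focusing exponent as β (set to 2 in the embodiment), which corresponds to the gamma parameter here, and the exact weighting variant used in the patent is an assumption.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t).

    logits  : (N, num_classes) raw scores, one row per feature point
    targets : (N,) integer class indices (norm or one of the defect classes)
    """
    log_probs = F.log_softmax(logits, dim=1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()
```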

Step (5) is specifically: the test image is fed into the inspection model and, through forward propagation, feature maps with increasing receptive field and decreasing resolution are obtained. The feature map output by the last convolutional layer is processed by the 3 parallel feature-block pooling modules to obtain 3 feature maps of different sizes; a 1*1 convolution with stride 1 is applied to each of them, and a softmax operation finally yields the category probability value of every feature point on every feature map. If all feature points in two or more of the three feature maps are classified as norm, the image is classified as norm; otherwise the image is considered to contain defects. For a defective image, the pixels of the image blocks that norm feature points map back to in the original image are set to 0, and the pixels of the blocks that all other feature points map back to are set to 1; in this way 3 heat maps are obtained and superimposed into the final heat map, from which the approximate defect positions are read. The category of a defect is decided by the mean of the category probabilities of the feature points judged to be defective.
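The heat-map construction can be sketched as follows. This is an illustration under stated assumptions, not the patent's code: the function name, the representation of each branch as a (predicted classes, block size, stride) triple, and the treatment of blocks that run past the image border are all assumptions.

```python
import numpy as np

def build_heatmap(branch_preds, image_shape, norm_class=0):
    """Superimpose per-branch defect masks into one heat map.

    branch_preds : list of (pred_classes, block_size, stride) triples, where
                   pred_classes is an (H_f, W_f) array of predicted classes
                   for that branch's feature map
    image_shape  : (H, W) of the original image, e.g. (768, 1024)
    """
    H, W = image_shape
    heatmap = np.zeros((H, W), dtype=np.float32)
    for pred_classes, block, stride in branch_preds:
        mask = np.zeros((H, W), dtype=np.float32)
        h_f, w_f = pred_classes.shape
        for i in range(h_f):
            for j in range(w_f):
                if pred_classes[i, j] != norm_class:      # defect block -> 1
                    y0, x0 = i * stride, j * stride
                    mask[y0:y0 + block, x0:x0 + block] = 1.0
        heatmap += mask                                    # superimpose the branches
    return heatmap
```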

Compared with the prior art, the beneficial effects of the present invention are as follows:

Based on the improved se-resnext101 model, the invention processes multi-scale feature maps, and therefore multi-scale image blocks, from a single-resolution input image, which accommodates defects of many different sizes and greatly improves detection accuracy and speed. Built on a classification-model framework, it not only predicts whether an image contains defects but also gives, for a defective image, the approximate position and category of each defect, and it handles images with several categories of defects. Transfer learning and two-stage learning-rate adjustment give the model stronger generalization and better inspection performance on new samples. Using the Focal Loss function instead of the traditional cross-entropy loss to compute the classification loss effectively addresses sample imbalance, increases the model's focus on hard samples and improves its performance.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the se-resnext101 model.

Fig. 2 is a schematic diagram of the improved se-resnext101 model.

Fig. 3 is a schematic diagram of the structure of the feature-block pooling module.

Fig. 4 compares the binary classification accuracy of the se-resnext101 model and the improved se-resnext101 model on the test set.

Fig. 5 is a histogram of the defect-category recognition accuracy of the improved se-resnext101 model on the test set.

Detailed Description

The present invention is further described below.

The implementation process and an embodiment of the present invention are as follows:

(1) Image acquisition. Cloth images were captured with a camera at a resolution of 2560*1920; 5000 cloth images were collected in total and renamed 1.jpg, 2.jpg, …, 5000.jpg. The images were then scaled to 1024*768 and annotated with the labelImg tool to obtain the defect labels. A defect label contains the coordinates (x1, y1) of the defect's upper-left corner in the image, the coordinates (x2, y2) of its lower-right corner, and the defect category defectN, where N ∈ {1, 2, 3, …, 9}: the data set contains 9 types of defects in total, namely oil stain, jumping flower, missing warp, hanging warp, thin weave, hair hole, scratch hole, hair spot and prick hole, and the order of the defects corresponds one-to-one to the number N in defectN. For convenience, the information in the xml files produced by labelImg is converted into txt files that store only the defect categories and positions; each image corresponds to one txt file with the same name. In particular, if a captured image contains no defect, it is not processed with labelImg and only its category information norm is recorded and stored in a txt file;
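The xml-to-txt conversion can be done with a few lines of Python; the sketch below assumes labelImg's default Pascal VOC xml layout and an assumed txt line format of "x1 y1 x2 y2 defectN" (the patent does not specify the exact layout of the txt file).

```python
import xml.etree.ElementTree as ET

def xml_to_txt(xml_path, txt_path):
    """Convert one labelImg annotation file into the per-image txt file:
    one 'x1 y1 x2 y2 defectN' line per defect, or a single 'norm' line."""
    root = ET.parse(xml_path).getroot()
    lines = []
    for obj in root.iter("object"):
        label = obj.find("name").text                 # e.g. "defect3"
        box = obj.find("bndbox")
        coords = [box.find(tag).text for tag in ("xmin", "ymin", "xmax", "ymax")]
        lines.append(" ".join(coords + [label]))
    with open(txt_path, "w") as f:
        f.write("\n".join(lines) if lines else "norm")
```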

(2) Image splitting. The images were divided into a training set, used to train the inspection model, and a test set, used to evaluate its performance; the training set consists of the first 4500 images of the data set and the test set of the last 500 images;

(3) Image preprocessing. Taking an image I of the training set as an example, the image I and the content of its txt file are fed into the image preprocessing module for online data augmentation and content transformation, the content of the txt file being held in a list. For an image that contains defects, the list has the following form:

[Figure BSA0000173746160000061 in the original document: the list holds one entry per defect, giving its coordinates (x1, y1, x2, y2) and its category defectN.]

For an image that contains no defects, the list is:

[norm]

Online data augmentation includes random up-down flipping, random left-right flipping and random illumination changes; in particular, when a random up-down or left-right flip is applied, the defect coordinates are transformed accordingly;

(4) Building the inspection model. As shown in Fig. 1 and Fig. 2, the final global pooling layer of the original se-resnext101 model is replaced by 3 parallel feature-block pooling modules, each consisting of a feature-block global pooling layer and a feature-block max pooling layer in parallel; the structure of a feature-block pooling module is shown in Fig. 3. The pooling windows within one module share the same size, but the size differs between modules. In addition, the final fully connected layer is replaced by a single 1*1 convolution with stride 1, and the improved se-resnext101 model is used as the inspection model;

The parameter settings of the feature-block pooling modules used in this implementation are shown in Table 1:

Table 1. Parameter settings of the feature-block pooling modules

[Table 1 is rendered as Figure BSA0000173746160000062 in the original document; the resulting feature-map resolutions and corresponding image-block sizes are given in the following paragraph.]

After the image I with resolution 1024*768 has passed through the forward propagation of the improved se-resnext101 model, the feature map output by module 5 of the improved se-resnext101 has a resolution of 32*24; after processing by the 3 parallel feature-block pooling modules, the resulting feature maps have resolutions of 10*6, 12*8 and 14*10. A feature point on the 10*6 feature map corresponds to a 448*448 image block of the original image with a sliding stride of 64; a feature point on the 12*8 feature map corresponds to a 320*320 image block with a sliding stride of 64; and a feature point on the 14*10 feature map corresponds to a 192*192 image block with a sliding stride of 64. In this way image blocks of 3 different sizes are obtained, which effectively solves the problem caused by small defects occupying too small a fraction of the original image and also handles defects of several scales, making the inspection performance of the model more robust.
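The mapping from a feature point to its image block follows directly from the block size and the sliding stride of 64; a small sketch consistent with the sizes stated above (the function name is an assumption):

```python
def feature_point_to_block(i, j, block_size, stride=64):
    """Map feature point (row i, column j) of a pooled feature map to the
    pixel rectangle (x1, y1, x2, y2) it covers in the 1024*768 input image."""
    x1, y1 = j * stride, i * stride
    return x1, y1, x1 + block_size, y1 + block_size

# e.g. the last feature point (i=5, j=9) of the 10*6 branch (448*448 blocks)
print(feature_point_to_block(5, 9, 448))   # -> (576, 320, 1024, 768)
```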

(5) Training the inspection model. The training-set images and their label information are fed into the inspection model; forward propagation yields the category probability value of every feature point on every feature map, the classification loss is computed with the Focal Loss function, and the inspection model is trained with a momentum gradient descent algorithm to obtain its network parameters;

In this implementation the momentum is set to 0.9, one image is fed in at a time, 4500 steps constitute one epoch, and 50 epochs are run in total. For the first 10 epochs the learning rate of the last three layers of the model is set to 0.001 and that of the other layers to 0; for the subsequent epochs the learning rate of the last three layers is set to 0.0005 and that of the other layers to 0.00005, and every 4 epochs the learning rates are multiplied by 0.94. In the Focal Loss function, α is set to 0.25 and β to 2. After training, the parameters of the inspection model are saved.
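A minimal sketch of how this two-stage schedule could be wired up in PyTorch is given below. The split of the model into a "head" (the last three layers, including the feature-block pooling modules) and a "backbone" is hypothetical, the two tiny convolutions only stand in for the real layers, and when exactly the 0.94 decay counter starts is an assumption.

```python
import torch
import torch.nn as nn

# Stand-ins for the real inspection model's two parameter groups.
model = nn.ModuleDict({"backbone": nn.Conv2d(3, 64, 3),
                       "head": nn.Conv2d(64, 10, 1)})

optimizer = torch.optim.SGD([
    {"params": model["head"].parameters(),     "lr": 1e-3},  # stage 1: head only
    {"params": model["backbone"].parameters(), "lr": 0.0},   # backbone frozen
], momentum=0.9)

def adjust_learning_rate(optimizer, epoch):
    """First 10 epochs: head 0.001, backbone 0.  Afterwards: head 0.0005,
    backbone 0.00005, both multiplied by 0.94 every 4 epochs."""
    if epoch < 10:
        lrs = (1e-3, 0.0)
    else:
        decay = 0.94 ** ((epoch - 10) // 4)
        lrs = (5e-4 * decay, 5e-5 * decay)
    for group, lr in zip(optimizer.param_groups, lrs):
        group["lr"] = lr

for epoch in range(50):            # 50 epochs, 4500 images (steps) per epoch
    adjust_learning_rate(optimizer, epoch)
    # ... forward pass, focal loss, loss.backward(), optimizer.step() ...
```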

(6) Cloth image inspection. The test-set images are fed into the trained inspection model to extract features and obtain the category probability value of every feature point on the multi-scale feature maps. If all feature points in two or more of the three feature maps are classified as norm, the image is classified as norm; otherwise the image is considered to contain defects. For an image judged defective, each feature point corresponds to an image block of the original image: the predicted category of each feature point is converted into pixel values for its image block to produce a heat map, the heat maps of the different feature maps are superimposed to obtain the final heat map, the approximate defect positions are read from the final heat map, and the category of a defect is obtained by averaging the probabilities of the image blocks near it. In particular, the algorithm can handle images containing multiple defects and return the category and approximate position of each one.

Finally, this embodiment was tested on the cloth test set. Fig. 4 shows the accuracy of the se-resnext101 model and of the improved se-resnext101 model on the binary classification task, i.e. distinguishing normal samples from defective samples; the accuracy of the improved se-resnext101 model is clearly higher than that of the original, and the test results show that the method can efficiently inspect small defects in large-resolution images. Fig. 5 shows the defect-classification accuracy of the improved se-resnext101 model on the test set; the method can detect, for images containing several kinds of defects, the defect categories present in the image sub-blocks.

Claims (2)

1. An efficient inspection algorithm for small defects in large resolution cloth images, comprising the steps of:
(1) Acquiring images: shooting cloth images with a camera at a resolution of 2560 × 1920, acquiring the related data set, renaming the images, scaling the images to 1024 × 768, labeling the shot images with the labelImg tool, and acquiring the labels of the defects in the images, wherein a defect label comprises the coordinates (x1, y1) of the upper-left corner of the defect, the coordinates (x2, y2) of the lower-right corner and the defect category defectN, wherein N represents a number, and if a shot image has no defect, only its category information norm is recorded without processing it with labelImg;
(2) Dividing the images into a training set and a test set, wherein the two parts have no image in common, the training set is used for training the inspection model, and the test set is used for evaluating the performance of the inspection model;
(3) Image preprocessing, including random up-down flipping, random left-right flipping and random illumination change, wherein the random up-down flipping, random left-right flipping and random illumination change are applied to the training set only, and when random up-down flipping or random left-right flipping is carried out, the coordinate information of the defects is changed correspondingly;
(4) Training the inspection model: inputting the preprocessed training-set images and label information into the inspection model for training, wherein the inspection model is improved on the basis of se-resnext101 so that the network obtains multi-scale feature maps from an input image of a single resolution, obtaining the category probability value of every feature point on every feature map through forward propagation of the inspection model, calculating the classification loss with the Focal Loss function, and training the inspection model by back-propagation with a momentum gradient descent algorithm;
(5) Performing cloth image inspection: inputting the images of the test set into the trained inspection model to extract features and acquire the category probability value of every feature point on the multi-scale feature maps; if all the feature points in two or more of the three feature maps are judged to be norm, the image is considered to be of category norm, and in other cases defects are considered to exist in the image; for an image judged to have defects, each feature point corresponds to a certain image block of the original image, a heat map is obtained by converting the pixel values of the corresponding image blocks according to the predicted categories of the feature points, the heat maps corresponding to the several feature maps are superimposed to obtain a final heat map, the approximate positions of the defects are obtained from the final heat map, and the category of a defect is obtained by taking the mean probability of the image blocks near the defect, whereby the algorithm can handle images containing multiple defects and obtain the category and approximate position of each defect;
the training in step (4) comprises a training step based on the improved se-resnext101 model, a transfer-learning step, a two-stage learning-rate adjustment step, a convolutional-network feature extraction step, an adaptive feature-weight adjustment step, a multi-scale feature-map processing step, a Focal Loss calculation step and a step of back-propagation training of the model with a momentum gradient descent algorithm;
step (4) is specifically as follows:
(4.1) replacing the last global pooling layer of the original se-resnext101 model with 3 feature-block pooling modules each consisting of a feature-block global pooling layer and a feature-block max pooling layer in parallel, wherein the modules are in a parallel relation, the pooling layers within one module have the same size but the sizes differ between modules, and replacing the last fully connected layer with one convolution of size 1 × 1 and stride 1, the improved se-resnext101 model being used as the inspection model;
(4.2) initializing the improved se-resnext101 model, i.e. the inspection model, with the weights obtained by training the se-resnext101 model on the ImageNet image set, preserving only the weights other than all bias weights, the last global pooling layer, the last fully connected layer and the softmax layer;
(4.3) adjusting the learning rate of the network with a two-stage learning rate during model training, namely training the last three layers of the model, including the feature-block pooling modules, at a certain learning rate in an initial stage while keeping the weights of the other layers of the model unchanged, then, after training for several iteration cycles, using a larger learning rate for the last three layers of the model and a smaller learning rate for the other layers, and reducing the learning rates according to a certain rule, each iteration cycle traversing all images in the training set;
(4.4) inputting the training images into the improved se-resnext101 model, extracting features by convolution operations, enlarging the receptive field of the feature maps, and using the squeeze-and-excitation sub-modules contained in the original se-resnext101 model to enable the network to adaptively adjust feature weights, highlight effective features and suppress ineffective features, so that both the feature-space and the feature-channel dimension are improved;
(4.5) applying 3 parallel feature-block pooling sub-modules to the feature map output by the last convolutional layer, wherein the pooling layers within one module have the same size but the sizes differ between modules, so as to obtain 3 feature maps of different sizes; performing a convolution of size 1 × 1 and stride 1 on the obtained feature maps, and then calculating with softmax the category probability value of every feature point on the 3 feature maps of different sizes;
(4.6) owing to the receptive field, the feature points on a feature map correspond to image blocks of the original image, the true category of each image block, and hence of the corresponding feature point, is obtained from the positions of the defects in the image, the classification loss is calculated with the Focal Loss function from the predicted category probability values and the true category information, and finally the inspection model parameters are updated by back-propagation with a momentum gradient descent algorithm.
2. The efficient inspection algorithm for small defects in large resolution cloth images of claim 1, wherein step (5) is specifically: inputting a test image into the inspection model, acquiring through forward propagation feature maps with increasing receptive field and decreasing resolution, processing the feature map output by the last convolutional layer with the 3 parallel feature-block pooling modules to acquire 3 feature maps of different sizes, performing a convolution of size 1 × 1 and stride 1 on the 3 feature maps of different sizes, and finally acquiring with a softmax operation the category probability value of every feature point on every feature map; if all the feature points in two or more of the three feature maps are judged to be norm, the image is considered to be of category norm, and in other cases defects are considered to exist in the image; 3 heat maps are obtained by setting to 0 the pixel values of the image blocks to which the feature points judged to be norm map back in the original image and setting to 1 the pixel values of the image blocks to which the other feature points map back, the 3 heat maps are superimposed to obtain a final heat map, the approximate positions of the defects are obtained from the final heat map, and the category of a defect is determined by the mean of the category probabilities of the several feature points judged to be defects.
CN201811336000.3A | 2018-11-05 | 2018-11-05 | An Efficient Inspection Algorithm for Small Defects in Large Resolution Cloth Images | Expired - Fee Related | CN109509187B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811336000.3A (CN109509187B (en)) | 2018-11-05 | 2018-11-05 | An Efficient Inspection Algorithm for Small Defects in Large Resolution Cloth Images

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201811336000.3A (CN109509187B (en)) | 2018-11-05 | 2018-11-05 | An Efficient Inspection Algorithm for Small Defects in Large Resolution Cloth Images

Publications (2)

Publication Number | Publication Date
CN109509187A (en) | 2019-03-22
CN109509187B (en) | 2022-12-13

Family

ID=65748135

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201811336000.3A (CN109509187B, Expired - Fee Related) | An Efficient Inspection Algorithm for Small Defects in Large Resolution Cloth Images | 2018-11-05 | 2018-11-05

Country Status (1)

Country | Link
CN (1) | CN109509187B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110009629A (en)* | 2019-04-12 | 2019-07-12 | 北京天明创新数据科技有限公司 | A kind of pneumoconiosis screening system and its data training method
CN110321805B (en)* | 2019-06-12 | 2021-08-10 | 华中科技大学 | Dynamic expression recognition method based on time sequence relation reasoning
CN110610482A (en)* | 2019-08-12 | 2019-12-24 | 浙江工业大学 | A Resnet-based Workpiece Defect Detection Method
CN110504029B (en)* | 2019-08-29 | 2022-08-19 | 腾讯医疗健康(深圳)有限公司 | Medical image processing method, medical image identification method and medical image identification device
CN111028250A (en)* | 2019-12-27 | 2020-04-17 | 创新奇智(广州)科技有限公司 | Real-time intelligent cloth inspecting method and system
CN111340783A (en)* | 2020-02-27 | 2020-06-26 | 创新奇智(广州)科技有限公司 | Real-time cloth defect detection method
CN111325744A (en)* | 2020-03-06 | 2020-06-23 | 成都数之联科技有限公司 | Sample processing method in panel defect detection
CN112053357A (en)* | 2020-09-27 | 2020-12-08 | 同济大学 | FPN-based steel surface flaw detection method
CN112348776A (en)* | 2020-10-16 | 2021-02-09 | 上海布眼人工智能科技有限公司 | Fabric flaw detection method based on EfficientDet
CN113689381B (en)* | 2021-07-21 | 2024-02-27 | 航天晨光股份有限公司 | Corrugated pipe inner wall flaw detection model and detection method
CN113808035B (en)* | 2021-08-25 | 2024-04-26 | 厦门微图软件科技有限公司 | Flaw detection method based on semi-supervised learning
CN116128788A (en)* | 2021-11-12 | 2023-05-16 | 先进半导体材料(深圳)有限公司 | Lead frame shipment method
CN115410024A (en)* | 2022-05-27 | 2022-11-29 | 福建亿榕信息技术有限公司 | Power image defect detection method based on dynamic activation thermodynamic diagram
CN115797349B (en)* | 2023-02-07 | 2023-07-07 | 广东奥普特科技股份有限公司 | Defect detection method, device and equipment
CN116304811B (en)* | 2023-02-28 | 2024-01-16 | 王宇轩 | Dynamic sample weight adjustment method and system based on focus loss function

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102331425A (en)* | 2011-06-28 | 2012-01-25 | 合肥工业大学 | Textile defect detection method based on defect enhancement
CN103456021A (en)* | 2013-09-24 | 2013-12-18 | 苏州大学 | Piece goods blemish detecting method based on morphological analysis
CN107316295A (en)* | 2017-07-02 | 2017-11-03 | 苏州大学 | A kind of fabric defects detection method based on deep neural network
CN108647723A (en)* | 2018-05-11 | 2018-10-12 | 湖北工业大学 | A kind of image classification method based on deep learning network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20010012381A1 (en)* | 1997-09-26 | 2001-08-09 | Hamed Sari-Sarraf | Vision-based, on-loom fabric inspection system
GB201704373D0 (en)* | 2017-03-20 | 2017-05-03 | Rolls-Royce Ltd | Surface defect detection

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102331425A (en)* | 2011-06-28 | 2012-01-25 | 合肥工业大学 | Textile defect detection method based on defect enhancement
CN103456021A (en)* | 2013-09-24 | 2013-12-18 | 苏州大学 | Piece goods blemish detecting method based on morphological analysis
CN107316295A (en)* | 2017-07-02 | 2017-11-03 | 苏州大学 | A kind of fabric defects detection method based on deep neural network
CN108647723A (en)* | 2018-05-11 | 2018-10-12 | 湖北工业大学 | A kind of image classification method based on deep learning network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dual-scale fabric defect detection based on sparse coding; Zhang Longjian et al.; Journal of Computer Applications; 2014-10-10; Vol. 34, No. 10; pp. 3009-3013*

Also Published As

Publication number | Publication date
CN109509187A (en) | 2019-03-22

Similar Documents

Publication | Publication Date | Title
CN109509187B (en) | An Efficient Inspection Algorithm for Small Defects in Large Resolution Cloth Images
CN113724231B (en) | Industrial defect detection method based on semantic segmentation and target detection fusion model
Wu et al. | Automatic fabric defect detection using a wide-and-light network
CN111489334B (en) | A Defective Artifact Image Recognition Method Based on Convolutional Attention Neural Network
CN112348787B (en) | Training method of object defect detection model, object defect detection method and device
CN109035233B (en) | Visual attention network system and workpiece surface defect detection method
CN114549507B (en) | Improved Scaled-YOLOv4 fabric defect detection method
WO2023077404A1 (en) | Defect detection method, apparatus and system
CN111402226A (en) | A Surface Defect Detection Method Based on Cascaded Convolutional Neural Networks
WO2022236876A1 (en) | Cellophane defect recognition method, system and apparatus, and storage medium
CN112102224B (en) | A cloth defect recognition method based on deep convolutional neural network
CN115937143A (en) | Fabric defect detection method
CN108416774A (en) | A Fabric Type Recognition Method Based on Fine-grained Neural Network
CN108827969A (en) | Metal parts surface defects detection and recognition methods and device
CN116205876B (en) | Unsupervised notebook appearance defect detection method based on multi-scale standardized flow
CN116012291A (en) | Industrial part image defect detection method and system, electronic equipment and storage medium
CN110929795B (en) | Method for quickly identifying and positioning welding spot of high-speed wire welding machine
CN112991281B (en) | Visual detection method, system, electronic equipment and medium
CN110245697B (en) | Surface contamination detection method, terminal device and storage medium
CN114897802B (en) | A metal surface defect detection method based on improved Faster RCNN algorithm
CN111340019A (en) | Detection method of granary pests based on Faster R-CNN
CN113807378A (en) | Training data increment method, electronic device and computer-readable recording medium
CN109977834B (en) | Method and device for segmenting human hand and interactive object from depth image
CN114266750A (en) | A method of daily object material recognition based on attention mechanism neural network
CN115564713A (en) | Fabric image flaw detection method based on Laplacian-strengthened pyramid

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
DD01 | Delivery of document by public notice

Addressee: Chen Chucheng

Document name: Notification of Qualified Procedures

CF01 | Termination of patent right due to non-payment of annual fee

Granted publication date: 2022-12-13

