




Technical Field
The invention relates to the field of image classification, and in particular to an efficient inspection algorithm for small defects in high-resolution cloth images.
Background
Earlier image classification relied mainly on traditional machine learning, which generally falls into two groups: feature-extraction-based methods and template-matching-based methods. Feature-extraction-based methods include statistics-based, spectrum-based, texture-model-based, learning-based, and structure-based approaches. All of these methods require hand-crafted features and generalize poorly.
With the development of deep learning, and in particular the application of convolutional neural networks to image classification, detection, and segmentation, results have been achieved that earlier traditional algorithms cannot match. However, deep-learning-based classification algorithms only perform well when the target region occupies a large fraction of the image area in the training data. If the image has high resolution but the classification target is small, possibly occupying less than 1% of the total image area, applying a conventional deep-learning classifier directly yields very low accuracy. Moreover, if an image contains targets of several categories, such a classifier cannot provide the category information of each target.
Summary of the Invention
To overcome the shortcomings of the prior art, the present invention proposes an efficient inspection algorithm for small defects in high-resolution cloth images. The method produces multi-scale feature maps from a single-resolution input image and therefore processes multi-scale image patches, which allows it to handle defects of many different sizes and greatly improves detection accuracy and speed. At the same time, the method can locate the approximate region of a defect within an image-classification framework and can handle images that contain several kinds of defects.
To achieve the above object, the proposed method comprises the following steps:
(1) Image acquisition. Cloth images are captured with a camera at a resolution of 2560*1920 to build the data set, and the images are renamed 1.jpg, 2.jpg, 3.jpg, ..., M.jpg. The images are then resized to 1024*768 and annotated with the labelImg tool to obtain the defect labels. A defect label contains the coordinates (x1, y1) of the defect's upper-left corner in the image, the coordinates (x2, y2) of its lower-right corner, and the defect category defectN, where N is a number such as 1, 2, 3, and so on. In particular, images without defects are not processed with labelImg; only their category label norm is recorded.
(2) Data set split. The images are divided into a training set and a test set with no image appearing in both; the training set is used to train the inspection model and the test set is used to evaluate its performance.
(3) Image preprocessing, including random vertical flipping, random horizontal flipping, and random illumination changes; these augmentations are applied only to the training set. In particular, when an image is flipped vertically or horizontally, the defect coordinates must be transformed accordingly.
(4) Training the inspection model. The preprocessed training images and their label information are fed into the inspection model for training. The inspection model is an improved version of se-resnext101 that produces multi-scale feature maps from a single-resolution input image. A forward pass through the model yields a category probability value for every feature point on every feature map; the classification loss is computed with the Focal Loss function, and the model is trained by back-propagation using gradient descent with momentum.
(5) Cloth image inspection. Test images are fed into the trained inspection model to extract features and obtain the category probability value of every feature point on the multi-scale feature maps. If, in two or more of the three feature maps, all feature points are classified as norm, the image is classified as norm; otherwise the image is considered to contain defects. For an image judged defective, each feature point corresponds to an image patch in the original image; the pixel values of the corresponding patches are set according to the predicted category of each feature point to form a heat map, the heat maps of the different feature maps are superimposed to obtain the final heat map, and the approximate defect location is read from it. The defect category is obtained by averaging the class probabilities of the image patches near the defect. In particular, the algorithm can handle images with multiple defects and returns the category and approximate location of each one.
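For clarity, the image-level decision rule of step (5) can be written as a short sketch. This is only an illustrative reading of the rule described above; the function and variable names are assumptions, not code from the patent.

```python
import numpy as np

def classify_image(prob_maps, norm_index=0):
    """Decision rule from step (5): the image is 'norm' only if, in at least
    two of the three feature maps, every feature point is predicted as norm.

    prob_maps: list of 3 arrays of shape (H_i, W_i, num_classes) holding the
    per-feature-point class probabilities (names and shapes are assumptions)."""
    all_norm_count = 0
    for p in prob_maps:
        pred = p.argmax(axis=-1)          # predicted class of each feature point
        if (pred == norm_index).all():    # every point on this map is norm
            all_norm_count += 1
    return "norm" if all_norm_count >= 2 else "defective"
```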
The training in step (4) comprises the following sub-steps: training based on the improved se-resnext101 model, transfer learning, two-stage learning-rate adjustment, feature extraction with the convolutional network, adaptive feature reweighting, multi-scale image patch processing, Focal Loss computation, and back-propagation training with gradient descent with momentum.
As shown in Fig. 1 and Fig. 2, step (4) is specifically:
(4.1) The final global pooling layer of the original se-resnext101 model is replaced by three feature-block pooling modules, each consisting of a feature-block global (average) pooling layer and a feature-block max pooling layer in parallel. The three modules themselves operate in parallel; within each module the pooling layers share the same window size, but different modules use different window sizes. In addition, the final fully connected layer is replaced by a 1*1 convolution with stride 1. The improved se-resnext101 model obtained in this way is used as the inspection model.
(4.2) The improved se-resnext101 model, i.e. the inspection model, is initialized with the weights of the se-resnext101 model trained on the ImageNet image set; all weights are retained except the bias weights and the weights of the final global pooling layer, the final fully connected layer, and the softmax layer.
(4.3) A two-stage learning-rate schedule is used during training. In the initial stage, only the last three layers of the model, including the feature-block pooling modules, are trained at a chosen learning rate while the weights of all other layers are kept fixed. After several epochs (each epoch traverses all images in the training set), the last three layers are trained with a larger learning rate and the remaining layers with a smaller one, and both rates are decayed according to a fixed schedule.
(4.4) The training image is fed into the improved se-resnext101 model; convolution operations extract features and enlarge the receptive field of the feature maps, and the squeeze-and-excitation sub-modules contained in the original se-resnext101 model allow the network to adaptively reweight the features, emphasizing informative features and suppressing uninformative ones, so that both the feature-space and feature-channel dimensions are improved (a sketch of a standard squeeze-and-excitation block is given after this list).
(4.5) The feature map produced by the last convolutional layer is processed by the three parallel feature-block pooling sub-modules, as shown in Fig. 3; within each module the pooling layers share the same window size, but different modules use different window sizes, yielding three feature maps of different sizes. A 1*1 convolution with stride 1 is applied to each of these feature maps, followed by a softmax that outputs the category probability value of every feature point on each of the three feature maps.
(4.6) Because of the receptive field, each feature point on a feature map corresponds to an image patch in the original image. From the position of the defects in the image, the true category of every patch, and hence of every feature point, is known. The classification loss between the predicted category probabilities and the true categories is computed with the Focal Loss function, and gradient descent with momentum is used to back-propagate and update the inspection model parameters.
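The adaptive feature reweighting mentioned in (4.4) comes from the squeeze-and-excitation sub-modules that se-resnext101 already contains. The sketch below shows a standard squeeze-and-excitation block in PyTorch for reference; it follows the published SENet design rather than the patent's own implementation, and the reduction ratio is an assumption.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Standard SE block: squeeze spatial information into per-channel
    statistics, then learn channel weights that rescale the feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)             # squeeze: global average pool
        self.fc = nn.Sequential(                        # excitation: two-layer bottleneck
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)                     # (B, C) channel descriptor
        w = self.fc(w).view(b, c, 1, 1)                 # per-channel weights in (0, 1)
        return x * w                                    # reweight the feature map
```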
Step (5) is specifically: the test image is fed into the inspection model and, through forward propagation, produces feature maps with increasing receptive field and decreasing resolution. The feature map output by the last convolutional layer is processed by the three parallel feature-block pooling modules to obtain three feature maps of different sizes; a 1*1 convolution with stride 1 is applied to each, and finally a softmax yields the category probability value of every feature point on each feature map. If, in two or more of the three feature maps, all feature points are classified as norm, the image is classified as norm; otherwise the image is considered to contain defects. For a defective image, the pixels of the image patches that map back to feature points classified as norm are set to 0, and the pixels of the patches of all other feature points are set to 1. In this way three heat maps are obtained; they are superimposed to form the final heat map, from which the approximate defect location is read. The defect category is decided by the mean of the class probabilities of the feature points judged defective.
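A minimal sketch of the heat-map construction described above, assuming the patch geometry given later in the embodiment (448-, 320-, and 192-pixel patches with a 64-pixel stride on the 1024*768 input); the function name, array layout, and default arguments are assumptions used for illustration, not the patent's code.

```python
import numpy as np

def heat_map(pred_maps, patch_sizes=(448, 320, 192), stride=64,
             image_hw=(768, 1024), norm_index=0):
    """Build one binary heat map per feature map (patch pixels = 1 where the
    feature point is predicted defective, 0 where it is norm) and sum them."""
    H, W = image_hw
    total = np.zeros((H, W), dtype=np.float32)
    for pred, size in zip(pred_maps, patch_sizes):      # pred: (h_i, w_i) class indices
        heat = np.zeros((H, W), dtype=np.float32)
        h_i, w_i = pred.shape
        for r in range(h_i):
            for c in range(w_i):
                if pred[r, c] != norm_index:            # defective feature point
                    y0, x0 = r * stride, c * stride
                    heat[y0:y0 + size, x0:x0 + size] = 1.0
        total += heat                                   # superimpose the three heat maps
    return total                                        # high values mark the defect region
```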
Compared with the prior art, the beneficial effects of the present invention are:
Based on the improved se-resnext101 model, the invention produces multi-scale feature maps from a single-resolution input image and therefore processes multi-scale image patches, which accommodates defects of many different sizes and greatly improves detection accuracy and speed. Built on a classification framework, the method not only predicts whether an image contains defects, but also gives the approximate location and corresponding category of each defect in a defective image and handles images containing several categories of defects. Transfer learning and two-stage learning-rate adjustment give the model stronger generalization and improve its performance on new samples. Replacing the conventional cross-entropy loss with the Focal Loss function for the classification loss effectively addresses class imbalance, increases the model's focus on hard samples, and improves the inspection model's performance.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the se-resnext101 model.
Fig. 2 is a schematic diagram of the improved se-resnext101 model.
Fig. 3 is a schematic diagram of the feature-block pooling module.
Fig. 4 compares the binary-classification accuracy of the se-resnext101 model and the improved se-resnext101 model on the test set.
Fig. 5 is a bar chart of the defect-category recognition accuracy of the improved se-resnext101 model on the test set.
Detailed Description
The present invention is further described below.
The implementation process and embodiment of the present invention are as follows:
(1) Image acquisition. Cloth images are captured with a camera at a resolution of 2560*1920; 5000 cloth images are collected in total and renamed 1.jpg, 2.jpg, ..., 5000.jpg. The images are then resized to 1024*768 and annotated with the labelImg tool to obtain the defect labels. A defect label contains the coordinates (x1, y1) of the defect's upper-left corner in the image, the coordinates (x2, y2) of its lower-right corner, and the defect category defectN, where N ∈ {1, 2, 3, ..., 9}, meaning the data set contains nine defect types: oil stain, skipped pattern, missing warp, hanging warp, thin weave, fuzzy hole, rubbed hole, fuzzy spot, and pricked hole, in the order corresponding to the digit N in defectN. For convenience, the information in the xml file produced by labelImg is converted into a txt file that stores only the defect categories and their positions; each image has one txt file with the same name as the image. In particular, images without defects are not processed with labelImg; only their category label norm is recorded and stored in a txt file.
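A minimal sketch of the xml-to-txt conversion mentioned above, assuming labelImg's standard Pascal VOC xml layout; the output line format and the helper names are assumptions for illustration, not the patent's tooling.

```python
import xml.etree.ElementTree as ET

def xml_to_txt(xml_path: str, txt_path: str) -> None:
    """Read a labelImg (Pascal VOC) annotation and write one line per defect:
    'category x1 y1 x2 y2' (the line format is an assumed convention)."""
    root = ET.parse(xml_path).getroot()
    lines = []
    for obj in root.findall("object"):
        name = obj.find("name").text                 # e.g. defect3
        box = obj.find("bndbox")
        x1, y1 = box.find("xmin").text, box.find("ymin").text
        x2, y2 = box.find("xmax").text, box.find("ymax").text
        lines.append(f"{name} {x1} {y1} {x2} {y2}")
    with open(txt_path, "w") as f:
        f.write("\n".join(lines) + "\n")

def write_norm_label(txt_path: str) -> None:
    """Defect-free images get a txt file that only records the class 'norm'."""
    with open(txt_path, "w") as f:
        f.write("norm\n")
```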
(2) Data set split. The images are divided into a training set and a test set; the training set is used to train the inspection model and the test set to evaluate its performance. The training set consists of the first 4500 images of the data set and the test set of the last 500 images.
(3) Image preprocessing. Taking image I of the training set as an example, image I and the content of its corresponding txt file are fed into the image preprocessing module for online data augmentation and label transformation; the content of the txt file is stored in a list. For an image that contains defects, the list holds the corner coordinates and category of every defect; for an image without defects, the list is simply:
[norm]
Online data augmentation includes random vertical flipping, random horizontal flipping, and random illumination changes. In particular, when an image is flipped vertically or horizontally, the defect coordinates must be transformed accordingly.
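A minimal sketch of the flip augmentations with the corresponding box-coordinate update, for an image of height H and width W; the box convention (x1, y1, x2, y2) follows step (1), and the function name and flip probability are assumptions.

```python
import random
import numpy as np

def random_flip(image: np.ndarray, boxes: list, p: float = 0.5):
    """Randomly flip the image horizontally and/or vertically and update the
    defect boxes [x1, y1, x2, y2] so they still enclose the same defects."""
    H, W = image.shape[:2]
    if random.random() < p:                                   # horizontal flip
        image = image[:, ::-1].copy()
        boxes = [[W - 1 - x2, y1, W - 1 - x1, y2] for x1, y1, x2, y2 in boxes]
    if random.random() < p:                                   # vertical flip
        image = image[::-1, :].copy()
        boxes = [[x1, H - 1 - y2, x2, H - 1 - y1] for x1, y1, x2, y2 in boxes]
    return image, boxes
```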
(4) Building the inspection model. As shown in Fig. 1 and Fig. 2, the final global pooling layer of the original se-resnext101 model is replaced by three feature-block pooling modules, each consisting of a feature-block global (average) pooling layer and a feature-block max pooling layer in parallel; the structure of the feature-block pooling module is shown in Fig. 3. The three modules operate in parallel; within each module the pooling layers share the same window size, but different modules use different window sizes. In addition, the final fully connected layer is replaced by a 1*1 convolution with stride 1. The improved se-resnext101 model obtained in this way is used as the inspection model.
In this implementation, the parameters of each feature-block pooling module are set as shown in Table 1:
Table 1. Parameter settings of the feature-block pooling modules
After image I with resolution 1024*768 passes through the forward propagation of the improved se-resnext101 model, the feature map output by module 5 of the improved se-resnext101 has a resolution of 32*24. After the three parallel feature-block pooling modules, the resulting feature maps have resolutions of 10*6, 12*8, and 14*10. A feature point on the 10*6 feature map corresponds to an image patch of size 448*448 in the original image with a sliding stride of 64; a feature point on the 12*8 feature map corresponds to a patch of size 320*320 with a sliding stride of 64; and a feature point on the 14*10 feature map corresponds to a patch of size 192*192 with a sliding stride of 64. In this way image patches of three different sizes are obtained, which effectively addresses the problem that small defects occupy a very small fraction of the original image, handles defects of several scales, and makes the inspection model more robust.
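The exact pooling parameters are given in Table 1; the values used in the sketch below (windows of 14, 10, and 6 with stride 2) are inferred from the stated output resolutions and patch geometry (a 32*24 map at a downsampling factor of 32, giving 448-, 320-, and 192-pixel patches with a 64-pixel stride) and are therefore assumptions. How the average and max pooling branches are merged, whether the 1*1 convolution is shared across branches, and the channel count are also assumptions; this is an illustrative head, not the patent's implementation.

```python
import torch
import torch.nn as nn

class FeatureBlockPooling(nn.Module):
    """One feature-block pooling module: block-average and block-max pooling
    applied in parallel over the same window, combined here by summation."""
    def __init__(self, kernel_size: int, stride: int = 2):
        super().__init__()
        self.avg = nn.AvgPool2d(kernel_size, stride)   # feature-block global (average) pooling
        self.max = nn.MaxPool2d(kernel_size, stride)   # feature-block max pooling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.avg(x) + self.max(x)

class MultiScaleHead(nn.Module):
    """Three parallel feature-block pooling modules followed by a 1*1 convolution
    with stride 1 (replacing the fully connected layer) and a softmax per branch."""
    def __init__(self, in_channels: int = 2048, num_classes: int = 10):
        super().__init__()
        # Window sizes 14, 10, 6 are inferred from the stated output resolutions.
        self.branches = nn.ModuleList(FeatureBlockPooling(k) for k in (14, 10, 6))
        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1, stride=1)

    def forward(self, x: torch.Tensor):
        # x: (B, C, 24, 32), the stage-5 feature map of a 1024*768 input.
        # The three branches yield spatial sizes 6*10, 8*12, and 10*14 (H*W).
        return [torch.softmax(self.classifier(branch(x)), dim=1) for branch in self.branches]
```

Here num_classes is taken as 10, i.e. the nine defect types plus norm, and in_channels as 2048, the channel width of se-resnext101's last stage.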
(5) Training the inspection model. The training images and their label information are fed into the inspection model; forward propagation yields the category probability value of every feature point on every feature map, the classification loss is computed with the Focal Loss function, and the inspection model is trained with gradient descent with momentum to obtain its network parameters.
In this implementation the momentum is set to 0.9, one image is fed in per step, and 4500 steps make one epoch; 50 epochs are run in total. For the first 10 epochs the learning rate of the last three layers of the model is 0.001 and that of all other layers is 0; for the remaining epochs the learning rate of the last three layers is 0.0005 and that of the other layers is 0.00005, and every 4 epochs the learning rates are multiplied by 0.94. In the Focal Loss function, α is set to 0.25 and the focusing parameter β to 2. After training, the parameters of the inspection model are saved.
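A minimal sketch of the Focal Loss and the two-stage learning-rate setup using the hyper-parameters stated above (momentum 0.9, last-three-layer rates 0.001 then 0.0005, other layers 0 then 0.00005, decay factor 0.94 every 4 epochs, α = 0.25, β = 2). The parameter-group split and the use of a step scheduler are assumptions about how the schedule could be realized, not the patent's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """FL(p_t) = -alpha * (1 - p_t)**beta * log(p_t); beta is the focusing
    parameter (often written gamma), set to 2 in this embodiment."""
    def __init__(self, alpha: float = 0.25, beta: float = 2.0):
        super().__init__()
        self.alpha, self.beta = alpha, beta

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits: (N, num_classes) over the flattened feature points, targets: (N,)
        log_pt = F.log_softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        return (-self.alpha * (1.0 - pt) ** self.beta * log_pt).mean()

def build_optimizer(model: nn.Module, last_three_layers, stage: int):
    """Two-stage learning rates: stage 1 trains only the last three layers,
    stage 2 trains everything with a smaller rate for the earlier layers."""
    last_params = [p for m in last_three_layers for p in m.parameters()]
    last_ids = {id(p) for p in last_params}
    other_params = [p for p in model.parameters() if id(p) not in last_ids]
    if stage == 1:                                   # first 10 epochs
        groups = [{"params": last_params, "lr": 1e-3},
                  {"params": other_params, "lr": 0.0}]
    else:                                            # remaining epochs
        groups = [{"params": last_params, "lr": 5e-4},
                  {"params": other_params, "lr": 5e-5}]
    opt = torch.optim.SGD(groups, momentum=0.9)
    # Multiply the learning rates by 0.94 every 4 epochs.
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=4, gamma=0.94)
    return opt, sched
```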
(6) Cloth image inspection. The images of the test set are fed into the trained inspection model to extract features and obtain the category probability value of every feature point on the multi-scale feature maps. If, in two or more of the three feature maps, all feature points are classified as norm, the image is classified as norm; otherwise the image is considered to contain defects. For an image judged defective, each feature point corresponds to an image patch in the original image; the pixel values of the corresponding patches are set according to the predicted category of each feature point to form a heat map, the heat maps of the different feature maps are superimposed to obtain the final heat map, and the approximate defect location is read from it. The defect category is obtained by averaging the class probabilities of the image patches near the defect. In particular, the algorithm can handle images with multiple defects and returns the category and approximate location of each one.
Finally, this embodiment is tested on the cloth test set. Fig. 4 shows the binary-classification accuracy of the se-resnext101 model and the improved se-resnext101 model, i.e., the accuracy of classifying an image as a normal or a defective sample; the accuracy of the improved se-resnext101 model is clearly higher than that of the original, which shows that the method can efficiently inspect small defects in high-resolution images. Fig. 5 shows the defect-classification accuracy of the improved se-resnext101 model on the test set; the method can detect the defect categories present in the sub-regions of images that contain several kinds of defects.