CN112200762A - Diode glass bulb defect detection method - Google Patents

Diode glass bulb defect detection method

Info

Publication number
CN112200762A
CN112200762A
Authority
CN
China
Prior art keywords
glass bulb
diode glass
diode
industrial camera
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010633906.2A
Other languages
Chinese (zh)
Inventor
刘桂华
向伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mianyang Keruite Robot Co ltd
Southwest University of Science and Technology
Original Assignee
Mianyang Keruite Robot Co ltd
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mianyang Keruite Robot Co ltd, Southwest University of Science and Technology
Priority to CN202010633906.2A
Publication of CN112200762A
Legal status: Pending (current)

Abstract

The invention relates to the field of computer vision and aims to provide a method for detecting defects of a diode glass bulb, which comprises the following steps. Step 1: place the diode glass bulb to be detected on an industrial camera platform and switch on the backlight source at the bottom of the platform. Step 2: capture an image of the diode glass bulb with the telecentric lens of an industrial camera mounted on the platform. Step 3: send the captured diode glass bulb image as input to a trained defect identification model on a computer; the output of the defect identification model is the localized diode glass bulb image, on which the defects are marked and displayed.

Description

Diode glass bulb defect detection method
Technical Field
The invention relates to the field of computer vision, in particular to a method for detecting defects of a diode glass shell.
Background
Because the diode glass bulb is transparent and has its own internal structure and outline features, these features easily interfere with defect features during imaging; together with interference from the external environment, this makes detection very difficult and the classification accuracy hard to improve, while industrial application requires very high performance indexes as a guarantee.
Chinese patent application CN201510053437.6, "Machine-vision-based cylindrical diode surface defect detection device", discloses the hardware design and the software algorithm design of a cylindrical diode surface defect detection device. The hardware design comprises industrial camera selection, lens selection and construction of an optical platform; the software defect detection algorithm comprises tube body segmentation, tube body preprocessing, defect ROI segmentation, feature extraction and decision tree classifier design. For the optical platform, a reasonable illumination mode and light source placement are worked out from optical principles and the structural characteristics of the object itself. For the defect detection operator, the difficulty lies in defect ROI segmentation and texture feature extraction, for which an improved stroke width transform and a gradient-histogram feature extraction method are proposed respectively; finally, the defects are classified by a decision tree classifier, with a defect recognition rate close to 100% and a classification rate of 96.2%, achieving a good recognition and classification effect.
Therefore, a method is needed that can rapidly detect defects on the surface of the diode glass bulb and accurately locate them.
Disclosure of Invention
The invention aims to provide a method for detecting defects of a diode glass bulb that can accurately locate the detected defects, has a reasonable structure and an ingenious design, and is suitable for wide adoption.
the technical scheme adopted by the invention is as follows: the method for detecting the defects of the diode glass bulb comprises the following steps:
Step 1: place the diode glass bulb to be detected on an industrial camera platform, and switch on the backlight source at the bottom of the platform. Step 2: an industrial camera is arranged on the industrial camera platform, and an image of the diode glass bulb is captured through a telecentric lens on the industrial camera.
Step 3: send the captured diode glass bulb image as input to a trained defect identification model in a computer; the output of the defect identification model is the localized diode glass bulb image, on which the defects are marked and displayed.
Preferably, in step 2, the defect identification model is a YOLOv3 model.
Preferably, in step 2, the industrial camera platform further includes a light source controller, the light source controller is connected to the industrial camera and to the backlight source respectively, and the industrial camera is connected to the computer through the light source controller.
Preferably, in step 3, the training process of the defect recognition model includes the following steps:
Step 11: obtain 4000 diode glass bulb images with different types of defects, then proceed to step 2;
Step 22: augment the obtained diode glass bulb images to obtain a sample set of 50000 images, and label the images of the processed sample set;
Step 33: train the YOLOv3 model on the labeled diode glass bulb crack images, with 42000 images used as the training set and 8000 images as the validation set, to obtain the trained defect identification model.
Preferably, in step 11, the acquired images are flipped horizontally to obtain flipped images, cropped to different sizes to obtain images of various sizes, and scaled at multiple scales to obtain multi-size scaled images; the flipped images, the images of various sizes and the multi-size scaled images together form the processed sample set.
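The patent provides no code for this augmentation; the following Python sketch (assuming OpenCV is available; the file names, crop fractions and scale factors are illustrative choices, not values from the patent) shows how one acquired image could be flipped, cropped at several sizes and rescaled at several scales:

```python
import cv2

def augment(image):
    """Expand one acquired glass-bulb image into several variants:
    a horizontal flip, center crops of different sizes, and multi-scale resizes."""
    h, w = image.shape[:2]
    samples = [cv2.flip(image, 1)]                       # left-right flip
    for frac in (0.9, 0.8, 0.7):                         # different-size crops (illustrative fractions)
        ch, cw = int(h * frac), int(w * frac)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        samples.append(image[y0:y0 + ch, x0:x0 + cw])
    for scale in (0.5, 0.75, 1.25, 1.5):                 # multi-scale resizes (illustrative factors)
        samples.append(cv2.resize(image, None, fx=scale, fy=scale))
    return samples

# Hypothetical usage: read one raw sample and write out its augmented variants.
img = cv2.imread("bulb_0001.png")
for i, aug in enumerate(augment(img)):
    cv2.imwrite(f"bulb_0001_aug{i}.png", aug)
```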
Preferably, the number of images in the processed sample set is a multiple of the number of glass bulb samples.
Preferably, the anchor frame sizes are obtained by running the K-means algorithm for multiple iterations on the VOC data set; when the input image size is 416 × 416, the YOLOv3 anchor frame sizes are {[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]}.
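As an illustration of how such anchor sizes can be derived, the sketch below clusters labeled box widths and heights with scikit-learn's KMeans. Note that the original YOLO procedure clusters with an IoU-based distance; plain Euclidean K-means is used here for brevity, and the random data merely stands in for real VOC annotations:

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_boxes(box_wh, k=9, input_size=416):
    """Cluster normalized (width, height) pairs of labeled boxes into k anchors.

    box_wh: array of shape (N, 2), box widths and heights normalized to [0, 1].
    Returns k anchors in pixels for the given input size, sorted by area.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(box_wh)
    anchors = km.cluster_centers_ * input_size
    return anchors[np.argsort(anchors.prod(axis=1))].round().astype(int)

# Hypothetical usage with random data standing in for real VOC box annotations.
rng = np.random.default_rng(0)
print(anchor_boxes(rng.uniform(0.02, 0.9, size=(5000, 2))))
```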
Preferably, the loss function integrates the anchor frame center coordinate loss, the width and height loss, the confidence loss and the classification loss; the anchor frame losses are computed as sums of squares, while the classification error and the confidence error are computed with binary cross-entropy. The specific formula is as follows:
$$
\begin{aligned}
Loss ={}& \sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\big]
+ \sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\big[(w_i-\hat{w}_i)^2+(h_i-\hat{h}_i)^2\big] \\
&- \sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\big[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\big]
- \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{noobj}\big[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\big] \\
&- \sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\sum_{c\in classes}\big[\hat{p}_i(c)\log p_i(c)+(1-\hat{p}_i(c))\log(1-p_i(c))\big]
\end{aligned}
$$
where $1_{ij}^{obj}$ indicates that the $j$-th anchor frame of the $i$-th grid cell contains a real target. Parts 1 and 2 are the anchor frame loss and parts 3 and 4 are the confidence loss; the confidence error comprises a target part and a non-target part, and because the anchor frames without a target far outnumber those with a target, the no-target term carries a coefficient $\lambda_{noobj}=0.5$ to reduce its contribution weight; part 5 is the classification error.
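The structure of this loss can be sketched as follows in PyTorch; the tensor layout, the logit assumption and the helper name are illustrative, and this is a sketch rather than the patent's implementation:

```python
import torch
import torch.nn.functional as F

def yolo_loss(pred, target, obj_mask, lambda_noobj=0.5):
    """Sketch of a YOLO-style loss.

    pred, target: tensors of shape (B, A, 5 + C) holding
        [tx, ty, tw, th, objectness, class scores...] per anchor (logits for pred).
    obj_mask: boolean tensor (B, A), True where an anchor is responsible
        for a ground-truth object.
    """
    noobj_mask = ~obj_mask
    # Parts 1-2: sum-of-squares loss on center coordinates and width/height.
    box_loss = F.mse_loss(
        pred[obj_mask][:, :4], target[obj_mask][:, :4], reduction="sum")
    # Parts 3-4: binary cross-entropy on objectness; the no-object term is
    # down-weighted by lambda_noobj because background anchors dominate.
    conf_obj = F.binary_cross_entropy_with_logits(
        pred[obj_mask][:, 4], target[obj_mask][:, 4], reduction="sum")
    conf_noobj = F.binary_cross_entropy_with_logits(
        pred[noobj_mask][:, 4], target[noobj_mask][:, 4], reduction="sum")
    # Part 5: binary cross-entropy on class probabilities.
    cls_loss = F.binary_cross_entropy_with_logits(
        pred[obj_mask][:, 5:], target[obj_mask][:, 5:], reduction="sum")
    return box_loss + conf_obj + lambda_noobj * conf_noobj + cls_loss
```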
Preferably, the training process of the training image set on the YOLOv3 model is as follows:
the images of the input training set are divided into an S × S grid;
each cell of the S × S grid generates 3 bounding boxes, whose attributes comprise the center coordinates, width, height, confidence and the probability of belonging to a workpiece crack target; candidate boxes that do not contain a target are eliminated when the object confidence is less than the threshold th1, and non-maximum suppression is then used to select the candidate box with the largest intersection over union (IoU) with the ground-truth box for target prediction, where the prediction is as follows:
$$b_x = \sigma(t_x) + c_x \qquad (1)$$
$$b_y = \sigma(t_y) + c_y \qquad (2)$$
$$b_w = p_w e^{t_w} \qquad (3)$$
$$b_h = p_h e^{t_h} \qquad (4)$$
$b_x, b_y, b_w, b_h$ are the center coordinates, width and height of the bounding box finally predicted by the network, where $c_x, c_y$ are the coordinate offsets of the grid cell; $p_w, p_h$ are the width and height of the anchor box mapped into the feature map; $t_x, t_y, t_w, t_h$ are parameters to be learned during network training, with $t_w, t_h$ representing the scale of the prediction box, $t_x, t_y$ representing the offset of its center coordinates, and $\sigma$ denoting the sigmoid function. By continuously updating the parameters $t_x, t_y, t_w, t_h$, the prediction box is brought closer and closer to the ground-truth box; training stops when the network loss is less than the set threshold th2 or the number of training iterations reaches the maximum N.
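The patent expresses the decoding only through eqs. (1)-(4); the following NumPy sketch simply applies those formulas to one anchor (the grid cell, anchor size and raw outputs in the example call are made-up values for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(t, cell_xy, anchor_wh):
    """Apply eqs. (1)-(4): t = (tx, ty, tw, th) predicted for one anchor,
    cell_xy = (cx, cy) grid-cell offset, anchor_wh = (pw, ph) anchor size
    mapped into the feature map. Returns (bx, by, bw, bh) in grid units."""
    tx, ty, tw, th = t
    cx, cy = cell_xy
    pw, ph = anchor_wh
    bx = sigmoid(tx) + cx          # eq. (1)
    by = sigmoid(ty) + cy          # eq. (2)
    bw = pw * np.exp(tw)           # eq. (3)
    bh = ph * np.exp(th)           # eq. (4)
    return bx, by, bw, bh

# Illustrative call: an anchor of size (3.6, 2.8) cells placed in grid cell (4, 7).
print(decode_box(t=(0.2, -0.1, 0.3, 0.05), cell_xy=(4, 7), anchor_wh=(3.6, 2.8)))
```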
It is worth noting that the training of the data set on the YOLOv3 model uses 3 scales to perform 3 bounding box predictions:
scale 1: several convolution layers are added after the feature extraction network; with a down-sampling ratio of 32, the output feature map scale is 13 × 13, suitable for detecting small-scale targets;
scale 2: the penultimate convolution layer of scale 1 is up-sampled by a factor of 2; with a down-sampling ratio of 16, the result is concatenated with a feature map of scale 26 × 26, twice the size of scale 1, suitable for detecting medium-scale targets;
scale 3: by analogy with scale 2, a 52 × 52 feature map is obtained, suitable for detecting larger targets.
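For readers who want to picture the three scales, the following PyTorch-style sketch shows the upsample-and-concatenate pattern that produces 13×13, 26×26 and 52×52 outputs; the channel widths and the module itself are illustrative assumptions, a simplified stand-in rather than the patent's actual network definition:

```python
import torch
import torch.nn as nn

class ThreeScaleHead(nn.Module):
    """Simplified YOLOv3-style multi-scale head: detect at 13x13, then
    upsample and concatenate with shallower feature maps for 26x26 and 52x52."""
    def __init__(self, c13=1024, c26=512, c52=256, out_ch=3 * (5 + 1)):
        super().__init__()
        self.det13 = nn.Conv2d(c13, out_ch, 1)                                      # scale 1: 13x13
        self.up13 = nn.Sequential(nn.Conv2d(c13, c26, 1), nn.Upsample(scale_factor=2))
        self.det26 = nn.Conv2d(c26 + c26, out_ch, 1)                                # scale 2: 26x26
        self.up26 = nn.Sequential(nn.Conv2d(c26 + c26, c52, 1), nn.Upsample(scale_factor=2))
        self.det52 = nn.Conv2d(c52 + c52, out_ch, 1)                                # scale 3: 52x52

    def forward(self, f13, f26, f52):
        # f13, f26, f52: backbone feature maps at strides 32, 16 and 8.
        p13 = self.det13(f13)
        x = torch.cat([self.up13(f13), f26], dim=1)
        p26 = self.det26(x)
        x = torch.cat([self.up26(x), f52], dim=1)
        p52 = self.det52(x)
        return p13, p26, p52

# Illustrative shapes for a 416x416 input.
head = ThreeScaleHead()
outs = head(torch.zeros(1, 1024, 13, 13), torch.zeros(1, 512, 26, 26), torch.zeros(1, 256, 52, 52))
print([o.shape for o in outs])
```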
Compared with the prior art, the invention has the beneficial effects that:
1. machine identification, with high accuracy and high recall;
2. the YOLOv3 model is used for detection and calculation, the training process is fast, and deployment is convenient.
Drawings
FIG. 1 is a schematic view of the apparatus of the present invention in use;
FIG. 2 is a schematic diagram of a method for detecting defects of a diode glass bulb according to an embodiment of the present invention;
FIG. 3 is a diagram of the YOLOv3 model in an embodiment of the invention;
fig. 4 is a diagram illustrating the effect of diode glass envelope defects in an embodiment of the present invention.
Description of reference numerals: 1. industrial camera stand; 2. industrial camera; 3. telecentric lens; 4. glass bulb; 5. backlight light source; 6. light source controller; 7. computer.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to figs. 1 to 4; it is obvious that the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
Referring to fig. 1, the method for detecting defects of a diode glass bulb includes the following steps:
step 1: placing a diode glass bulb to be detected on an industrial camera platform, and starting a backlight source at the bottom of the industrial camera platform;
step 2: an industrial camera is arranged on the industrial camera platform, and a diode glass bulb image is captured through a telecentric lens on the industrial camera;
and step 3: and sending the collected diode glass shell image as input to a trained defect identification model in a computer, wherein the output of the defect identification model is the positioned diode glass shell image, and the defect on the positioned diode glass shell image is marked and displayed.
It should be noted that, referring to fig. 3, in step 2 the defect recognition model is a YOLOv3 model; also in step 2, the industrial camera platform further includes a light source controller, the light source controller is connected to the industrial camera and to the backlight source respectively, and the industrial camera is connected to the computer through the light source controller; in step 3, the training process of the defect recognition model includes the following steps:
step 11: acquiring 4000 glass shell images of diodes with different types of defects, and entering thestep 2;
step 22: expanding the image of the diode glass shell sample to obtain 50000 sample sets, and marking the image of the processed sample set;
step 33: and carrying out YOLOv3 model training on the marked diode glass bulb crack image to obtain a trained defect identification model.
It should be noted that, in step 11, the acquired images are flipped horizontally to obtain flipped images, cropped to different sizes to obtain images of various sizes, and scaled at multiple scales to obtain multi-size scaled images; the flipped images, the images of various sizes and the multi-size scaled images together form the processed sample set.
It is worth mentioning that the number of images in the processed sample set is a multiple of the number of diode glass bulb samples.
it is worth mentioning that the anchor frame size is obtained by performing a plurality of iterations of the K-means algorithm on the VOC data set, and when the input image size is 416 x 416, the YOLOv3 anchor frame size is { [10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326 }.
It is worth noting that the loss function integrates the anchor frame center coordinate loss, the width and height loss, the confidence loss and the classification loss; the anchor frame losses are computed as sums of squares, while the classification error and the confidence error are computed with binary cross-entropy. The specific formula is as follows:
$$
\begin{aligned}
Loss ={}& \sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\big]
+ \sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\big[(w_i-\hat{w}_i)^2+(h_i-\hat{h}_i)^2\big] \\
&- \sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\big[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\big]
- \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{noobj}\big[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\big] \\
&- \sum_{i=0}^{S^2}\sum_{j=0}^{B} 1_{ij}^{obj}\sum_{c\in classes}\big[\hat{p}_i(c)\log p_i(c)+(1-\hat{p}_i(c))\log(1-p_i(c))\big]
\end{aligned}
$$
where $1_{ij}^{obj}$ indicates that the $j$-th anchor frame of the $i$-th grid cell contains a real target. Parts 1 and 2 are the anchor frame loss and parts 3 and 4 are the confidence loss; the confidence error comprises a target part and a non-target part, and because the anchor frames without a target far outnumber those with a target, the no-target term carries a coefficient $\lambda_{noobj}=0.5$ to reduce its contribution weight; part 5 is the classification error.
It should be noted that the training process of the training image set on the YOLOv3 model is as follows:
the images of the input training set are divided into an S × S grid;
each cell of the S × S grid generates 3 bounding boxes, whose attributes comprise the center coordinates, width, height, confidence and the probability of belonging to a workpiece crack target; candidate boxes that do not contain a target are eliminated when the object confidence is less than the threshold th1, and non-maximum suppression is then used to select the candidate box with the largest intersection over union (IoU) with the ground-truth box for target prediction, where the prediction is as follows:
$$b_x = \sigma(t_x) + c_x \qquad (1)$$
$$b_y = \sigma(t_y) + c_y \qquad (2)$$
$$b_w = p_w e^{t_w} \qquad (3)$$
$$b_h = p_h e^{t_h} \qquad (4)$$
$b_x, b_y, b_w, b_h$ are the center coordinates, width and height of the bounding box finally predicted by the network, where $c_x, c_y$ are the coordinate offsets of the grid cell; $p_w, p_h$ are the width and height of the anchor box mapped into the feature map; $t_x, t_y, t_w, t_h$ are parameters to be learned during network training, with $t_w, t_h$ representing the scale of the prediction box, $t_x, t_y$ representing the offset of its center coordinates, and $\sigma$ denoting the sigmoid function. By continuously updating the parameters $t_x, t_y, t_w, t_h$, the prediction box is brought closer and closer to the ground-truth box; training stops when the network loss is less than the set threshold th2 or the number of training iterations reaches the maximum N.
It is worth noting that the training of the data set on the YOLOv3 model uses 3 scales to perform 3 bounding box predictions:
scale 1: several convolution layers are added after the feature extraction network; with a down-sampling ratio of 32, the output feature map scale is 13 × 13, suitable for detecting small-scale targets;
scale 2: the penultimate convolution layer of scale 1 is up-sampled by a factor of 2; with a down-sampling ratio of 16, the result is concatenated with a feature map of scale 26 × 26, twice the size of scale 1, suitable for detecting medium-scale targets;
scale 3: by analogy with scale 2, a 52 × 52 feature map is obtained, suitable for detecting larger targets.
In this specific embodiment, after sample augmentation, 50000 defective diode glass bulb samples are randomly selected as the training set and 5000 samples as the test set. Training runs for 80000 iterations in total, with the weights automatically saved every 5000 iterations; the base learning rate is 0.001, the batch size is 32, the momentum is 0.9, the weight decay coefficient is 0.0005, and L2 regularization is adopted to reduce overfitting.
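For reference, the hyperparameters of this embodiment can be collected into a training setup such as the following sketch (the stand-in model and variable names are illustrative, not taken from the patent; in SGD, weight_decay plays the role of the L2 regularization coefficient):

```python
import torch

# Hypothetical module standing in for the YOLOv3 network of this embodiment.
model = torch.nn.Conv2d(3, 18, 1)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.001,            # base learning rate
    momentum=0.9,
    weight_decay=0.0005, # L2 regularization coefficient
)
max_iterations = 80000   # total training iterations
snapshot_every = 5000    # save weights every 5000 iterations
batch_size = 32
```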
The detection performance of the YOLOv3 target detection model on the tubular workpiece is measured by accuracy, recall and video test frame rate (FPS): the higher the accuracy and recall, the better the detection effect and the better it meets practical application; the larger the FPS value, the better the real-time detection performance of the YOLOv3 target detection model.
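A minimal sketch of how these three indicators might be computed is given below; the exact definitions of accuracy and recall are assumed (precision-style accuracy over predicted boxes, recall over ground-truth defects), since the patent only names the indicators, and the counts in the example call are hypothetical:

```python
import time

def accuracy_recall(true_positives, false_positives, false_negatives):
    """Assumed definitions: accuracy = correct detections / all detections,
    recall = detected defects / actual defects."""
    accuracy = true_positives / max(true_positives + false_positives, 1)
    recall = true_positives / max(true_positives + false_negatives, 1)
    return accuracy, recall

def measure_fps(model, frames):
    """Average number of processed frames per second over a list of test frames."""
    start = time.time()
    for frame in frames:
        model(frame)          # one forward pass per test frame
    return len(frames) / (time.time() - start)

# Hypothetical counts for the defect ("bad") class.
acc, rec = accuracy_recall(true_positives=964, false_positives=18, false_negatives=36)
print(f"accuracy={acc:.4f}, recall={rec:.4f}")
```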
It is worth noting that one type of defect on the diode glass bulb is shown in fig. 4.
Running on a computer with a 1080 Ti graphics card, the YOLOv3 target detection model achieves an FPS of 18.3 f/s. The accuracy and recall of defect sample (bad) detection are shown in Table 1, which summarizes the detection results of the YOLOv3 target detection model on the diode glass bulb:
TABLE 1
Type | Accuracy (%) | Recall (%)
Diode glass bulb | 98.21 | 96.38
As can be seen from Table 1, the accuracy of the YOLOv3 target detection model on diode glass bulb detection is 98.21% and the recall is 96.38%. The method achieves high accuracy and recall for detecting the tubular workpiece; defects are detected with the YOLOv3 target detection model, and the tested diode glass bulb image is shown in fig. 2. Fig. 2 is a schematic diagram of diode glass bulb detection by the YOLOv3 target detection model; the model can meet the actual requirements of diode glass bulb detection in industrial production and has good application prospects.
In summary, the implementation principle of the invention is as follows. Step 1: place the diode glass bulb to be detected on an industrial camera platform and switch on the backlight source at the bottom of the platform. Step 2: capture an image of the diode glass bulb with the telecentric lens of an industrial camera mounted on the platform. Step 3: send the captured diode glass bulb image as input to a trained defect identification model on a computer; the output of the defect identification model is the localized diode glass bulb image, on which the defects are marked and displayed.

Claims (10)

Translated from Chinese
1. A diode glass bulb defect detection method, characterized in that it comprises the following steps: Step 1: place the diode glass bulb to be detected on an industrial camera stage, and activate the backlight source at the bottom of the industrial camera stage; Step 2: an industrial camera is arranged on the industrial camera stage, and an image of the diode glass bulb is captured by a telecentric lens on the industrial camera; Step 3: send the collected diode glass bulb image as input to a trained defect recognition model in a computer, the output of the defect recognition model being the positioned diode glass bulb image, on which the defects are marked and displayed.
2. The diode glass bulb defect detection method according to claim 1, characterized in that, in step 2, the defect recognition model is a YOLOv3 model.
3. The diode glass bulb defect detection method according to claim 2, characterized in that, in step 2, the industrial camera stage further comprises a light source controller, the light source controller being connected to the industrial camera and to the backlight source respectively, and the industrial camera being connected to the computer through the light source controller.
4. The diode glass bulb defect detection method according to claim 3, characterized in that, in step 3, the training process of the defect recognition model comprises the following steps: Step 11: obtain 4000 diode glass bulb images with different types of defects, and proceed to step 2; Step 22: augment the defective diode glass bulb images to obtain a sample set of 50000 images, and label the images of the processed sample set; Step 33: perform YOLOv3 model training on the labeled diode glass bulb crack images to obtain the trained defect recognition model.
5. The diode glass bulb defect detection method according to claim 4, characterized in that, in step 11, the collected images are flipped horizontally to obtain flipped images, cropped to different sizes to obtain images of various sizes, and scaled at multiple scales to obtain multi-size scaled images; the flipped images, the images of various sizes and the multi-size scaled images form the processed sample set.
6. The diode glass bulb defect detection method according to claim 5, characterized in that the number of images in the processed sample set is a multiple of the number of glass bulb samples.
7. The diode glass bulb defect detection method according to claim 6, characterized in that the anchor box sizes are obtained by running the K-means algorithm for multiple iterations on the VOC data set; when the input image size is 416*416, the YOLOv3 anchor box sizes are {[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119], [116, 90], [156, 198], [373, 326]}.
8. The diode glass bulb defect detection method according to claim 7, characterized in that the loss function integrates the anchor box center coordinate loss, the width and height loss, the confidence error and the classification error; the anchor box center coordinate loss is computed as a sum of squares, and the classification error and the confidence error are computed with binary cross-entropy.
9. The diode glass bulb defect detection method according to claim 1, characterized in that, in the training of the YOLOv3 model on the training image set, the images of the input training set are divided into an S*S grid; each cell of the S*S grid generates 3 bounding boxes whose attributes comprise the center coordinates, width, height, confidence and the probability of belonging to a workpiece crack target; candidate boxes that do not contain a target are eliminated when the object confidence is less than the threshold th1, and non-maximum suppression is then used to select the candidate box with the largest intersection over union (IoU) with the ground-truth box for target prediction; the target prediction formulas are:
$$b_x = \sigma(t_x) + c_x$$
$$b_y = \sigma(t_y) + c_y$$
$$b_w = p_w e^{t_w}$$
$$b_h = p_h e^{t_h}$$
where $b_x$ and $b_y$ are the center coordinates of the bounding box finally predicted by the S*S network, and $b_w$ and $b_h$ are its width and height respectively; $c_x$ and $c_y$ are the coordinate offsets of the grid cell; $p_w$ and $p_h$ are the width and height of the anchor box mapped into the feature map; $t_x, t_y, t_w, t_h$ are parameters to be learned during network training, with $t_w, t_h$ representing the scale of the prediction box, $t_x, t_y$ representing the offset of its center coordinates, and $\sigma$ denoting the sigmoid function; by updating the parameters $t_x, t_y, t_w, t_h$, the prediction box is brought closer and closer to the ground-truth box, and training stops when the value of the network loss function is less than the set threshold th2 or the number of training iterations reaches the maximum N.
10. The diode glass bulb defect detection method according to claim 1, characterized in that the training of the YOLOv3 model on the data set performs 3 bounding box predictions at 3 scales: scale 1 is set by adding several convolution layers after the feature extraction network; with a down-sampling ratio of 32, the output feature map scale is 13*13, used to detect small-scale defect targets; scale 2 is set by up-sampling the penultimate convolution layer of scale 1 by a factor of 2; with a down-sampling ratio of 16, the result is concatenated with the feature map of scale 26*26, twice the size of scale 1, used to detect medium-scale defect targets; scale 3 is set by analogy with scale 2, obtaining a 52*52 feature map, used to detect larger defect targets.
CN202010633906.2A | 2020-07-02 | 2020-07-02 | Diode glass bulb defect detection method | Pending | CN112200762A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010633906.2A (CN112200762A) | 2020-07-02 | 2020-07-02 | Diode glass bulb defect detection method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010633906.2A (CN112200762A) | 2020-07-02 | 2020-07-02 | Diode glass bulb defect detection method

Publications (1)

Publication Number | Publication Date
CN112200762A (en) | 2021-01-08

Family

ID=74006519

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010633906.2A (CN112200762A, Pending) | Diode glass bulb defect detection method | 2020-07-02 | 2020-07-02

Country Status (1)

Country | Link
CN (1) | CN112200762A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113504238A (en)* | 2021-06-04 | 2021-10-15 | 广东华中科技大学工业技术研究院 | Glass surface defect collecting device and detection method
CN115047006A (en)* | 2022-05-20 | 2022-09-13 | 如皋市联拓电子有限公司 | Diode surface quality detection method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108961235A (en)* | 2018-06-29 | 2018-12-07 | 山东大学 | A kind of disordered insulator recognition methods based on YOLOv3 network and particle filter algorithm
CN109636772A (en)* | 2018-10-25 | 2019-04-16 | 同济大学 | The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning
CN110310261A (en)* | 2019-06-19 | 2019-10-08 | 河南辉煌科技股份有限公司 | A catenary suspension string defect detection model training method and defect detection method
CN110838112A (en)* | 2019-11-08 | 2020-02-25 | 上海电机学院 | An insulator defect detection method based on Hough transform and YOLOv3 network
WO2020068784A1 (en)* | 2018-09-24 | 2020-04-02 | Schlumberger Technology Corporation | Active learning framework for machine-assisted tasks
US20200134810A1 (en)* | 2018-10-26 | 2020-04-30 | Taiwan Semiconductor Manufacturing Company Ltd. | Method and system for scanning wafer
CN111292305A (en)* | 2020-01-22 | 2020-06-16 | 重庆大学 | Improved YOLO-V3 metal processing surface defect detection method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108961235A (en)* | 2018-06-29 | 2018-12-07 | 山东大学 | A kind of disordered insulator recognition methods based on YOLOv3 network and particle filter algorithm
WO2020068784A1 (en)* | 2018-09-24 | 2020-04-02 | Schlumberger Technology Corporation | Active learning framework for machine-assisted tasks
CN109636772A (en)* | 2018-10-25 | 2019-04-16 | 同济大学 | The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning
US20200134810A1 (en)* | 2018-10-26 | 2020-04-30 | Taiwan Semiconductor Manufacturing Company Ltd. | Method and system for scanning wafer
CN110310261A (en)* | 2019-06-19 | 2019-10-08 | 河南辉煌科技股份有限公司 | A catenary suspension string defect detection model training method and defect detection method
CN110838112A (en)* | 2019-11-08 | 2020-02-25 | 上海电机学院 | An insulator defect detection method based on Hough transform and YOLOv3 network
CN111292305A (en)* | 2020-01-22 | 2020-06-16 | 重庆大学 | Improved YOLO-V3 metal processing surface defect detection method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FEI ZHOU et al.: "A Generic Automated Surface Defect Detection Based on a Bilinear Model", Applied Sciences *
张广世 et al.: "Gear defect detection based on an improved YOLOv3 network", Laser & Optoelectronics Progress *
牛乾: "Research on surface defect detection technology for diode glass bulbs", China Master's Theses Full-text Database, Information Science and Technology *
郭毅强: "Research on visual inspection of wafer surface defects", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113504238A (en)* | 2021-06-04 | 2021-10-15 | 广东华中科技大学工业技术研究院 | Glass surface defect collecting device and detection method
CN113504238B (en)* | 2021-06-04 | 2023-12-22 | 广东华中科技大学工业技术研究院 | Glass surface defect acquisition device and detection method
CN115047006A (en)* | 2022-05-20 | 2022-09-13 | 如皋市联拓电子有限公司 | Diode surface quality detection method
CN115047006B (en)* | 2022-05-20 | 2024-12-03 | 如皋市联拓电子有限公司 | Diode surface quality detection method

Similar Documents

Publication | Title
CN109977808B (en) | Wafer surface defect mode detection and analysis method
CN108765412B (en) | A method for classifying surface defects of strip steel
CN109509187B (en) | An Efficient Inspection Algorithm for Small Defects in Large Resolution Cloth Images
CN110310259A (en) | A Wood Knot Defect Detection Method Based on Improved YOLOv3 Algorithm
CN113643228B (en) | Nuclear power station equipment surface defect detection method based on improved CenterNet network
CN114973002A (en) | Improved YOLOv5-based ear detection method
WO2022236876A1 (en) | Cellophane defect recognition method, system and apparatus, and storage medium
CN113393438B (en) | A resin lens defect detection method based on convolutional neural network
CN111652853A (en) | A detection method of magnetic particle flaw detection based on deep convolutional neural network
CN108960245A (en) | The detection of tire-mold character and recognition methods, device, equipment and storage medium
CN110853015A (en) | Aluminum profile defect detection method based on improved Faster-RCNN
CN111429418A (en) | Industrial part detection method based on YOLO v3 neural network
CN112233090A (en) | Thin film defect detection method based on improved attention mechanism
CN107328787A (en) | A kind of metal plate and belt surface defects detection system based on depth convolutional neural networks
CN110929795B (en) | Method for quickly identifying and positioning welding spot of high-speed wire welding machine
CN114972316A (en) | Battery case end surface defect real-time detection method based on improved YOLOv5
CN111914902B (en) | A method of traditional Chinese medicine identification and surface defect detection based on deep neural network
CN112686833A (en) | Industrial product surface defect detecting and classifying device based on convolutional neural network
CN111724355A (en) | An image measurement method of abalone body shape parameters
CN117952904A (en) | A method for locating and measuring surface defects of large equipment based on the combination of image and point cloud
CN111127417B (en) | Printing defect detection method based on SIFT feature matching and SSD algorithm improvement
CN114240886B (en) | Steel picture defect detection method in industrial production based on self-supervision contrast characterization learning technology
CN117036243A (en) | Method, device, equipment and storage medium for detecting surface defects of shaving board
CN115829995A (en) | Cloth flaw detection method and system based on pixel-level multi-scale feature fusion
CN112258490A (en) | Low-emissivity coating intelligent damage detection method based on optical and infrared image fusion

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication

Application publication date: 2021-01-08

