
Insulator self-explosion detection method based on deep learning model

Info

Publication number
CN112634216B
Authority
CN
China
Prior art keywords
model
insulator
training
image
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011485662.4A
Other languages
Chinese (zh)
Other versions
CN112634216A (en)
Inventor
王倩
王晔琳
李俊
何复兴
朱龙辉
李宁
李贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN202011485662.4A
Publication of CN112634216A
Application granted
Publication of CN112634216B
Legal status: Active (current)
Anticipated expiration


Abstract

Translated from Chinese

The invention discloses an insulator self-explosion detection method based on a deep learning model. The method includes collecting insulator images, converting the insulator images into single-channel annotation maps, constructing a U-Net model and a CNN model, training the U-Net model and the CNN model with part of the single-channel annotation maps, using the trained U-Net model to improve the pixel accuracy of the remaining single-channel annotation maps and obtain optimal-pixel mask images, and inputting the mask images into the trained CNN model. If the CNN model outputs a value > 0.5, the insulator is considered not to have self-exploded; otherwise, the insulator is considered to have self-exploded. Using the method of the present invention to detect insulator status can effectively reduce manual workload and improve recognition efficiency and clarity.

Description

Translated from Chinese
A method for detecting insulator self-explosion based on a deep learning model

Technical field

The invention belongs to the technical field of electronic component fault detection, and relates to an insulator self-explosion detection method based on a deep learning model.

Background

Insulator strings are important components of high-voltage transmission lines and play an important role in electrical insulation and mechanical support. Insulators are exposed to wildlife and to meteorological conditions such as rain, wind, and snowfall, which make these components prone to cracking, contamination, and even self-explosion. Self-explosion of an insulator element can cause a serious power outage on a transmission line; for power companies the damage is great and the consequences far-reaching, so it is essential to detect the status of insulators in time and prevent self-explosion.

Traditional insulator self-explosion detection requires professionals to inspect video sequences for potential defects in transmission-line components, which is very time-consuming. Self-explosion detection strategies based on traditional computer-vision algorithms can give adequate results on structured images under controlled lighting and background conditions and thereby improve insulator recognition efficiency, but they require manual parameter setting and adjustment and have relatively large errors.

Summary of the invention

The purpose of the present invention is to provide an insulator self-explosion detection method based on a deep learning model, which solves the problem that existing insulator detection methods require manual parameter setting and adjustment and have large errors.

The technical solution adopted by the present invention is an insulator self-explosion detection method based on a deep learning model, which includes collecting insulator images, converting the insulator images into single-channel annotation maps, constructing a U-Net model and a CNN model, training the U-Net model and the CNN model with part of the single-channel annotation maps, using the trained U-Net model to improve the pixel accuracy of the remaining single-channel annotation maps and obtain optimal-pixel mask images, and inputting the mask images into the trained CNN model. If the CNN model outputs a value > 0.5, the insulator is considered not to have self-exploded; otherwise, the insulator is considered to have self-exploded.

The technical features of the present invention further include the following.

The method includes the following steps:

Step 1: collect multiple insulator images of the same insulator and uniformly adjust the pixels of the collected insulator images;

Step 2: use an image annotation tool to annotate the insulator images and convert their format to obtain single-channel annotation maps;

Step 3: build a U-Net model in which the pixels of the feature maps are consistent before and after each convolution operation; train the U-Net model with part of the single-channel annotation maps obtained in step 2, take the training parameters with the smallest loss as the final parameters of the U-Net model, and then use the trained U-Net model to improve the pixel accuracy of the remaining single-channel annotation maps and obtain optimal-pixel mask images;

Step 4: build a convolutional neural network (CNN) model, train it with the single-channel annotation maps used to train the U-Net model, and take the training parameters with the smallest loss as the final parameters of the CNN model, thereby obtaining the trained CNN model;

Step 5: input the mask images obtained in step 3 into the trained CNN model; if the value output by the CNN model is > 0.5, the insulator image is considered complete, i.e., the insulator has not self-exploded; otherwise, the insulator image is considered to have a missing part, i.e., the insulator has self-exploded.

In step 1, a drone, a wire-rolling robot, or a climbing robot is used to collect the insulator images.

The specific operation of step 2 is as follows:

Step 2.1: annotate the insulator images with an image annotation tool, labelling the insulating part as disc and the connecting part as cap;

Step 2.2: save the annotated images in Json format to obtain three-channel annotation maps;

Step 2.3: convert the three-channel annotation maps into single-channel annotation maps using MATLAB.

Step 3 specifically includes the following steps:

Step 3.1: build the U-Net model and, before each convolution layer, zero-pad the feature maps to improve the model so that the pixels of the feature maps are consistent before and after the convolution operation;

Step 3.2: divide the single-channel annotation maps obtained in step 2 into three parts: a training set, a validation set, and a test set;

Step 3.3: train the U-Net model obtained in step 3.1 with an optimizer; input the training set into the U-Net model for training and validate with the validation set after each training pass; if the training parameters with the smallest loss on the validation set satisfy the MIoU criterion, take them as the final parameters of the U-Net model, completing the training of the U-Net model. The MIoU function is:

$$\mathrm{MIoU}=\frac{1}{K+1}\sum_{i=0}^{K}\frac{p_{ii}}{\sum_{j=0}^{K}p_{ij}+\sum_{j=0}^{K}p_{ji}-p_{ii}}$$

where i denotes the true class, j denotes the predicted class, K+1 is the number of classes (including the empty class), $p_{ij}$ is the number of pixels that belong to class i but are predicted as class j, $p_{ii}$ is the number of pixels correctly predicted as class i, and $p_{ji}$ is the number of pixels that belong to class j but are predicted as class i;

Step 3.4: use the trained U-Net model to improve the pixel accuracy of the test set and obtain mask images with optimal pixel accuracy.

In step 3.3, when validating with the validation set, cross entropy is used to evaluate the training results; the loss function used to compute the loss is:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M}y_{ic}\log\left(p_{ic}\right)$$

where i is a validation-set sample, i.e., a single-channel annotation map in the validation set; c is the class index running from 1 to M; M is the total number of validation-set classes; $p_{ic}$ is the predicted probability that sample i belongs to class c; $y_{ic}$ is an indicator variable that equals 1 if sample i belongs to class c and 0 otherwise; and N is the total number of validation-set samples.

In step 3.4, the optimal pixel accuracy is computed as:

$$P_K=\frac{\sum_{i=0}^{K}p_{ii}}{\sum_{i=0}^{K}\sum_{j=0}^{K}p_{ij}}$$

The specific process of step 4 is as follows:

Step 4.1: build the convolutional neural network (CNN) model;

Step 4.1.1: establish the framework of the CNN; the whole network structure includes 3 convolutional layers, 3 fully connected layers, and 3 pooling layers;

Step 4.1.2: use the convolutional and pooling layers for feature extraction and the fully connected layers for classification;

Step 4.1.3: after the last convolutional layer, a Flatten operation vectorizes the feature maps, followed by three fully connected layers, which completes the CNN model;

Step 4.2: train the CNN model with an optimizer; input the training set into the CNN model for training and validate with the binary cross-entropy loss function after each training pass; when the binary cross-entropy loss no longer decreases, stop training and save the current weights as the trained CNN model parameters, completing the training of the CNN model.

In step 4.1, the numbers of neurons in the 3 fully connected layers are 128, 64, and 1, respectively; the convolution kernels of the 3 convolutional layers are all 3x3 with a stride of 1x1; each convolutional layer is followed by a pooling layer with a 2x2 pooling window and a stride of 2x2; after pooling, the number of feature maps remains unchanged and their size is halved.

In step 4.2, the binary cross-entropy loss function is:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\left[y_{i}\log\left(p_{i}\right)+\left(1-y_{i}\right)\log\left(1-p_{i}\right)\right]$$

where $y_i$ is the label of sample i, equal to 1 if sample i belongs to the positive class and 0 if it belongs to the negative class, and $p_i$ is the predicted probability that sample i belongs to the positive class.

The beneficial effects of the present invention are as follows: insulator images are collected and converted into single-channel annotation maps; U-Net and CNN models are built and trained; the trained U-Net model is used to improve the pixel accuracy of the single-channel annotation maps and obtain optimal-pixel mask images; the mask images are input into the trained CNN model, and whether the corresponding insulator has self-exploded is identified from the output. This process requires no manual parameter setting or adjustment, improves recognition accuracy, has a wide range of applications, and is highly practical.

Brief description of the drawings

Figure 1 is a schematic flow chart of the insulator self-explosion detection method based on a deep learning model of the present invention;

Figure 2 is original insulator image 1 used for model testing in the embodiment of the present invention;

Figure 3 is original insulator image 2 used for model testing in the embodiment of the present invention;

Figure 4 is original insulator image 3 used for model testing in the embodiment of the present invention;

Figure 5 is original insulator image 4 used for model testing in the embodiment of the present invention;

Figure 6 is a schematic flow chart of building, training, and applying the U-Net model in the embodiment of the present invention;

Figure 7 is the mask image of original insulator image 1 used for model testing in the embodiment of the present invention;

Figure 8 is the mask image of original insulator image 2 used for model testing in the embodiment of the present invention;

Figure 9 is the mask image of original insulator image 3 used for model testing in the embodiment of the present invention;

Figure 10 is the mask image of original insulator image 4 used for model testing in the embodiment of the present invention;

Figure 11 is a schematic flow chart of building, training, and applying the CNN model in the embodiment of the present invention.

Detailed description

The present invention is described in detail below with reference to the drawings and specific embodiments.

The insulator self-explosion detection method based on a deep learning model of the present invention, referring to Figure 1, includes the following steps:

Step 1: use a drone to collect 26 insulator images of the same insulator, and divide the original 26 images into two parts: 22 for model training and 4 for model testing. Because few insulator images were collected in this embodiment, the 22 selected images were further expanded with data-augmentation techniques; operations from the Augmentor data-augmentation tool, such as image rotation, horizontal flipping, and zooming in and out, were applied with random probabilities to obtain an expanded data set of 550 images, enlarging the training data set to 25 times its original size.

Figure 2 is original insulator image 1 used for model testing, Figure 3 is original insulator image 2, Figure 4 is original insulator image 3, and Figure 5 is original insulator image 4.

The computer used to train the network has limited memory, so to ensure that training completes smoothly the pixels of the original images must be adjusted. The original images vary in size, mostly around 500×500×3, so before training the insulator images are resized uniformly to 256×256×3; a short sketch of the augmentation and resizing step is given below.
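The following is a minimal sketch of this augmentation and resizing step using the Augmentor package and Pillow. The folder names, operation probabilities, and rotation/zoom ranges are illustrative assumptions, not the exact settings of the embodiment.

```python
from pathlib import Path

import Augmentor
from PIL import Image

# 1) Expand the 22 training images to 550 with randomly applied transforms.
pipeline = Augmentor.Pipeline("data/train_images", output_directory="augmented")
pipeline.rotate(probability=0.7, max_left_rotation=15, max_right_rotation=15)
pipeline.flip_left_right(probability=0.5)                       # left-right swap
pipeline.zoom(probability=0.5, min_factor=1.05, max_factor=1.3)  # zoom in / out
pipeline.sample(550)                                            # draw 550 augmented samples

# 2) Resize every image (original and augmented) to 256 x 256 x 3.
#    Assumes the folder contains only image files.
for img_path in Path("data/train_images").rglob("*.*"):
    img = Image.open(img_path).convert("RGB")
    img.resize((256, 256), Image.BILINEAR).save(img_path)
```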

Step 2: use the image annotation tool Labelme to annotate the insulator images and convert their format. The specific operation is as follows:

Step 2.1: annotate the insulator images with Labelme, labelling the insulating part as disc and the connecting part as cap;

Step 2.2: save the annotated images in Json format to obtain three-channel annotation maps;

Step 2.3: convert the three-channel annotation maps into single-channel annotation maps using MATLAB;
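The embodiment performs the conversion to a single-channel label map in MATLAB; as an illustration of the same idea, the sketch below rasterizes a Labelme Json annotation directly into a single-channel mask in Python. The class indices (0 = background, 1 = disc, 2 = cap) and file names are assumptions.

```python
import json

from PIL import Image, ImageDraw

# Assumed class indices for the single-channel map.
LABEL_TO_INDEX = {"disc": 1, "cap": 2}

def labelme_json_to_mask(json_path, out_path):
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)
    h, w = ann["imageHeight"], ann["imageWidth"]
    mask = Image.new("L", (w, h), 0)            # single channel, background = 0
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:
        idx = LABEL_TO_INDEX.get(shape["label"], 0)
        polygon = [tuple(pt) for pt in shape["points"]]
        draw.polygon(polygon, fill=idx)          # rasterize the labelled polygon
    mask.save(out_path)

labelme_json_to_mask("insulator_001.json", "insulator_001_mask.png")
```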

Step 3: referring to Figure 6, build the U-Net model, train it with part of the single-channel annotation maps, and then use the trained U-Net model to improve the pixel accuracy of the remaining single-channel annotation maps and obtain optimal-pixel mask images. This specifically includes the following steps:

Step 3.1: build the U-Net model and, before each convolution layer, zero-pad the feature maps to improve the model so that the pixels of the feature maps are consistent before and after the convolution operation;
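A minimal Keras sketch of such a "same-padding" U-Net is shown below. The encoder depth and filter counts are illustrative assumptions; the patent only specifies the zero-padding behaviour, the 256×256×3 input, and the softmax output, and the three classes (background, disc, cap) are assumed.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # padding="same" zero-pads before each convolution, so the spatial size
    # of the feature map is unchanged by the convolution.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3), num_classes=3):
    inputs = layers.Input(input_shape)
    # Encoder
    c1 = conv_block(inputs, 64);  p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 128);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 256);     p3 = layers.MaxPooling2D()(c3)
    # Bottleneck
    b = conv_block(p3, 512)
    # Decoder with skip connections
    u3 = layers.Conv2DTranspose(256, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.Concatenate()([u3, c3]), 256)
    u2 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.Concatenate()([u2, c2]), 128)
    u1 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.Concatenate()([u1, c1]), 64)
    # Per-pixel softmax over the classes (background, disc, cap assumed).
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c6)
    return Model(inputs, outputs)
```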

Step 3.2: divide the single-channel annotation maps obtained in step 2 into three parts: the training set consists of the 550 expanded insulator images and their corresponding annotation maps, the validation set consists of the original 22 insulator images and their corresponding annotation maps, and the test set consists of the 4 insulator images reserved for model testing;

Step 3.3: train the U-Net model obtained in step 3.1 with the Adam optimizer, with the learning rate set to 0.005 and softmax used as the classification function in the last layer of the network. Input the training set into the U-Net model for training. In the first training run, the number of epochs was set to 30 with 50 iterations per epoch, for a total of 1500 parameter-training steps; after adjusting the parameters, a second training run was performed. After each training pass, validate with the validation set; if the training parameters with the smallest loss on the validation set satisfy the MIoU criterion, take them as the final parameters of the U-Net model, completing the training of the U-Net model. The MIoU function is:

$$\mathrm{MIoU}=\frac{1}{K+1}\sum_{i=0}^{K}\frac{p_{ii}}{\sum_{j=0}^{K}p_{ij}+\sum_{j=0}^{K}p_{ji}-p_{ii}}$$

where i denotes the true class, j denotes the predicted class, K+1 is the number of classes (including the empty class), $p_{ij}$ is the number of pixels that belong to class i but are predicted as class j, $p_{ii}$ is the number of pixels correctly predicted as class i, and $p_{ji}$ is the number of pixels that belong to class j but are predicted as class i;
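As an illustration, the MIoU of a predicted label map against a ground-truth label map can be computed as follows; this is a generic sketch, not code from the patent.

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean IoU over the K+1 classes; y_true and y_pred are integer label maps."""
    ious = []
    for k in range(num_classes):
        p_ii = np.logical_and(y_true == k, y_pred == k).sum()      # correctly predicted pixels
        union = (y_true == k).sum() + (y_pred == k).sum() - p_ii   # sum_j p_ij + sum_j p_ji - p_ii
        if union > 0:
            ious.append(p_ii / union)
    return float(np.mean(ious))

# Toy example with 3 classes (background, disc, cap).
gt = np.random.randint(0, 3, (256, 256))
pred = np.random.randint(0, 3, (256, 256))
print(mean_iou(gt, pred, num_classes=3))
```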

Because this is a multi-class problem, the loss function used to compute the loss in the U-Net model is the categorical cross entropy, which measures whether the parameters of the U-Net model are optimal. The loss function used is:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M}y_{ic}\log\left(p_{ic}\right)$$

where i is a validation-set sample, i.e., a single-channel annotation map in the validation set; c is the class index running from 1 to M; M is the total number of validation-set classes; $p_{ic}$ is the predicted probability that sample i belongs to class c; $y_{ic}$ is an indicator variable that equals 1 if sample i belongs to class c and 0 otherwise; and N is the total number of validation-set samples.

In this embodiment, the batch size is set to 32, and the model corresponding to the epoch with the smallest loss on the validation set is epoch-32, i.e., epoch-32 is the final improved U-Net model after training.
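A hedged sketch of this training loop in Keras follows. The stand-in arrays, the use of the sparse form of the categorical cross entropy for integer label maps, and the checkpoint callback (to keep the parameters with the smallest validation loss) are assumptions consistent with the description, not the embodiment's exact code.

```python
import numpy as np
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint

# Stand-in data: replace with the 550 training images / 22 validation images
# and their integer label maps prepared in steps 1 and 2.
x_train = np.random.rand(8, 256, 256, 3).astype("float32")
y_train = np.random.randint(0, 3, (8, 256, 256))
x_val = np.random.rand(2, 256, 256, 3).astype("float32")
y_val = np.random.randint(0, 3, (2, 256, 256))

unet = build_unet()
unet.compile(optimizer=Adam(learning_rate=0.005),
             loss="sparse_categorical_crossentropy",  # integer label maps assumed
             metrics=["accuracy"])

# Keep only the weights with the smallest validation loss, mirroring
# "take the training parameters with the smallest loss on the validation set".
checkpoint = ModelCheckpoint("unet_best.h5", monitor="val_loss",
                             save_best_only=True, save_weights_only=True)

unet.fit(x_train, y_train,
         batch_size=32, epochs=30,
         validation_data=(x_val, y_val),
         callbacks=[checkpoint])
unet.load_weights("unet_best.h5")
```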

Step 3.4: use the trained U-Net model to improve the pixel accuracy of the test set and obtain mask images with optimal pixel accuracy.

The optimal pixel accuracy $P_K$ is computed as:

$$P_K=\frac{\sum_{i=0}^{K}p_{ii}}{\sum_{i=0}^{K}\sum_{j=0}^{K}p_{ij}}$$

Figure 7 is the mask image of original insulator image 1 used for model testing, Figure 8 is the mask image of original insulator image 2, Figure 9 is the mask image of original insulator image 3, and Figure 10 is the mask image of original insulator image 4.

Step 4: referring to Figure 11, build the convolutional neural network (CNN) model, train it with the single-channel annotation maps, and obtain the trained CNN model. The specific process is as follows:

Step 4.1: build the convolutional neural network CNN model.

Step 4.1.1: establish the framework of the CNN. The base network is the VGG16 network from the ImageNet competition, with the final output layer replaced by a two-class softmax layer; the whole network structure includes 3 convolutional layers, 3 fully connected layers, and 3 pooling layers.

Of the 3 fully connected layers, the first two use the ReLU activation function and the last uses sigmoid; their numbers of neurons are 128, 64, and 1, respectively. The convolution kernels of the 3 convolutional layers are all 3x3 with a stride of 1x1 and ReLU activation; each convolutional layer is followed by a pooling layer with a 2x2 pooling window and a stride of 2x2; after pooling, the number of feature maps remains unchanged and their size is halved.

Step 4.1.2: use the convolutional and pooling layers for feature extraction and the fully connected layers for classification;

Step 4.1.3: to avoid overfitting, Dropout regularization with a rate of 0.2 is applied after the pooling operation; after the last convolutional layer, a Flatten operation vectorizes the feature maps, followed by the three fully connected layers, which completes the construction of the CNN model. A sketch of such a classifier is given below.
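A minimal Keras sketch of a classifier with this 3-conv / 3-pool / 3-dense layout follows. The convolution filter counts (32, 64, 128) and the single-channel input shape are assumptions, since the patent does not state them.

```python
from tensorflow.keras import layers, models

def build_cnn(input_shape=(256, 256, 1)):
    # Input: the single-channel mask image produced by the trained U-Net.
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), strides=(1, 1), padding="same",
                            activation="relu", input_shape=input_shape))
    model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(layers.Conv2D(64, (3, 3), strides=(1, 1), padding="same", activation="relu"))
    model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(layers.Conv2D(128, (3, 3), strides=(1, 1), padding="same", activation="relu"))
    model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(layers.Dropout(0.2))        # regularization after pooling, as in step 4.1.3
    model.add(layers.Flatten())           # vectorize the final feature maps
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(1, activation="sigmoid"))  # > 0.5 -> intact, <= 0.5 -> self-exploded
    return model
```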

Step 4.2: train the CNN model with the Adam optimizer and a learning rate of 0.0001. Input the training set into the CNN model for training and validate with the binary cross-entropy loss function after each training pass; when the binary cross-entropy loss no longer decreases, stop training and save the current weights as the trained CNN model parameters, completing the training of the CNN model.

Unlike the improved U-Net model, the CNN model solves a binary classification problem, so the loss function used to compute the loss is the binary cross entropy:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\left[y_{i}\log\left(p_{i}\right)+\left(1-y_{i}\right)\log\left(1-p_{i}\right)\right]$$

where $y_i$ is the label of sample i, equal to 1 if sample i belongs to the positive class and 0 if it belongs to the negative class, and $p_i$ is the predicted probability that sample i belongs to the positive class.

In model training, the number of epochs was set to 20 and batch_size to 10, for a total of 560 parameter-training steps.
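A hedged training sketch follows. The stand-in arrays are assumptions, and interpreting "stop when the binary cross-entropy no longer decreases" as early stopping on the validation loss is an assumption as well.

```python
import numpy as np
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping

# Stand-in data: mask images from the U-Net and 0/1 labels (1 = intact, 0 = self-exploded).
x_masks = np.random.rand(20, 256, 256, 1).astype("float32")
y_labels = np.random.randint(0, 2, (20,)).astype("float32")

cnn = build_cnn()
cnn.compile(optimizer=Adam(learning_rate=0.0001),
            loss="binary_crossentropy", metrics=["accuracy"])

# Stop once the binary cross-entropy stops decreasing and keep the best weights.
early_stop = EarlyStopping(monitor="val_loss", patience=2, restore_best_weights=True)

cnn.fit(x_masks, y_labels, batch_size=10, epochs=20,
        validation_split=0.2, callbacks=[early_stop])
cnn.save_weights("cnn_best.h5")
```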

Step 5: input the 4 mask images obtained in step 3 into the trained CNN model. The CNN model outputs a value between 0 and 1. If the output value is > 0.5, the insulator image is considered complete, i.e., the insulator has not self-exploded; if the output value is ≤ 0.5, the insulator image is considered to have a missing part, i.e., the insulator has self-exploded. The results are shown in Table 1.


Table 1. CNN model test results

As can be seen from Table 1, the trained CNN model has learned the features of complete and missing insulators well; the results on the test set are nearly perfect, with a test accuracy of 100%.
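To make step 5 concrete, the following sketch thresholds the CNN output at 0.5 for a batch of mask images. The preprocessing (scaling to [0, 1]) and the variable names are assumptions; the stand-in array replaces the four real test masks.

```python
import numpy as np

def detect_self_explosion(cnn, mask_batch):
    """mask_batch: (N, 256, 256, 1) array of U-Net mask images scaled to [0, 1]."""
    scores = cnn.predict(mask_batch).ravel()
    for idx, score in enumerate(scores, start=1):
        verdict = "complete (no self-explosion)" if score > 0.5 else "missing part (self-explosion)"
        print(f"mask image {idx}: score = {score:.3f} -> {verdict}")
    return scores > 0.5

# Example with the four test masks (stand-in array here).
test_masks = np.random.rand(4, 256, 256, 1).astype("float32")
detect_self_explosion(cnn, test_masks)
```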

Claims (8)

Translated from Chinese

1. An insulator self-explosion detection method based on a deep learning model, characterized by comprising: collecting insulator images, converting the insulator images into single-channel annotation maps, constructing a U-Net model and a CNN model, training the U-Net model and the CNN model with part of the single-channel annotation maps, using the trained U-Net model to improve the pixel accuracy of the remaining single-channel annotation maps and obtain optimal-pixel mask images, and inputting the mask images into the trained CNN model; if the CNN model outputs a value > 0.5, the insulator is considered not to have self-exploded; otherwise, the insulator is considered to have self-exploded;
the method comprising the following steps:
Step 1: collecting multiple insulator images of the same insulator and uniformly adjusting the pixels of the collected insulator images;
Step 2: using an image annotation tool to annotate the insulator images and convert their format, obtaining single-channel annotation maps;
Step 3: constructing the U-Net model so that the pixels of the feature maps are consistent before and after each convolution operation; training the U-Net model with part of the single-channel annotation maps obtained in step 2, taking the training parameters with the smallest loss as the final parameters of the U-Net model, and then using the trained U-Net model to improve the pixel accuracy of the remaining single-channel annotation maps and obtain optimal-pixel mask images;
wherein step 3 specifically comprises the following steps:
Step 3.1: constructing the U-Net model and, before each convolution layer, zero-padding the feature maps to improve the model so that the pixels of the feature maps are consistent before and after the convolution operation;
Step 3.2: dividing the single-channel annotation maps obtained in step 2 into three parts: a training set, a validation set, and a test set;
Step 3.3: training the U-Net model obtained in step 3.1 with an optimizer, inputting the training set into the U-Net model for training, and validating with the validation set after each training pass; if the training parameters with the smallest loss on the validation set satisfy the MIoU criterion, taking them as the final parameters of the U-Net model, thereby completing the training of the U-Net model; the MIoU function being

$$\mathrm{MIoU}=\frac{1}{K+1}\sum_{i=0}^{K}\frac{p_{ii}}{\sum_{j=0}^{K}p_{ij}+\sum_{j=0}^{K}p_{ji}-p_{ii}}$$

where i denotes the true class, j denotes the predicted class, K+1 is the number of classes including the empty class, $p_{ij}$ is the number of pixels that belong to class i but are predicted as class j, $p_{ii}$ is the number of pixels correctly predicted as class i, and $p_{ji}$ is the number of pixels that belong to class j but are predicted as class i;
Step 3.4: using the trained U-Net model to improve the pixel accuracy of the test set and obtain mask images with optimal pixel accuracy;
Step 4: constructing a convolutional neural network (CNN) model, training it with the single-channel annotation maps used to train the U-Net model, and taking the training parameters with the smallest loss as the final parameters of the CNN model, thereby obtaining the trained CNN model;
Step 5: inputting the mask images obtained in step 3 into the trained CNN model; if the value output by the CNN model is > 0.5, the insulator image is considered complete, i.e., the insulator has not self-exploded; otherwise, the insulator image is considered to have a missing part, i.e., the insulator has self-exploded.

2. The insulator self-explosion detection method based on a deep learning model according to claim 1, characterized in that in step 1, a drone, a wire-rolling robot, or a climbing robot is used to collect the insulator images.

3. The insulator self-explosion detection method based on a deep learning model according to claim 1, characterized in that the specific operation of step 2 is as follows:
Step 2.1: annotating the insulator images with an image annotation tool, labelling the insulating part as disc and the connecting part as cap;
Step 2.2: saving the annotated images in Json format to obtain three-channel annotation maps;
Step 2.3: converting the three-channel annotation maps into single-channel annotation maps using MATLAB.

4. The insulator self-explosion detection method based on a deep learning model according to claim 1, characterized in that in step 3.3, when validating with the validation set, cross entropy is used to evaluate the training results, and the loss function used to compute the loss is

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M}y_{ic}\log\left(p_{ic}\right)$$

where i is a validation-set sample, i.e., a single-channel annotation map in the validation set; c is the class index running from 1 to M; M is the total number of validation-set classes; $p_{ic}$ is the predicted probability that sample i belongs to class c; $y_{ic}$ is an indicator variable that equals 1 if sample i belongs to class c and 0 otherwise; and N is the total number of validation-set samples.

5. The insulator self-explosion detection method based on a deep learning model according to claim 4, characterized in that in step 3.4, the optimal pixel accuracy is computed as

$$P_K=\frac{\sum_{i=0}^{K}p_{ii}}{\sum_{i=0}^{K}\sum_{j=0}^{K}p_{ij}}$$

6. The insulator self-explosion detection method based on a deep learning model according to claim 5, characterized in that the specific process of step 4 is as follows:
Step 4.1: constructing the convolutional neural network CNN model;
Step 4.1.1: establishing the framework of the CNN, the whole network structure comprising 3 convolutional layers, 3 fully connected layers, and 3 pooling layers;
Step 4.1.2: using the convolutional and pooling layers for feature extraction and the fully connected layers for classification;
Step 4.1.3: after the last convolutional layer, vectorizing the feature maps with a Flatten operation, followed by three fully connected layers, to obtain the CNN model;
Step 4.2: training the CNN model with an optimizer, inputting the training set into the CNN model for training, validating with the binary cross-entropy loss function after each training pass, and, when the binary cross-entropy loss no longer decreases, stopping training and saving the current weights as the trained CNN model parameters, thereby completing the training of the CNN model.

7. The insulator self-explosion detection method based on a deep learning model according to claim 6, characterized in that in step 4.1, the numbers of neurons in the 3 fully connected layers are 128, 64, and 1, respectively; the convolution kernels of the 3 convolutional layers are all 3x3 with a stride of 1x1; each convolutional layer is followed by a pooling layer with a 2x2 pooling window and a stride of 2x2; and after pooling, the number of feature maps remains unchanged while their size is reduced to half.

8. The insulator self-explosion detection method based on a deep learning model according to claim 6, characterized in that in step 4.2, the binary cross-entropy loss function is

$$L=-\frac{1}{N}\sum_{i=1}^{N}\left[y_{i}\log\left(p_{i}\right)+\left(1-y_{i}\right)\log\left(1-p_{i}\right)\right]$$

where $y_i$ is the label of sample i, equal to 1 if sample i belongs to the positive class and 0 if it belongs to the negative class, and $p_i$ is the predicted probability that sample i belongs to the positive class.
CN202011485662.4A | 2020-12-16 | 2020-12-16 | Insulator self-explosion detection method based on deep learning model | Active | CN112634216B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011485662.4A | 2020-12-16 | 2020-12-16 | Insulator self-explosion detection method based on deep learning model

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011485662.4A | 2020-12-16 | 2020-12-16 | Insulator self-explosion detection method based on deep learning model

Publications (2)

Publication Number | Publication Date
CN112634216A (en) | 2021-04-09
CN112634216B | 2024-02-09

Family

ID=75313680

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011485662.4A (Active, CN112634216B) | Insulator self-explosion detection method based on deep learning model | 2020-12-16 | 2020-12-16

Country Status (1)

Country | Link
CN (1) | CN112634216B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114463544A (en) * | 2022-01-27 | 2022-05-10 | 南京甄视智能科技有限公司 | A fast labeling method for semantic segmentation of irregular objects
CN114639101B (en) * | 2022-03-21 | 2025-02-07 | 陕西科技大学 | Emulsion droplet identification system, method, computer device and storage medium


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106780438A * | 2016-11-11 | 2017-05-31 | 广东电网有限责任公司清远供电局 | Insulator defect detection method and system based on image processing
WO2020062088A1 * | 2018-09-28 | 2020-04-02 | 安徽继远软件有限公司 | Image identification method and device, storage medium, and processor
WO2020172838A1 * | 2019-02-26 | 2020-09-03 | 长沙理工大学 | Image classification method for improvement of auxiliary classifier GAN
CN109934222A * | 2019-03-01 | 2019-06-25 | 长沙理工大学 | A method for identifying the self-explosion of insulator strings based on transfer learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
朱有产, 王雯瑶. Insulator target recognition method based on improved Mask R-CNN. 微电子学与计算机, 2020(02). *
王义军, 曹培培, 王雪松, 闫星宇. Research on an insulator self-explosion detection method based on deep learning. 东北电力大学学报, 2020(03). *
陈俊杰, 叶东华, 产焰萍, 陈凌睿. Insulator fault detection based on the Faster R-CNN model. 电工电气, 2020(04). *

Also Published As

Publication number | Publication date
CN112634216A (en) | 2021-04-09

Similar Documents

Publication | Title
CN110827251B (en) Power transmission line locking pin defect detection method based on aerial images
CN110598736B (en) Power equipment infrared image fault positioning, identification and prediction method
CN112070134B (en) Power equipment image classification method, device, power equipment and storage medium
CN114283117A (en) Insulator defect detection method based on an improved YOLOv3 convolutional neural network
CN109858352B (en) Fault diagnosis method based on compressed sensing and an improved multi-scale network
CN116863274B (en) Semi-supervised learning-based steel plate surface defect detection method and system
CN109934222A (en) Method for identifying the self-explosion of insulator strings based on transfer learning
CN107564025A (en) Power equipment infrared image semantic segmentation method based on a deep neural network
CN110838112A (en) Insulator defect detection method based on the Hough transform and the YOLOv3 network
CN107449994A (en) Partial discharge fault diagnosis method based on CNN-DBN networks
CN115861263A (en) Insulator defect image detection method based on an improved YOLOv5 network
CN116229380A (en) Method for identifying bird species related to bird-related faults in substations
CN108154072A (en) Automatic detection of insulator faults in aerial images based on deep convolutional neural networks
CN111444939A (en) Small-scale equipment component detection method based on weakly supervised collaborative learning in open scenarios in the power field
CN112419268A (en) Transmission line image defect detection method, device, equipment and medium
CN116503318A (en) Method, system and equipment for aerial insulator multi-defect detection combining CAT-BiFPN and an attention mechanism
CN112634216B (en) Insulator self-explosion detection method based on deep learning model
CN117113066B (en) Method for detecting defects in transmission line insulators based on computer vision
CN111598854A (en) Complex-texture small-defect segmentation method based on a rich robust convolution feature model
CN110751642A (en) Insulator crack detection method and system
CN116434051A (en) Transmission line foreign matter detection method, system and storage medium
CN112036472A (en) Method and system for visual image classification in power systems
CN113344915B (en) Method and system for defect detection of key components of power transmission lines
CN114897857A (en) Defect detection method for solar cells based on a lightweight neural network
CN113989209A (en) Power line foreign matter detection method based on Faster R-CNN

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
OL01 | Intention to license declared
