CN108520516A - A Crack Detection and Segmentation Method for Bridge Pavement Based on Semantic Segmentation - Google Patents

A Crack Detection and Segmentation Method for Bridge Pavement Based on Semantic Segmentation

Info

Publication number
CN108520516A
Authority
CN
China
Prior art keywords
layers
crack
denseblock
convolution
semantic segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810309151.3A
Other languages
Chinese (zh)
Inventor
Li Liangfu
Sun Ruiyun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Normal University
Priority to CN201810309151.3A
Publication of CN108520516A
Legal status: Pending (current)

Abstract

The invention belongs to the fields of computer vision and deep learning, and in particular relates to a bridge pavement crack detection and segmentation method based on semantic segmentation. The samples in the data set are manually annotated by semantic segmentation to produce the training labels; the number of images in the data set is then expanded by data augmentation; the prepared training set is fed into the FC-DenseNet103 network model for training, and cracks are finally extracted from the crack images of the collected test set. Traditional crack detection mostly relies on edge detection, morphology or thresholding, which require parameters to be set and tuned manually; the known deep learning methods are built on the assumption that the images are little affected by noise and that the crack targets are clear, underestimating the complexity of bridge pavement images, so they can hardly meet the needs of engineering applications. The invention combines a semantic segmentation algorithm to provide an automatic bridge pavement crack detection and segmentation method suitable for complex backgrounds.

Description

Translated from Chinese
A Crack Detection and Segmentation Method for Bridge Pavement Based on Semantic Segmentation

Technical Field

The invention belongs to the fields of computer vision and deep learning, and in particular relates to a method for detecting and segmenting bridge pavement cracks based on semantic segmentation.

Background Art

Detecting and segmenting bridge cracks with effective means plays a very important role in ensuring the safety and normal operation of public transportation, and has long received extensive attention from academic circles at home and abroad. From traditional image processing techniques to the current popularity of machine learning and deep learning, scholars have continually applied new techniques to the detection and segmentation of bridge pavement cracks and have achieved some excellent research results. Kirschke et al. proposed a pavement crack segmentation algorithm based on the image histogram, which extracts crack information by threshold segmentation according to the characteristics of the histogram. Sun Liang et al. proposed a crack detection method based on an adaptive-threshold Canny algorithm; it improves on the shortcoming that the threshold can only be selected manually, and extracts the crack image by combining the Harris feature detector with the gradient value of each pixel in the image. Landstrom et al. proposed a fully automatic crack monitoring system, which combines morphological processing with a logistic regression algorithm to detect cracks and uses a statistical classification method to filter out noise and improve detection accuracy. With the development of science and technology, algorithms applying deep learning to bridge crack detection soon appeared. Chen Yao et al. proposed a bridge crack detection and classification method based on a wall-climbing robot; it applies non-contact image acquisition from machine vision and the support vector machine algorithm from machine learning, and combines wavelet transform with morphological analysis to extract and identify cracks, beginning the exploration of the deep learning field. Liu Honggong et al. proposed bridge crack detection and recognition based on a convolutional neural network; the algorithm combines machine vision with convolutional neural network technology, improves the convolutional neural network (CNN) model for crack classification, and finally presents a new intelligent crack detection scheme. Zhang Lei et al. proposed a pavement crack detection algorithm based on a deep convolutional neural network, which trains a supervised deep convolutional neural network to classify every image patch in the collected images and produces good results. The bridge crack detection algorithms above achieve good experimental results because the collected images have high contrast and low noise, and the scenes are relatively simple without any obstacles. In real life, however, such ideal conditions are rare: actual bridge pavements often contain obstacles such as water stains, lane lines and fallen leaves, or the pavement texture is extremely complex, so traditional methods can hardly meet engineering needs.

To address the deficiencies of the above research, a method for detecting and segmenting bridge pavement cracks under complex backgrounds based on semantic segmentation is proposed.

Summary of the Invention

In order to solve the problems in the prior art that detection and segmentation methods are restricted by environmental factors and are inaccurate under complex backgrounds, the present invention provides a bridge pavement crack detection and segmentation method based on semantic segmentation. The technical problem to be solved by the present invention is achieved through the following technical solutions:

A bridge pavement crack detection and segmentation method based on semantic segmentation, comprising the following steps:

Step One: Data set collection. Crack image samples are obtained by photographing continuously along the direction of the pavement cracks, and the corresponding labels are made manually: the crack images are annotated for semantic segmentation, with the cracks in a crack image marked in one single color and all distractors other than cracks as well as the background set to another uniform single color. The data set is augmented, and the augmented data set is randomly divided into a training set and a test set;

Step Two: The crack images in the training set are fed into the FC-DenseNet103 network model in batches for training. The specific method is as follows:

Step 1: Perform one 3*3 convolution on the crack images in the training set;

Step 2: Feed the convolution result into a DenseBlock module containing 4 layers;

Step 3: Apply a Transition Down operation to the result of Step 2 to reduce the resolution of the crack image;

Step 4: Set the number of layers of the DenseBlock module to 5, 7, 10 and 12 in turn, repeating Step 2 and Step 3 four times;

Step 5: Feed the result of Step 4 into the Bottleneck composed of 15 layers, completing all the down-sampling, and perform the concatenation of multiple features;

Step 6: Feed the output of the previous layer into an up-sampling path composed of Transition Up and DenseBlock, where the DenseBlock has 12 layers, corresponding to the down-sampling path;

Step 7: Set the number of layers of the DenseBlock in Step 6 to 10, 7, 5 and 4 in turn, repeating Step 6 four times;

Step 8: Perform a 1*1 convolution operation on the output of Step 7;

Step 9: Feed the result of Step 8 into the softmax layer for decision, outputting the probabilities of crack and non-crack;

Step Three: After the training in Step Two is completed, the crack images in the test set are tested with the trained FC-DenseNet103 network model to obtain the test results.

Further, the FC-DenseNet103 network model in Step Two comprises a down-sampling path composed of DenseBlock and Transition Down, an up-sampling path composed of DenseBlock and Transition Up, and a softmax function.

Further, the FC-DenseNet103 network model consists of 103 convolutional layers in total: the first convolution acts directly on the input image, the down-sampling path composed of DenseBlocks contains 38 convolutional layers, the Bottleneck contains 15 convolutional layers, and the up-sampling path composed of DenseBlocks contains 38 convolutional layers; the FC-DenseNet103 network model also contains 5 Transition Down modules, each containing one convolution, 5 Transition Up modules, each containing one transposed convolution, and the 1*1 convolution of the last layer of the network.

Further, after the down-sampling in Step 5 is completed, a concatenation operation is performed on multiple output features, specifically expressed as formula (1):

X_l = H_l([X_0, X_1, ..., X_{l-1}])    (1)

where l denotes the layer index, X_l denotes the output of layer l, [X_0, X_1, ..., X_{l-1}] denotes the concatenation of the output feature maps of layers 0 to l-1, and H_l(·) denotes the combination of Batch Normalization, ReLU and a 3*3 convolution.

Further, the DenseBlock module containing 4 layers in Step 2 has layers composed of Batch Normalization, ReLU, a 3*3 convolution and Dropout. Dropout means that, during the training of the deep learning network, neural network units are temporarily dropped from the network with a certain probability, so that each mini-batch trains a different network, where Dropout = 0.2.

Further, the specific algorithm of Batch Normalization is as follows:

The Batch Normalization algorithm normalizes the input of each layer in every iteration, normalizing the distribution of the input data to a distribution with mean 0 and variance 1, specifically as formula (2):

x̂_k = (x_k − E[x_k]) / √(Var[x_k])    (2)

where x_k denotes the k-th dimension of the input data, E[x_k] denotes the mean of the k-th dimension, and √(Var[x_k]) denotes the standard deviation;

The Batch Normalization algorithm introduces two learnable variables γ and β, specifically as formula (3):

y_k = γ·x̂_k + β    (3)

γ and β are used to restore the data distribution that the previous layer should have learned.

Further, the ReLU uses the continuous non-linear activation function Rectifier, calculated as shown in formula (4):

rectifier(x) = max(0, x)    (4).

Further, the Transition Down is used to reduce the spatial dimensions of the feature maps and consists of Batch Normalization, ReLU, a 1*1 convolution and a 2*2 pooling operation, where the 1*1 convolution is used to preserve the number of feature maps and the 2*2 pooling operation is used to reduce the resolution of the feature maps. The Transition Up consists of one transposed convolution and is used to recover the spatial resolution of the input image; the transposed convolution is applied only to the feature maps of the last DenseBlock.

Further, the Bottleneck is a DenseBlock composed of 15 layers.

Further, the FC-DenseNet103 network model uses Filter Concatenation to link the feature maps along the depth dimension.

Compared with the prior art, the present invention has the following beneficial effects:

Most traditional crack detection methods use edge detection, morphology or thresholding, which require parameters to be set and adjusted manually. With the rapid development of deep learning, such methods have been successfully applied to the field of bridge crack detection; although they are adaptive and no longer require manual parameter setting and adjustment, the currently known deep learning methods are all built on the assumption that the images are little affected by noise and that the crack targets are clear, underestimating the complexity of bridge pavement images and making it difficult to meet the needs of engineering applications. The present invention applies the DenseNet structure to bridge pavement crack detection and extraction and achieves remarkable results, breaking the limitation of a single background in previous bridge crack detection and offering greater practical value.

Description of the Drawings

Figure 1 is an image of bridge pavement cracks.

Figure 2 shows crack images and the visualized labels of the crack images.

Figure 3 shows images of the augmented data set.

Figure 4 is a schematic diagram of a DenseBlock.

Figure 5 shows the specific structure of the FC-DenseNet103 network model.

Figure 6 shows the specific structure of a layer.

Figure 7 shows the specific structure of Transition Down.

Figure 8 shows the specific structure of Transition Up.

Figure 9 shows some of the test results.

Detailed Description of the Embodiments

The present invention is described in further detail below in conjunction with specific embodiments, but the embodiments of the present invention are not limited thereto.

A bridge pavement crack detection and segmentation method based on semantic segmentation comprises the following steps:

Step One: Data set collection. An unmanned aerial vehicle (UAV) flies along the direction of the pavement cracks and photographs continuously to obtain crack images. To perform semantic segmentation on the crack images, the corresponding labels must be made manually for the samples in the data set. In the specific annotation process, the cracks in a crack image are marked in one single color, and all distractors other than cracks as well as the background are set to another uniform single color. Implementing the semantic-segmentation-based bridge pavement crack detection and segmentation method requires a large number of pavement crack images with semantic category labels as the training set and test set; however, so far there is no publicly available data set with category labels for the semantic segmentation of bridge pavement crack images anywhere in the world, so a data set for bridge pavement crack image detection and segmentation has to be created by ourselves. Since manually making image labels also involves a considerable workload, the data set augmentation method with the highest efficiency and the smallest amount of computation should be adopted. The specific data augmentation methods used are:

a. extract random 224×224 patches from the 256×256 images;

b. apply horizontal reflection and vertical reflection to the randomly cropped patches;

The augmented data set is then randomly divided into a training set and a test set.
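A minimal sketch of this augmentation, assuming the images are loaded as 256×256 NumPy arrays; the function name and the use of NumPy are illustrative only, and the same crop and flips would also be applied to the corresponding label image:

```python
import numpy as np

def augment(image, crop_size=224, rng=np.random.default_rng()):
    """Random 224*224 crop of a 256*256 image plus its horizontal and
    vertical reflections, as in items a and b above."""
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop_size + 1)
    left = rng.integers(0, w - crop_size + 1)
    patch = image[top:top + crop_size, left:left + crop_size]
    return patch, np.fliplr(patch), np.flipud(patch)

image = np.zeros((256, 256, 3), dtype=np.uint8)  # placeholder for a real crack image
patches = augment(image)
print([p.shape for p in patches])                # three 224*224*3 patches
```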

Step Two: The crack images in the training set are fed into the FC-DenseNet103 network model in batches for training. The specific method is as follows:

Step 1: Perform one 3*3 convolution on the crack images in the training set;

Step 2: Feed the convolution result into a DenseBlock module containing 4 layers;

Step 3: Apply a Transition Down operation to the result of Step 2 to reduce the resolution of the crack image;

Step 4: Set the number of layers of the DenseBlock module to 5, 7, 10 and 12 in turn, repeating Step 2 and Step 3 four times;

Step 5: Feed the result of Step 4 into the Bottleneck composed of 15 layers, completing all the down-sampling, and perform the concatenation of multiple features;

Step 6: Feed the output of the previous layer into an up-sampling path composed of Transition Up and DenseBlock, where the DenseBlock has 12 layers, corresponding to the down-sampling path;

Step 7: Set the number of layers of the DenseBlock in Step 6 to 10, 7, 5 and 4 in turn, repeating Step 6 four times;

Step 8: Perform a 1*1 convolution operation on the output of Step 7;

Step 9: Feed the result of Step 8 into the softmax layer for decision, outputting the probabilities of crack and non-crack;

Step Three: After the training in Step Two is completed, the crack images in the test set are tested with the trained FC-DenseNet103 network model to obtain the test results.

As shown in Figure 5, Figure 6, Figure 7 and Figure 8, the FC-DenseNet103 network model in Step Two comprises a down-sampling path composed of DenseBlock and Transition Down, an up-sampling path composed of DenseBlock and Transition Up, and a softmax function. The up-sampling path composed of DenseBlock and Transition Up is used to recover the spatial resolution of the input image, where m denotes the number of feature maps and c denotes the number of final classes.

The FC-DenseNet103 network model consists of 103 convolutional layers in total: the first convolution acts directly on the input image, the down-sampling path composed of DenseBlocks contains 38 convolutional layers, the Bottleneck contains 15 convolutional layers, and the up-sampling path composed of DenseBlocks contains 38 convolutional layers. The FC-DenseNet103 network model also contains 5 Transition Down modules, each containing one convolution, 5 Transition Up modules, each containing one transposed convolution, and the 1*1 convolution of the last layer of the network.
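As a quick consistency check of these counts, using the DenseBlock layer numbers given in Step Two (4, 5, 7, 10, 12 on the way down and 12, 10, 7, 5, 4 on the way up):

```python
# 1 input conv + down-path DenseBlock layers + 15-layer Bottleneck
# + up-path DenseBlock layers + 5 Transition Down convs
# + 5 Transition Up transposed convs + final 1*1 conv
total = 1 + (4 + 5 + 7 + 10 + 12) + 15 + (12 + 10 + 7 + 5 + 4) + 5 + 5 + 1
print(total)  # 103
```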

A Dense Convolutional Network (DenseNet) is a convolutional neural network with dense connections. In this network there is a direct connection between any two layers; that is, the input of each layer of the network is the union of the outputs of all preceding layers, and the feature maps learned by that layer are passed directly to all subsequent layers as input. In a traditional convolutional neural network with L layers there are L connections, but in a DenseNet there are L(L+1)/2 connections, specifically expressed as the following formula (1):

X_l = H_l([X_0, X_1, ..., X_{l-1}])    (1)

where l denotes the layer index, X_l denotes the output of layer l, [X_0, X_1, ..., X_{l-1}] denotes the concatenation of the output feature maps of layers 0 to l-1, and H_l(·) denotes the combination of Batch Normalization, ReLU and a 3*3 convolution.

It is well known that, to a certain extent, the deeper the network model, the better the results; however, the deeper the network, the harder it is to train, because during the training of a convolutional network the parameter changes of a preceding layer affect the following layers, and this influence is continually amplified as the network depth increases. When convolutional networks are trained, the vast majority use batch gradient descent; as the input data keep changing and the parameters in the network are continually adjusted, the distribution of the input data of each layer of the network keeps changing, so each layer has to adapt constantly to this new data distribution during training, which makes the network difficult to train and to fit. To address this problem, the present invention introduces a Batch Normalization layer into the training process.

The Batch Normalization algorithm normalizes the input of each layer in every iteration, normalizing the distribution of the input data to a distribution with mean 0 and variance 1, specifically as formula (2):

x̂_k = (x_k − E[x_k]) / √(Var[x_k])    (2)

where x_k denotes the k-th dimension of the input data, E[x_k] denotes the mean of the k-th dimension, and √(Var[x_k]) denotes the standard deviation;

The Batch Normalization algorithm introduces two learnable variables γ and β, specifically as formula (3):

y_k = γ·x̂_k + β    (3)

γ and β are used to restore the data distribution that the previous layer should have learned, and y denotes the output value.
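The following is a minimal NumPy sketch of formulas (2) and (3) for one mini-batch; the 2-D input shape, the epsilon term for numerical stability, and the function name are assumptions for illustration only:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each dimension k of a mini-batch to mean 0 and variance 1,
    then rescale with the learnable variables gamma and beta."""
    mean = x.mean(axis=0)                      # E[x_k]
    var = x.var(axis=0)                        # Var[x_k]
    x_hat = (x - mean) / np.sqrt(var + eps)    # formula (2)
    return gamma * x_hat + beta                # formula (3)

x = np.random.randn(8, 4) * 3.0 + 2.0          # toy mini-batch: 8 samples, 4 dimensions
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(3), y.var(axis=0).round(3))  # approx. 0 and 1
```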

In order to enhance the expressive power of the network, deep learning introduces continuous non-linear activation functions; the activation functions generally used in networks are the Sigmoid function and the Rectifier function, the latter being used by ReLU and calculated as shown in formula (4):

rectifier(x) = max(0, x)    (4).

Since the activation function ReLU is generally considered to have a biological interpretation, and ReLU has been shown to fit better than the Sigmoid function, ReLU is chosen as the activation function of the model.

According to formula (1), multiple output feature maps need to be concatenated, and a necessary condition for the concatenation operation is that the feature maps have the same size. Down-sampling layers are indispensable in a convolutional network; their role is to reduce dimensionality by changing the size of the feature maps. Therefore, to allow down-sampling in our architecture while still completing the concatenation operation smoothly, the network is divided into multiple densely connected dense blocks (DenseBlocks), and the feature maps within each DenseBlock have the same size.

The DenseBlock module in Step 2 contains 4 layers, each composed of Batch Normalization, ReLU, a 3*3 convolution and Dropout. Dropout means that, during the training of the deep learning network, neural network units are temporarily dropped from the network with a certain probability, so that each mini-batch trains a different network, where Dropout = 0.2; using a Dropout layer effectively prevents over-fitting and improves the experimental accuracy.
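A minimal PyTorch-style sketch of such a layer and of a DenseBlock implementing the dense concatenation of formula (1) is given below; the growth rate of 16 feature maps per layer is implied by the arithmetic 48 + 4*16 = 112 later in this embodiment, and the class names and example channel counts are assumptions for illustration:

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One layer of a DenseBlock: BN -> ReLU -> 3*3 convolution -> Dropout(0.2)."""
    def __init__(self, in_channels, growth_rate=16):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1)
        self.drop = nn.Dropout2d(p=0.2)

    def forward(self, x):
        return self.drop(self.conv(self.relu(self.bn(x))))

class DenseBlock(nn.Module):
    """n_layers dense layers; each layer sees the concatenation of all
    previous feature maps, as in formula (1)."""
    def __init__(self, in_channels, n_layers, growth_rate=16):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(n_layers))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            new = layer(torch.cat(features, dim=1))   # [X_0, X_1, ..., X_{l-1}]
            features.append(new)
        return torch.cat(features, dim=1)              # input maps plus new maps

# e.g. the first block of the down-sampling path: 48 input maps, 4 layers
block = DenseBlock(in_channels=48, n_layers=4)
out = block(torch.randn(1, 48, 224, 224))
print(out.shape)  # torch.Size([1, 112, 224, 224]), i.e. 48 + 4*16 = 112
```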

The Transition Down operation is used to reduce the spatial dimensions of the feature maps; such a transition consists of Batch Normalization, ReLU, a 1*1 convolution and a 2*2 pooling operation, where the 1*1 convolution is used to preserve the number of feature maps and the 2*2 pooling operation is used to reduce the resolution of the feature maps. As the number of layers increases, the number of features grows linearly; the pooling operation, however, effectively reduces the resolution of the feature maps, so the spatial resolution is reduced by pooling to compensate for the growth in the number of feature maps caused by the increasing number of layers.

The role of the Transition Up operation is to recover the spatial resolution of the input image; such a transition consists of one transposed convolution, which is applied only to the feature maps of the last DenseBlock, because the last DenseBlock integrates the information of all previous DenseBlocks.
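A minimal PyTorch-style sketch of these two transitions follows; the max-pooling choice and the 3*3 stride-2 transposed-convolution kernel are assumptions borrowed from the FC-DenseNet (Tiramisu) design rather than details stated here:

```python
import torch
import torch.nn as nn

class TransitionDown(nn.Module):
    """BN -> ReLU -> 1*1 conv (keeps the channel count) -> 2*2 pooling
    (halves the spatial resolution)."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.MaxPool2d(kernel_size=2, stride=2))

    def forward(self, x):
        return self.block(x)

class TransitionUp(nn.Module):
    """A single transposed convolution that doubles the spatial resolution
    of the feature maps produced by the last DenseBlock."""
    def __init__(self, channels):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(channels, channels, kernel_size=3,
                                         stride=2, padding=1, output_padding=1)

    def forward(self, x):
        return self.deconv(x)

x = torch.randn(1, 112, 224, 224)
down = TransitionDown(112)(x)   # -> [1, 112, 112, 112]
up = TransitionUp(112)(down)    # -> [1, 112, 224, 224]
print(down.shape, up.shape)
```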

The last layer of the down-sampling path is called the Bottleneck; the Bottleneck is actually a DenseBlock composed of 15 layers, whose advantage is that it alleviates vanishing gradients and greatly reduces the amount of computation.

The FC-DenseNet103 network model uses Filter Concatenation to link the feature maps along the depth dimension.

Another embodiment provided by the present invention is as follows:

The specific collection method of the data set collection process is to let the UAV hover near a pavement crack and then adjust the attitude of the area-array camera through the gimbal on the UAV so that the camera lens is parallel to the surface of the pavement crack, with the lens kept roughly 30 cm from the crack surface. After the attitude and distance of the camera have been adjusted, the UAV changes from hovering to flying smoothly along the direction of the pavement crack while photographing continuously, as shown in Figure 1.

The image semantic segmentation annotation tool used in this embodiment is LabelMe, a pixel-level semantic segmentation annotation tool. The specific annotation method is to mark the cracks in an image in green, i.e. RGB color (0, 255, 0), and to set all distractors other than cracks as well as the background to black, i.e. RGB color (0, 0, 0). Crack images and the visualized labels of the crack images are shown in Figure 2, where the first row is the original images and the second row is the visualized crack labels.
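A minimal sketch of turning such a LabelMe-style color label into a binary crack mask (the file name is hypothetical):

```python
import numpy as np
from PIL import Image

# Label convention of this embodiment: crack pixels are (0, 255, 0),
# background and all distractors are (0, 0, 0).
label = np.array(Image.open("crack_label.png").convert("RGB"))   # hypothetical file
crack_mask = np.all(label == np.array([0, 255, 0]), axis=-1)     # True where crack
print("crack pixels:", int(crack_mask.sum()))
```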

The data set is augmented by data augmentation, expanding the database to 4096 times its original size. Part of the augmented data set is shown in Figure 3, where the first row is the original images, the second row is the images randomly cropped from the originals, the third row is the horizontally flipped versions of the cropped images, and the fourth row is the vertically flipped versions of the cropped images.

The prepared data set is fed into the FC-DenseNet103 model for training. First, the input data are processed with a 3*3 convolution kernel. Since the original images are color images with three RGB channels, m of the input image is 3; after the 3*3 convolution operation, m becomes 48.

The convolved images are fed into the DenseBlock module containing 4 layers. Each DenseBlock is an iterative concatenation of the preceding feature maps, so after this module m is the sum of the number of feature maps after the previous convolution and the number of newly added feature maps, i.e. 48 + 4*16 = 112.

A Transition Down is then performed. After a DenseBlock the number of features grows linearly; performing a Transition Down effectively reduces the resolution of the feature maps to compensate for the rapid growth in the number of feature maps caused by the increasing number of layers, which effectively avoids an information explosion. The Transition Down only reduces the resolution of the feature maps and does not change their number, so m is still 112.

Only the number of layers of the DenseBlock is changed, setting it to 5, 7, 10 and 12 in turn, and the steps of feeding the images into the DenseBlock module and performing the Transition Down are repeated 4 times. Under the same calculation principle, m becomes 192, 304, 464 and 656 in turn.
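As a quick check of these figures, assuming the growth rate of 16 new feature maps per layer implied by the arithmetic 48 + 4*16 above, the down-sampling path gives:

```python
# Feature-map count m along the down-sampling path (growth rate 16 assumed).
m = 48                               # after the first 3*3 convolution
for n_layers in [4, 5, 7, 10, 12]:   # DenseBlock sizes in the down path
    m += n_layers * 16               # each layer adds 16 feature maps
    print(m)                         # 112, 192, 304, 464, 656
# The 15-layer Bottleneck then gives 656 + 15*16 = 896 feature maps.
```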

The result is fed into the Bottleneck. The Bottleneck is actually a DenseBlock composed of 15 layers; therefore, the number of feature maps is still the concatenation of the previous feature maps, i.e. m = 656 + 15*16 = 896.

The output of the previous layer is fed into the up-sampling path composed of Transition Up and DenseBlock; the DenseBlock has 12 layers, corresponding to the down-sampling path. The feature maps generated by the previous DenseBlock are up-sampled by the transposed convolution and then connected across layers with the feature maps of the same resolution from the down-sampling process, in order to make up for the detail features lost during pooling. To avoid an explosion of feature maps, the input of the DenseBlock is not concatenated to its output at this stage; therefore m now consists of three parts: the number of feature maps from the Transition Up, the number of feature maps of the same resolution from the down-sampling path, and the number of feature maps generated in the new DenseBlock, i.e. m = 15*16 + 656 + 12*16 = 1088.

The number of layers of the DenseBlock is set to 10, 7, 5 and 4 in turn, and feeding the output of the previous layer into the up-sampling path composed of Transition Up and DenseBlock is repeated; the calculated m becomes 816, 578, 384 and 256 in turn.

A 1*1 convolution operation is applied to the image, reducing m to the number of classes. Since the method is used for the extraction of bridge cracks, there are only two classes, crack and non-crack, so at this point m = 2.

Finally, the result is fed into the softmax layer for decision, outputting the probabilities of crack and non-crack. If the crack probability is greater than the non-crack probability, the pixel is judged to be a crack; if the crack probability is less than the non-crack probability, the pixel is judged to be non-crack.
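A minimal sketch of this per-pixel decision on the 2-channel output; the channel order (crack first) is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 2, 224, 224)     # output of the final 1*1 convolution, m = 2
probs = F.softmax(logits, dim=1)         # per-pixel crack / non-crack probabilities
crack_mask = probs[:, 0] > probs[:, 1]   # True where the crack probability is larger
print(crack_mask.shape)                  # torch.Size([1, 224, 224])
```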

The trained model is used to test the test set. Some of the test results obtained are shown in Figure 9, where the first and third rows are the original images and the second and fourth rows are the test results.

The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, and it should not be concluded that the specific implementation of the present invention is limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions can be made without departing from the concept of the present invention, and all of them shall be regarded as falling within the protection scope of the present invention.

Claims (10)

CN201810309151.3A | 2018-04-09 | 2018-04-09 | A Crack Detection and Segmentation Method for Bridge Pavement Based on Semantic Segmentation | Pending | CN108520516A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810309151.3A (CN108520516A, en) | 2018-04-09 | 2018-04-09 | A Crack Detection and Segmentation Method for Bridge Pavement Based on Semantic Segmentation

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810309151.3A (CN108520516A, en) | 2018-04-09 | 2018-04-09 | A Crack Detection and Segmentation Method for Bridge Pavement Based on Semantic Segmentation

Publications (1)

Publication Number | Publication Date
CN108520516A (en) | 2018-09-11

Family

ID=63430737

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810309151.3A (Pending, CN108520516A, en) | A Crack Detection and Segmentation Method for Bridge Pavement Based on Semantic Segmentation | 2018-04-09 | 2018-04-09

Country Status (1)

Country | Link
CN (1) | CN108520516A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN106910186A (en) * | 2017-01-13 | 2017-06-30 | Shaanxi Normal University | A kind of Bridge Crack detection localization method based on CNN deep learnings
CN107133960A (en) * | 2017-04-21 | 2017-09-05 | Wuhan University | Image crack dividing method based on depth convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SIMON JEGOU et al.: "The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation", 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops *



Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication

Application publication date: 2018-09-11

