Line icing detection method based on deep learning

Technical Field
The invention relates to the technical field of image processing, in particular to a line icing detection method.
Background
Icing is a common natural disaster. In severe cases it can cause flashover and outage of a power transmission line or the collapse of a tower, threatening the safe and stable operation of critical infrastructure such as the power grid. Existing icing detection methods fall into two categories: on-site field measurement and remote image-based detection. On-site measurement mainly includes the direct measurement method, the weighing method and the like; almost all of these rely on line inspection staff to manually measure information such as the ice coating thickness of a line, or require additional devices such as tension sensors to be installed on monitoring facilities. These methods generally suffer from heavy workload, high cost and complex operation, so they are difficult to apply flexibly to line icing monitoring in real environments.
Image-based icing detection installs an image acquisition device on a tower or other monitoring facility, then acquires and analyzes icing images to judge the icing state of the line. One method measures the ice coating thickness from line icing images acquired at multiple viewing angles by an unmanned aerial vehicle; although a drone can flexibly capture the icing state from more viewing angles, the implementation cost is high and all-weather real-time monitoring is difficult to realize. Another method extracts line edge information from the icing image with wavelet transform and morphological edge detection to confirm the icing state, but it is easily disturbed by the environmental background. A further method improves image quality with median filtering and image enhancement and identifies line edges with the Canny edge detection algorithm; although this improves matters somewhat, interference from complex background noise is still hard to avoid completely.
In deep-learning-based line icing detection, identifying and segmenting the icing region in the icing image with a semantic segmentation model suppresses background noise markedly better than edge-detection algorithms. Moreover, most image-based methods for obtaining ice coating thickness ignore the influence of ice density, yet the same coating thickness of different ice types affects the line very differently, so fully exploiting density information is critical for application in real environments. A convolutional neural network based on deep learning can reliably judge the icing type in an image, thereby providing the density information needed to compute the icing thickness, and acquiring the thickness at different angles from different viewing angles allows the uniform icing thickness around the line to be computed better. Fully utilizing deep learning models can therefore significantly improve the performance of line ice-thickness detection.
Disclosure of Invention
The invention aims to solve the problems of the background art and provides a line icing detection method that identifies the icing type, calculates the equivalent icing thickness of the line using semantic segmentation, and optimizes the thickness calculation for conditions such as night and a side view in which the line is invisible, so that line icing detection can be realized in complex environments.
The invention adopts the following technical scheme for solving the technical problems:
A line icing detection method based on deep learning specifically comprises the following steps:
Step S1, acquiring original icing images of the line captured by the icing monitoring equipment, and constructing the related data set;
Step S2, enhancing the brightness features of the original icing image by histogram equalization, taking the enhanced image as the brightness feature map, computing the average brightness value and brightness standard deviation of the enhanced image, encoding each of the two statistics into a feature tensor consistent with the dimensions of the brightness feature map, and concatenating them with the brightness feature map along the channel dimension to obtain the composite brightness feature map;
Step S3, processing the original icing image with the horizontal local binary pattern H-LBP to obtain a roughness texture feature map, obtaining the contrast and homogeneity of the original icing image from the gray-level co-occurrence matrix, encoding each of the two statistics into a feature tensor consistent with the dimensions of the roughness texture feature map, and concatenating them with the roughness texture feature map along the channel dimension to obtain the composite roughness feature map;
Step S4, constructing a multi-branch icing type recognition model IceNet-T comprising a trunk branch, a brightness branch and a roughness branch; inputting the original icing image into the trunk branch, the composite brightness feature map into the brightness branch and the composite roughness feature map into the roughness branch; and fusing the features extracted by the branches to obtain the icing type recognition result;
Step S5, detecting and segmenting the line icing region in the original icing image with the semantic segmentation model SCTNet, and improving segmentation accuracy by optimizing the model's segmentation result with the multi-scale conditional random field MSCRF (Multi-Scale Conditional Random Field);
Step S6, calculating the horizontal and vertical ice coating thicknesses from the segmentation results of the main-view and side-view lines in the original icing image, obtaining the ice density from the icing type recognition result, and calculating the equivalent icing thickness of the current line by the equivalent area method, including optimized calculations for environmental states such as a visible side view, an invisible side view and low light at night; the horizontal ice coating thickness corresponds to the long-diameter parameter a and the vertical ice coating thickness to the short-diameter parameter b.
As a further preferred scheme of the deep-learning-based line icing detection method, step S2 obtains the composite brightness feature map through the following steps:
Step S2.1, enhancing the brightness feature map of the original icing image by histogram equalization. The enhancement formula is:

I'(i,j) = round((L - 1) · CDF(I(i,j)))

where I'(i,j) is the gray value of the pixel at position (i,j) in the equalized image, I(i,j) is the gray value of the pixel at position (i,j) in the original icing image, L is the number of gray levels (for a b-bit image, L = 2^b), n_k is the number of pixels with gray value k in the original icing image, N is the total number of pixels of the image, and CDF(k) = (1/N) · Σ_{m=0}^{k} n_m is the cumulative distribution function of the pixels whose gray value is less than or equal to k;
The average brightness value and brightness standard deviation of the enhanced brightness feature map are then calculated:

μ = (1/(H·W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} I'(i,j)
σ = sqrt((1/(H·W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} (I'(i,j) - μ)²)

where μ is the average brightness value, σ is the brightness standard deviation, H, W and C are the height, width and number of channels of the input image, and I'(i,j) is the brightness value at position (i,j) of the input image;
Step S2.2, constructing feature tensors from the average brightness value and the brightness standard deviation, i.e. expanding the two scalars into matrices of the same size as the image:

F_μ = μ · 1_{H×W},  F_σ = σ · 1_{H×W}

where F_μ is the average-brightness feature tensor consistent with the original dimensions, F_σ is the brightness-standard-deviation feature tensor consistent with the original dimensions, and 1_{H×W} is a matrix whose elements are all 1. The expanded average-brightness and brightness-standard-deviation feature tensors are concatenated with the brightness feature map along the channel dimension:

X_lum = Concat(E, F_μ, F_σ)

where X_lum is the resulting composite brightness feature tensor, Concat is the channel-dimension concatenation function, and E, F_μ and F_σ are the enhanced brightness feature map, the expanded average-brightness feature tensor and the expanded brightness-standard-deviation feature tensor, respectively.
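As an illustration, the composite brightness feature map of step S2 can be sketched in a few lines of NumPy. The function name, the use of a single-channel luminance image, and the rounding convention are assumptions made for this sketch, not part of the invention:

```python
import numpy as np

def composite_luminance_features(gray: np.ndarray, levels: int = 256) -> np.ndarray:
    """Sketch of step S2: equalize, compute mean/std, stack as channels.

    gray: 2-D uint8 array (H, W) holding the luminance channel.
    Returns an (H, W, 3) float array: [equalized image, mean map, std map].
    """
    h, w = gray.shape
    n = h * w
    # Histogram equalization: I'(i,j) = round((L - 1) * CDF(I(i,j)))
    hist = np.bincount(gray.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / n
    equalized = np.round((levels - 1) * cdf[gray]).astype(np.float64)
    # Scalar statistics of the enhanced image
    mu = equalized.mean()
    sigma = equalized.std()
    # Broadcast the scalars to full-size channels and concatenate channel-wise
    mu_map = np.full((h, w), mu)
    sigma_map = np.full((h, w), sigma)
    return np.stack([equalized, mu_map, sigma_map], axis=-1)
```

In a real pipeline the three channels would be fed to the brightness branch of the recognition model; the constant channels simply make the global statistics spatially available to convolutions.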
As a further preferred scheme of the deep-learning-based line icing detection method, step S3 obtains the composite roughness feature map through the following steps:
Step S3.1, processing the original icing image with the horizontal local binary pattern H-LBP to obtain the roughness texture feature map; the horizontal local binary pattern focuses on texture information in the horizontal direction:

H-LBP(i,j) = Σ_{p=1}^{P} s(I(i,j-p) - I(i,j)) · 2^{p-1} + Σ_{p=1}^{P} s(I(i,j+p) - I(i,j)) · 2^{P+p-1}

where H-LBP(i,j) is the H-LBP result at position (i,j), I(i,j) is the pixel value at position (i,j) in the image, P is the horizontal neighborhood range, and s(x) is the binarization function:

s(x) = 1 if x ≥ 0, and s(x) = 0 otherwise;
The H-LBP formula consists of two parts: to the left of the plus sign the target pixel is compared with its left horizontal neighborhood pixels, and to the right of the plus sign with its right horizontal neighborhood pixels. The comparison results are binarized and given power-of-two weights (2^{p-1} for the left comparisons, 2^{P+p-1} for the right), encoding the result into a binary number; the H-LBP result at each position is an integer representing the horizontal texture pattern. The H-LBP feature map T computed for an H×W image can then be expressed as T = {H-LBP(i,j) | 1 ≤ i ≤ H, 1 ≤ j ≤ W};
Step S3.2, in addition to the roughness texture feature map obtained by the H-LBP, feature tensors are formed by expanding the contrast and homogeneity computed from the gray-level co-occurrence matrix; the contrast characterizes the roughness of the icing image and the homogeneity its smoothness:

Ct = Σ_{x=0}^{L-1} Σ_{y=0}^{L-1} (x - y)² · P(x,y)
Hg = Σ_{x=0}^{L-1} Σ_{y=0}^{L-1} P(x,y) / (1 + (x - y)²)

where Ct is the contrast, Hg is the homogeneity, L is the number of gray levels, and P(x,y) is the gray-level co-occurrence matrix, i.e. the joint occurrence probability of pixel pairs with gray values x and y at a given distance and direction;
Step S3.3, after the roughness texture feature map T and the contrast Ct and homogeneity Hg of the original icing image are obtained, Ct and Hg are expanded to the same dimensions as T so that they can be concatenated with T along the channel dimension:

F_Ct = Ct · 1_{H×W},  F_Hg = Hg · 1_{H×W}

where H and W are the height and width of the roughness texture feature map T, F_Ct is the expanded contrast feature tensor, F_Hg is the expanded homogeneity feature tensor, and 1_{H×W} is a matrix whose elements are all 1. The roughness texture feature map is concatenated with the expanded contrast and homogeneity feature tensors along the channel dimension:

X_rough = Concat(T, F_Ct, F_Hg)

where X_rough is the resulting composite roughness feature map and Concat is the channel-dimension concatenation function; the concatenation enhances the texture feature expression capability of the image.
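The two roughness descriptors of step S3 can be sketched as follows. The edge padding, the gray-level quantization, and the function names are assumptions for this sketch; the homogeneity definition used is the common GLCM form with a (1 + (x - y)²) denominator:

```python
import numpy as np

def h_lbp(gray: np.ndarray, P: int = 2) -> np.ndarray:
    """Horizontal LBP: compare each pixel with P neighbours to its left and
    P to its right on the same row; binarise and pack into one integer.
    Borders are edge-padded (an assumption of this sketch)."""
    padded = np.pad(gray.astype(np.int32), ((0, 0), (P, P)), mode="edge")
    h, w = gray.shape
    code = np.zeros((h, w), dtype=np.int32)
    center = padded[:, P:P + w]
    for p in range(1, P + 1):
        left = padded[:, P - p:P - p + w]
        right = padded[:, P + p:P + p + w]
        code += (left >= center).astype(np.int32) << (p - 1)       # left half
        code += (right >= center).astype(np.int32) << (P + p - 1)  # right half
    return code

def glcm_contrast_homogeneity(gray: np.ndarray, levels: int = 8):
    """Contrast and homogeneity from a horizontal-offset GLCM."""
    q = (gray.astype(np.int32) * levels) // 256        # quantise to few levels
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()                                 # joint probabilities P(x,y)
    x, y = np.indices(glcm.shape)
    contrast = ((x - y) ** 2 * glcm).sum()
    homogeneity = (glcm / (1.0 + (x - y) ** 2)).sum()
    return contrast, homogeneity
```

The composite roughness feature map would then stack the H-LBP map with constant contrast and homogeneity channels, mirroring the brightness case.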
As a further preferred scheme of the deep-learning-based line icing detection method, in step S4 the three-branch icing type recognition model IceNet-T is constructed from the original icing image together with the composite brightness feature map and composite roughness feature map extracted in steps S2 and S3, and the final icing type recognition result is obtained by extracting and fusing the three branch results, specifically as follows:
Step S4.1, inputting the original icing image into the trunk branch. The trunk branch consists of the backbone feature extraction network of a deep-learning transfer model, adapted to the icing type recognition task by fine-tuning its output layer structure. The extraction of global icing features can be expressed as:

F_main = ReLU(W_fc · Backbone(I) + b_fc)

where I is the original icing image input, Backbone(·) is the backbone feature extraction process of the transfer model, W_fc is the fully connected output layer that replaces that of the original transfer model, b_fc is the new bias term, and ReLU is the activation function; the result F_main is the global feature extracted by the trunk branch;
Step S4.2, inputting the composite brightness feature map and the composite roughness feature map into the brightness branch and the roughness branch, respectively. The two branches share the same network structure apart from parameter adjustments for the dimensions of their respective composite feature maps: the feature extraction part of each consists of 1 initialization module, 4 main feature extraction modules and 1 classifier, and each main feature extraction module contains an SE (Squeeze-and-Excitation) channel attention layer. The feature extraction can be expressed as:

F_lum = Classifier_lum(M4(M3(M2(M1(Init(X_lum))))))
F_rough = Classifier_rough(M4(M3(M2(M1(Init(X_rough))))))

where X_lum is the composite brightness feature map, X_rough is the composite roughness feature map, Init is the initialization module, M is a main feature extraction module (superscripts distinguish the modules, subscripts the branch to which a classifier belongs); the brightness branch and the roughness branch finally yield the feature maps F_lum and F_rough;
Step S4.3, after the extraction results F_main, F_lum and F_rough of the three branches are obtained, the features extracted by the different branches are fused by weighted summation:

F = w1·F_main + w2·F_lum + w3·F_rough,  result = Softmax(F)

where F is the fused feature of the three branches, w1, w2 and w3 are the weight parameters of the trunk branch, brightness branch and roughness branch (their sum is 1), and result is the probability distribution obtained by applying the Softmax function to the fused feature; the class with the largest probability is the final icing type recognition result.
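The fusion step above reduces to a weighted sum followed by Softmax. A minimal sketch, in which the weight values and the three-class logits are illustrative assumptions:

```python
import numpy as np

def fuse_branches(f_main, f_lum, f_rough, w=(0.5, 0.25, 0.25)):
    """Weighted fusion of three branch logits followed by Softmax (step S4.3).

    w holds the trunk / brightness / roughness weights and must sum to 1;
    the default split is an assumption of this sketch.
    """
    assert abs(sum(w) - 1.0) < 1e-9
    fused = w[0] * f_main + w[1] * f_lum + w[2] * f_rough
    exp = np.exp(fused - fused.max())          # numerically stable Softmax
    probs = exp / exp.sum()
    return probs, int(np.argmax(probs))        # distribution and predicted class
```

In practice the three weights would be tuned on a validation set, or learned jointly with the branches.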
As a further preferred scheme of the deep-learning-based line icing detection method, in step S5 the multi-scale conditional random field MSCRF used to optimize the semantic segmentation result specifically comprises the following steps:
Step S5.1, extracting semantic segmentation feature maps at different scales from the multi-layer encoder-decoder network of the SCTNet structure. MSCRF refines and optimizes the segmentation result by combining the spatial and color information of the multi-scale feature maps. Because the feature maps extracted by different convolution layers of the semantic segmentation model have different dimensions, they must first be adjusted to a uniform size by preprocessing before being input into MSCRF together with the target feature map for iterative optimization:

F̂_i = Resize(F_i, H, W), i = 1, …, n

where F̂_i is the adjusted multi-scale feature map, Resize is the function that adjusts the dimensions of a feature map F_i at a given scale, H and W are the height and width of the target dimensions, and n is the number of feature maps to be processed; the final multi-scale feature maps can be expressed as the set {F̂_1, F̂_2, …, F̂_n};
Step S5.2, MSCRF optimizes the segmentation result of the target feature map by minimizing an energy function, repeatedly updating the labels between adjacent pixels in regions with similar color and position. For each scale s the iterative update takes the form:

Q_s^{t+1}(i) = U(i) + Σ_j (w_sp · k_sp(i,j) + w_bi · k_bi(i,j; F̂_s)) · Q_s^t(j)

where U (Unary) is the target feature map to be optimized, F̂_s are the feature maps at the different scales, n is the number of feature maps to be processed, w_sp and w_bi are the spatial Gaussian weight and the bilateral Gaussian weight, capturing the spatial and the color information in the feature maps respectively, and the weights are normalized with a Softmax function. After k iterations the optimized feature map is output as:

Q = Reshape(Concat(Q_1^k, Q_2^k, …, Q_n^k))

where Concat concatenates the iteration results of the feature maps at different scales along the channel dimension and Reshape readjusts the concatenated result so that its channel dimension matches that of the original input target feature map U, finally yielding the optimized result Q.
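A heavily simplified, illustrative sketch of the two MSCRF ingredients described above: aligning multi-scale maps to one size, then a mean-field-style smoothing pass whose weights combine a spatial Gaussian and a color-similarity Gaussian. Every function name is hypothetical, the 3x3 neighbourhood and wrap-around borders are simplifications, and the real MSCRF energy minimization is more involved:

```python
import numpy as np

def resize_nearest(f: np.ndarray, H: int, W: int) -> np.ndarray:
    """Nearest-neighbour resize of a 2-D map to (H, W); stands in for Resize."""
    h, w = f.shape
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return f[rows[:, None], cols[None, :]]

def mscrf_refine(unary: np.ndarray, feats, iters: int = 3,
                 sigma_sp: float = 1.0, sigma_col: float = 0.25) -> np.ndarray:
    """Toy refinement: each iteration re-estimates a pixel's score as a
    weighted average of its 3x3 neighbourhood, weighting by a spatial
    Gaussian (distance) and a colour Gaussian (similarity in the fused
    multi-scale guide map). Edges wrap around via np.roll."""
    H, W = unary.shape
    guide = np.mean([resize_nearest(f, H, W) for f in feats], axis=0)
    q = unary.astype(np.float64).copy()
    for _ in range(iters):
        acc = np.zeros_like(q)
        wsum = np.zeros_like(q)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                shifted = np.roll(np.roll(q, dy, axis=0), dx, axis=1)
                g_shift = np.roll(np.roll(guide, dy, axis=0), dx, axis=1)
                w_sp = np.exp(-(dy * dy + dx * dx) / (2 * sigma_sp ** 2))
                w_col = np.exp(-((guide - g_shift) ** 2) / (2 * sigma_col ** 2))
                acc += w_sp * w_col * shifted
                wsum += w_sp * w_col
        q = acc / wsum
    return q
```

The design intent mirrors the text: pixels close in space and similar in the multi-scale color features pull each other's scores together, sharpening segmentation boundaries.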
As a further preferred scheme of the deep-learning-based line icing detection method, in step S6 the equivalent area method is used to calculate the equivalent icing thickness of the line, comprising the following steps:
Step S6.1, segmenting the main-view and side-view lines in the original icing image with the semantic segmentation model to obtain the segmented line regions. For the same line segment at the same viewing angle, the pixel area of the segmented region is proportional to the apparent width of the line, so, with the bare-wire diameter D at this angle known and the pixel area S0 of the bare line and the pixel area S1 in the ice-covered state obtained by semantic segmentation, the relation 2a / D = S1 / S0 holds, and the horizontal icing extent in the ice-covered state can be calculated as:

a = (D / 2) · (S1 / S0)

where a is the long-diameter parameter required by the equivalent area method. The short-diameter parameter b, i.e. the icing extent in the vertical direction, is obtained in the same way from the segmentation result of the side-view line in the original icing image;
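A numeric sketch of step S6.1, under the stated assumption that pixel area scales with apparent width so that 2a / D = S1 / S0 (the exact constants in the patent's lost formula are not recoverable from the text):

```python
def long_diameter_from_areas(d_bare: float, s_bare_px: float, s_ice_px: float) -> float:
    """Long-diameter parameter a from segmentation pixel areas (step S6.1).

    For the same line segment at the same viewing angle, pixel area is
    proportional to apparent width, so 2a / d_bare = s_ice / s_bare.
    With no ice (s_ice == s_bare) this reduces to the bare-wire radius.
    """
    return (d_bare / 2.0) * (s_ice_px / s_bare_px)
```

For example, a 20 mm bare wire whose iced region covers 1.5 times the bare-line pixel area yields a = 15 mm; the same formula applied to the side view yields b.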
Step S6.2, the equivalent area method makes the irregular cross section of the iced line equivalent to a standard circle of the same area; subtracting the bare-wire radius from the radius of the standard circle yields the equivalent uniform icing thickness of the line. First the irregular icing cross section is treated as a regular ellipse, an adjustment parameter brings the area of the ellipse close to that of the original irregular cross section, the ellipse is then made equivalent to a standard circle of the same cross-sectional area, and finally the equivalent uniform icing thickness is obtained:

π·(r + h)² − π·r² = ρ·k·(π·a·b − π·r²)

where h is the required equivalent uniform icing thickness, r is the radius of the bare wire, ρ is the icing density, a is the long-diameter parameter and b is the short-diameter parameter (here taken as the semi-axes of the elliptical icing cross section); the left side of the equation is the equivalent standard circle area minus the bare-wire cross section, the bracket on the right side is the elliptical area minus the bare-wire cross section, i.e. the regular icing cross section outside the bare wire, and the error is regulated by the icing density ρ and a constant parameter k. Solving for h gives the equivalent uniform icing thickness:

h = sqrt(r² + ρ·k·(a·b − r²)) − r

where the icing density ρ can be estimated from the icing type recognition result;
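The equal-area equation of step S6.2 can be checked numerically. This sketch treats a and b as the semi-axes of the iced cross-section ellipse and ρ and k as the density ratio and adjustment constant; these readings are assumptions, since the patent's formula image did not survive extraction:

```python
import math

def equivalent_ice_thickness(a: float, b: float, r: float, rho: float,
                             k: float = 1.0) -> float:
    """Equivalent uniform icing thickness h by the equivalent area method (S6.2).

    Solves pi*((r + h)^2 - r^2) = rho * k * (pi*a*b - pi*r^2) for h.
    a, b: semi-axes of the iced cross-section; r: bare-wire radius;
    rho: density ratio of the recognised ice type; k: adjustment constant.
    """
    return math.sqrt(r * r + rho * k * (a * b - r * r)) - r
```

Sanity checks: with no ice (a = b = r) the thickness is 0, and a circular iced section of twice the wire radius gives a uniform ring of thickness r, as expected.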
Step S6.3, for the case where the side-view line is difficult to observe at night because of low light, the equivalent icing thickness formula is optimized: the icing states of the main view and side view of the same line at the last daytime moment of the same day are used to infer the subsequent night-time icing state of the side-view line, and thus the night-time short-diameter parameter;
The ratio of the short-diameter to the long-diameter parameter at the last daytime moment is estimated as k_day = b_day / a_day, where a_day is the long-diameter parameter at the last daytime moment and b_day is the short-diameter parameter at the last daytime moment;
The night-time long-diameter parameter a_night can be obtained from the icing state of the line at the night-time main view; combining it with the last-daytime ratio k_day then gives the night-time short-diameter parameter: b_night = k_day · a_night;
From the last-daytime ratio k_day and the night-time long-diameter parameter a_night, the optimized night-time equivalent icing thickness formula is obtained: h_night = sqrt(r² + ρ·k·(k_day·a_night² − r²)) − r.
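The night-time variant of step S6.3 only replaces the product a·b with k_day·a_night². A minimal sketch, reusing the equal-area reading assumed above (semi-axes a, b; density ratio rho; adjustment constant k):

```python
import math

def night_equivalent_thickness(a_night: float, a_day: float, b_day: float,
                               r: float, rho: float, k: float = 1.0) -> float:
    """Night-time equivalent icing thickness (step S6.3).

    Reuses the day-end aspect ratio k_day = b_day / a_day, so that
    b_night = k_day * a_night and a*b = k_day * a_night**2 in the
    equal-area formula.
    """
    k_day = b_day / a_day
    ab = k_day * a_night ** 2
    return math.sqrt(r * r + rho * k * (ab - r * r)) - r
```

With a circular day-end section (b_day = a_day) the result coincides with the daytime formula applied to a = b = a_night.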
Step S6.4, for the case where the side-view line is almost invisible or effective information is hard to extract because of the line layout, the near-linear relation between the long- and short-diameter parameters in the side-view-visible state is observed statistically over a large number of samples, and a polynomial function is used to describe this relation so that the side-view icing state can be computed from the main-view icing state. A large set of long/short-diameter parameter pairs {(a_i, b_i)}, i = 1, …, N, is collected in the side-view-visible state, where N is the number of samples, and the polynomial is defined as:

f(a) = c_0 + c_1·a + c_2·a² + … + c_n·a^n

where b = f(a), b is the short-diameter parameter in the side-view-visible state, a is the long-diameter parameter in the side-view-visible state, n is the order of the polynomial, and the coefficients c_0, …, c_n are determined by the least-squares method so that the function approximates the data points. Fitting the large number of (a_i, b_i) pairs yields the rule function f; in the side-view-invisible state the short-diameter parameter is then computed from the detected long-diameter parameter a as b = f(a). From the rule function f and the long-diameter parameter a, the equivalent icing thickness formula in the side-view-invisible state becomes: h = sqrt(r² + ρ·k·(a·f(a) − r²)) − r.
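The least-squares polynomial fit of step S6.4 maps directly onto NumPy's `polyfit`/`poly1d`. The sample values and the near-linear relation below are synthetic stand-ins for the patent's collected data set:

```python
import numpy as np

# Hypothetical paired samples (a_i, b_i) collected while the side view
# was visible; the values and the near-linear relation are synthetic.
a_samples = np.array([10.0, 14.0, 18.0, 22.0, 26.0])
b_samples = 0.6 * a_samples + 1.2

# Least-squares polynomial fit b ~= f(a); the order (deg=2) is illustrative.
coeffs = np.polyfit(a_samples, b_samples, deg=2)
f = np.poly1d(coeffs)

# With the side view invisible, estimate the short-diameter parameter
# from the long-diameter parameter measured on the main view.
a_measured = 20.0
b_estimated = float(f(a_measured))
```

The estimated b would then be substituted into the equivalent thickness formula in place of the unobservable side-view measurement.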
A further preferred scheme of the deep-learning-based line icing detection method comprises a line icing detection system, which includes a preprocessing module, an icing type identification module, an icing segmentation module and an equivalent icing thickness calculation module;
The preprocessing module acquires the original icing image captured by the icing monitoring equipment and obtains the composite brightness feature map and the composite roughness feature map through the brightness preprocessing module and the roughness preprocessing module, respectively;
The icing type identification module is the three-branch convolutional neural network IceNet-T comprising a trunk branch, a brightness branch and a roughness branch; it inputs the original icing image into the trunk branch, inputs the composite brightness feature map and composite roughness feature map obtained by preprocessing into the brightness branch and roughness branch respectively, and fuses the icing features extracted by the three branches to obtain the icing type identification result;
The icing segmentation module trains the semantic segmentation model SCTNet with the annotated data set, optimizes the icing segmentation result with the multi-scale conditional random field MSCRF in the segmentation-result optimization module SegBoost to obtain a more accurate icing region, and extracts the pixel areas of the main-view and side-view icing regions;
The equivalent icing thickness calculation module obtains the icing density parameter from the icing type identification result, calculates the long- and short-diameter parameters required by the equivalent icing formula from the pixel areas of the main-view and side-view icing regions identified by the icing segmentation module, and then calculates the uniform equivalent icing thickness; the thickness formula is optimized for night-time and side-view-invisible lines, finally yielding equivalent icing thickness results under different environmental conditions.
Compared with the prior art, the technical scheme provided by the invention has the following technical effects:
In the line icing detection method, a multi-branch convolutional neural network model extracts and fuses multiple icing features: the brightness branch focuses on illumination changes and overall brightness patterns in the image and can recognize the reflective characteristics of different icing types, while the roughness branch captures surface texture details reflecting the physical textures of different icing types; the multi-branch design ensures comprehensive feature extraction and improves the ability to distinguish icing types;
The method segments and extracts the icing region in the image with semantic segmentation, and the multi-scale conditional random field MSCRF exploits multi-scale spatial and color feature information during segmentation to optimize the segmentation result; it can thus capture fine changes in the icing and improve detection accuracy without increasing the model training cost, so that the pixel area of the icing region is extracted more accurately and the equivalent icing thickness is calculated accurately;
The method not only calculates the equivalent icing thickness of the line under conventional conditions, but also calculates it at night by exploiting the daytime icing law, enabling 24-hour monitoring of the line icing state. In addition, by statistically analyzing the near-linear relation between the horizontal icing thickness (long diameter a) and the vertical icing thickness (short diameter b) in the side-view-visible state, the method can calculate the equivalent icing thickness of a single line when the side view is invisible, and can likewise infer the main-view icing state from the side-view icing state, so it can be adjusted flexibly to the shooting conditions of the actual icing monitoring equipment.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the line icing detection method according to the present invention;
FIG. 2 is a diagram showing the overall model structure of the line icing detection method of the present invention;
FIG. 3 (a) is a schematic diagram of a main view line and a side view line according to an embodiment of the present invention;
FIG. 3 (b) is a schematic diagram of the major-minor diameters of the cross section of the ice coating of the circuit according to the embodiment of the invention;
FIG. 4 is a diagram of a IceNet-T model of the line icing type identification module of the present invention;
FIG. 5 is a diagram of a model structure of an ice-coating segmentation module according to the present invention;
FIG. 6 is a graph showing the verification accuracy and loss value of the IceNet-T model comparison experiment of the present invention;
FIG. 7 is a graph showing the effect of the ice coating segmentation module on line segmentation of the main view and the side view of the original ice coating image;
fig. 8 is a display diagram of a detection result of an actual icing image according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the application are further elaborated below in conjunction with the accompanying drawings; the described embodiments are only a part of the embodiments to which the present invention relates, and all non-innovative embodiments derived from this example by others skilled in the art are intended to fall within the scope of the invention. The step numbers in the embodiments of the present invention are set for convenience of illustration and do not limit the order between the steps; the execution order of the steps can be adapted according to the understanding of those skilled in the art.
In one embodiment of the present invention, the line icing detection method, as shown in FIG. 1, includes the following steps:
Step S1, acquiring original icing images of the line captured by the icing monitoring equipment, and constructing the related data set;
Step S2, enhancing the brightness features of the original icing image by histogram equalization, taking the enhanced image as the brightness feature map, computing the average brightness value and brightness standard deviation of the enhanced image, encoding each of the two statistics into a feature tensor consistent with the dimensions of the brightness feature map, and concatenating them with the brightness feature map along the channel dimension to obtain the composite brightness feature map, specifically comprising the following steps:
Step S2.1, enhancing the brightness feature map of the original icing image by the histogram equalization method, with the enhancement formula:

g(i,j) = round( (L − 1) · CDF( f(i,j) ) ),  CDF(k) = (1/N) · Σ_{m=0}^{k} n_m

wherein g(i,j) is the pixel gray value at position (i,j) in the equalized image, f(i,j) is the pixel gray value at position (i,j) in the original icing image, L is the number of gray levels (for a b-bit image, L = 2^b), n_m is the number of pixels in the original icing image with gray value m, N is the total number of pixels of the image, and CDF(k) is the cumulative distribution function of pixels whose gray value is smaller than or equal to k;
The average brightness value and the brightness standard deviation of the original icing image are calculated as:

μ = (1 / (H·W·C)) · Σ_{i,j,c} I(i,j,c)
σ = sqrt( (1 / (H·W·C)) · Σ_{i,j,c} ( I(i,j,c) − μ )² )

wherein μ represents the average brightness value, σ represents the brightness standard deviation, H, W and C are the height, width and channel number of the input image, and I(i,j,c) represents the brightness value of the input image at position (i,j,c);
Step S2.2, constructing feature tensors from the average brightness value and the brightness standard deviation, namely expanding the two scalars into matrices of the same size as the image:

B_μ = μ · 1_{H×W},  B_σ = σ · 1_{H×W}

wherein B_μ represents the average-brightness feature tensor consistent with the original dimensions, B_σ represents the brightness-standard-deviation feature tensor consistent with the original dimensions, and 1_{H×W} represents a matrix with all elements equal to 1. The expanded tensors are then concatenated with the brightness feature map in the channel dimension:

F_lum = Concat( E, B_μ, B_σ )

wherein F_lum represents the resulting composite brightness feature tensor, Concat represents the channel-dimension concatenation function, and E represents the enhanced brightness feature map.
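The luminance preprocessing of step S2 can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation: it assumes an 8-bit single-channel input and stacks the equalized image with broadcast mean and standard-deviation channels.

```python
import numpy as np

def composite_luminance(gray):
    """Composite luminance feature map of step S2 (illustrative sketch).

    `gray` is an HxW uint8 grayscale image. Returns an HxWx3 float array:
    channel 0 = histogram-equalized image, channel 1 = broadcast average
    brightness, channel 2 = broadcast brightness standard deviation.
    """
    L = 256                                        # gray levels for an 8-bit image
    h, w = gray.shape
    n = h * w
    hist = np.bincount(gray.ravel(), minlength=L)
    cdf = np.cumsum(hist) / n                      # CDF(k) = (1/N) * sum_{m<=k} n_m
    eq = np.round((L - 1) * cdf[gray])             # g(i,j) = round((L-1) * CDF(f(i,j)))
    mu = gray.mean()                               # average brightness
    sigma = gray.std()                             # brightness standard deviation
    ones = np.ones((h, w))
    return np.stack([eq, mu * ones, sigma * ones], axis=-1)
```

A real pipeline would equalize before computing the statistics on the enhanced image, exactly as step S2 specifies; the sketch keeps both in one function for brevity.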
Step S3, processing the original icing image with a horizontal local binary pattern (H-LBP) to obtain its roughness texture feature map, computing the contrast and homogeneity of the original icing image with a gray-level co-occurrence matrix, encoding these two statistics into feature tensors consistent with the dimensions of the roughness texture feature map, and concatenating them with the roughness texture feature map in the channel dimension to obtain a composite roughness feature map; this specifically comprises the following steps:
Step S3.1, processing the original icing image with the horizontal local binary pattern H-LBP, which focuses on texture information in the horizontal direction, to obtain the roughness texture feature map:

T(i,j) = Σ_{p=1}^{P} s( I(i, j−p) − I(i,j) ) · 2^{p−1} + Σ_{p=1}^{P} s( I(i, j+p) − I(i,j) ) · 2^{P+p−1}

wherein T(i,j) represents the H-LBP result at position (i,j), I(i,j) represents the pixel value of the image at position (i,j), P is the horizontal neighborhood range, and s is the binarization function:

s(x) = 1 if x ≥ 0, otherwise 0;
The H-LBP formula consists of two parts: the first sum compares the target pixel with its left horizontal neighbors, and the second sum compares it with its right horizontal neighbors. Each comparison result is binarized and weighted by a power of two, 2^{p−1} or 2^{P+p−1}, so that the result T(i,j) is encoded as a single integer representing the horizontal texture pattern. For an H×W image, the final H-LBP feature map is the H×W matrix T = [T(i,j)];
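The two-sided comparison above can be sketched as follows. This is an illustrative NumPy version, assuming zero padding at the image borders (the patent does not specify the border treatment):

```python
import numpy as np

def h_lbp(img, P=2):
    """Horizontal local binary pattern: compare each pixel with its P left
    and P right horizontal neighbours, binarize with s(x) = 1 if x >= 0,
    and weight the bits by powers of two."""
    img = img.astype(np.int32)
    out = np.zeros_like(img)
    for p in range(1, P + 1):
        left = np.zeros_like(img)
        left[:, p:] = img[:, :-p]                   # I(i, j-p), zero-padded border
        right = np.zeros_like(img)
        right[:, :-p] = img[:, p:]                  # I(i, j+p), zero-padded border
        out += (left >= img).astype(np.int32) << (p - 1)       # left bits: 2^(p-1)
        out += (right >= img).astype(np.int32) << (P + p - 1)  # right bits: 2^(P+p-1)
    return out
```

With P = 2 each pixel yields a 4-bit code (values 0-15), which matches the "single integer representing the horizontal texture pattern" described above.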
Step S3.2, in addition to the roughness texture feature map obtained by the H-LBP, feature tensors are formed by expanding the contrast and homogeneity computed from the gray-level co-occurrence matrix; the contrast characterizes the roughness of the icing image, and the homogeneity characterizes its smoothness:

Ct = Σ_{x=0}^{L−1} Σ_{y=0}^{L−1} (x − y)² · p(x,y)
Hg = Σ_{x=0}^{L−1} Σ_{y=0}^{L−1} p(x,y) / ( 1 + (x − y)² )

wherein Ct denotes the contrast, Hg denotes the homogeneity, L denotes the number of gray levels, and p(x,y) is the gray-level co-occurrence matrix entry, i.e. the joint occurrence probability of pixel pairs with gray values x and y at a given distance and direction;
Step S3.3, after the roughness texture feature map T and the contrast Ct and homogeneity Hg of the original icing image are obtained, Ct and Hg are expanded to the same dimensions as T so that they can be concatenated with T in the channel dimension:

Ĉt = Ct · 1_{H×W},  Ĥg = Hg · 1_{H×W}

wherein H and W are the height and width of the roughness texture feature map T, Ĉt represents the expanded contrast feature tensor, Ĥg represents the expanded homogeneity feature tensor, and 1_{H×W} is a matrix with all elements equal to 1. The roughness texture feature map is then concatenated with the expanded tensors in the channel dimension:

F_rough = Concat( T, Ĉt, Ĥg )

wherein F_rough represents the resulting composite roughness feature map and Concat is the channel-dimension concatenation function; the concatenation enhances the texture-feature expression capability of the image.
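A minimal sketch of steps S3.2-S3.3, assuming a horizontal co-occurrence offset and a coarse 8-level quantization (both illustrative choices not fixed by the patent):

```python
import numpy as np

def glcm_stats(img, levels=8, dx=1):
    """Contrast and homogeneity from a gray-level co-occurrence matrix
    computed at horizontal offset dx (illustrative sketch)."""
    q = (img.astype(np.float64) * levels / 256).astype(np.int32).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-dx].ravel(), q[:, dx:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                           # joint probability p(x, y)
    x, y = np.indices((levels, levels))
    ct = ((x - y) ** 2 * p).sum()                   # contrast: roughness
    hg = (p / (1 + (x - y) ** 2)).sum()             # homogeneity: smoothness
    return ct, hg

def composite_roughness(texture_map, ct, hg):
    """Concatenate the H-LBP texture map with broadcast Ct and Hg channels."""
    ones = np.ones_like(texture_map, dtype=np.float64)
    return np.stack([texture_map, ct * ones, hg * ones], axis=-1)
```

For a perfectly uniform image the contrast is 0 and the homogeneity is 1, consistent with their interpretation as roughness and smoothness measures.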
Step S4, constructing a multi-branch line icing type recognition model IceNet-T comprising a trunk branch, a brightness branch and a roughness branch; the original icing image is input into the trunk branch, the composite brightness feature map into the brightness branch, and the composite roughness feature map into the roughness branch, and the features extracted by the branches are fused to obtain the icing type recognition result; this specifically comprises the following steps:
Step S4.1, inputting the original icing image into the trunk branch, which is formed by the backbone feature extraction network of a deep-learning transfer model and adapted to the icing type recognition task by fine-tuning its output layer structure; the trunk branch extracts global basic icing features:

F_main = ReLU( W_fc · Φ(I) + b_fc )

wherein I represents the original icing image input, Φ represents the backbone feature extraction process of the transfer model, W_fc represents the fully connected output layer that replaces the original output layer of the transfer model, b_fc is the new bias term, and ReLU is the activation function; F_main is the global feature extracted by the trunk branch;
Step S4.2, inputting the composite brightness feature map and the composite roughness feature map into the brightness branch and the roughness branch, respectively. The two branches share the same network structure apart from the parameter adjustments needed to fit the dimensions of their respective composite feature maps; each feature extraction part comprises 1 initialization module, 4 main feature extraction modules and 1 classifier:

F_lum' = Classifier_l( M_l^4( M_l^3( M_l^2( M_l^1( Init_l( F_lum ) ) ) ) ) )
F_rough' = Classifier_r( M_r^4( M_r^3( M_r^2( M_r^1( Init_r( F_rough ) ) ) ) ) )

wherein F_lum denotes the composite brightness feature map, F_rough denotes the composite roughness feature map, Init denotes the initialization module, M denotes a main feature extraction module (each containing an SE channel attention layer, where SE is Squeeze-and-Excitation), and Classifier denotes the classifier; subscripts indicate the branch to which a module belongs and superscripts the module index. The brightness branch and roughness branch finally produce the feature vectors F_lum' and F_rough', respectively;
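The SE channel attention inside each main feature extraction module can be illustrated with plain NumPy. This is a generic Squeeze-and-Excitation sketch, not the patent's trained layer; `w1` and `w2` stand for the two small fully connected weight matrices:

```python
import numpy as np

def se_attention(feat, w1, w2):
    """Squeeze-and-Excitation channel attention (illustrative sketch).

    `feat` is an HxWxC feature map; w1 (C x C/r) and w2 (C/r x C) are the
    excitation weights. Squeeze = global average pool per channel;
    excitation = FC -> ReLU -> FC -> sigmoid; output = channel rescaling.
    """
    z = feat.mean(axis=(0, 1))                    # squeeze: channel descriptor (C,)
    s = np.maximum(z @ w1, 0.0)                   # excitation FC1 + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))           # excitation FC2 + sigmoid gate
    return feat * s                               # channel-wise rescaling
```

Because the sigmoid gates lie in (0, 1), each channel is attenuated in proportion to its learned importance.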
Step S4.3, after obtaining the extraction results F_main, F_lum' and F_rough' of the three branches, the features extracted by the different branches are fused by weighted summation:

F_fuse = w_1 · F_main + w_2 · F_lum' + w_3 · F_rough',  with w_1 + w_2 + w_3 = 1
result = Softmax( F_fuse )

wherein F_fuse represents the fused features of the three branches, and w_1, w_2 and w_3 represent the weight parameters of the trunk branch, the brightness branch and the roughness branch, respectively; result represents the probability distribution obtained by applying the Softmax function to the fused features, and the class with the largest probability is the final icing type recognition result.
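The weighted fusion and Softmax of step S4.3 can be sketched as follows. The weight values are illustrative; the patent only requires them to sum to 1:

```python
import numpy as np

def fuse_branches(f_main, f_lum, f_rough, weights=(0.5, 0.25, 0.25)):
    """Weighted fusion of three branch feature vectors followed by Softmax;
    the arg-max of the distribution is the icing-type prediction."""
    w1, w2, w3 = weights
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9         # weights must sum to 1
    fused = (w1 * np.asarray(f_main)
             + w2 * np.asarray(f_lum)
             + w3 * np.asarray(f_rough))
    e = np.exp(fused - fused.max())               # numerically stable Softmax
    probs = e / e.sum()
    return probs, int(np.argmax(probs))
```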
Step S5, detecting and segmenting the line icing region in the original icing image with the semantic segmentation model SCTNet, and improving segmentation accuracy by optimizing the model's segmentation result with a multi-scale conditional random field MSCRF; this specifically comprises the following steps:
Step S5.1, according to the SCTNet network structure, semantic segmentation feature maps of different scales are extracted from the multi-layer networks of the encoder and decoder, and the MSCRF refines and optimizes the segmentation result by combining the spatial and color information of these multi-scale feature maps. Since the feature maps extracted by different convolution layers of the segmentation model have different dimensions, they must first be adjusted to a uniform size by preprocessing before being input into the MSCRF together with the target feature map for iterative optimization:

P_i = Resize( F_i, (H, W) ),  i = 1, …, n

wherein P_i is the adjusted multi-scale feature map, Resize is the function that adjusts the dimensions of feature map F_i, H and W are the height and width of the target dimension, and n is the number of feature maps to be processed; the final multi-scale feature map set can be expressed as P = { P_1, P_2, …, P_n };
Step S5.2, given the input target feature map to be optimized and the multi-scale feature map set P, the MSCRF minimizes an energy function so that adjacent pixels of the target feature map in regions of similar color and position receive consistent labels; the segmentation result is refined through multiple iterative updates:

Q_i^{(t+1)} = Softmax( U + w_s · K_s( Q_i^{(t)} ) + w_b · K_b( Q_i^{(t)}, P_i ) )

wherein U represents the unary term, namely the target feature map to be optimized, P_i represents the feature maps of different scales, n represents the number of feature maps to be processed, and w_s and w_b represent the spatial Gaussian weight and the bilateral Gaussian weight, which encode the spatial information and the color information of the feature map respectively and are normalized with a Softmax function. After k iterations, the optimized feature map U* is output as:

U* = Reshape( Concat( Q_1^{(k)}, Q_2^{(k)}, …, Q_n^{(k)} ) )

wherein the Concat function concatenates the iteration results of the feature maps of different scales in the channel dimension, and the Reshape function readjusts the concatenated result so that its channel dimension matches the original input target feature map U, finally yielding the optimized result U*.
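The size-alignment preprocessing of step S5.1 can be sketched with a nearest-neighbour resize. This is only a minimal stand-in for the Resize function (a real pipeline would likely use bilinear interpolation):

```python
import numpy as np

def resize_nn(fmap, h, w):
    """Nearest-neighbour resize of a 2-D feature map to (h, w) -- a minimal
    stand-in for the Resize preprocessing that aligns multi-scale feature
    maps before MSCRF iteration."""
    src_h, src_w = fmap.shape
    rows = np.arange(h) * src_h // h              # nearest source row per target row
    cols = np.arange(w) * src_w // w              # nearest source column per target column
    return fmap[rows][:, cols]

def align_scales(fmaps, h, w):
    """P = { Resize(F_i, (H, W)) } for every extracted feature map."""
    return [resize_nn(f, h, w) for f in fmaps]
```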
Step S6, according to the segmentation results of the semantic segmentation model for the main-view and side-view lines in the original icing image, the horizontal icing thickness (long diameter a) and the vertical icing thickness (short diameter b) are calculated, respectively; the icing density is obtained from the icing type recognition result, and the equivalent icing thickness of the current line is calculated by the equivalent area method; this specifically comprises the following steps:
Step S6.1, the main-view and side-view lines in the original icing image are segmented by the semantic segmentation model to obtain the segmented line regions. For the same line segment at the same viewing angle, the segmented pixel area is proportional to the apparent diameter, so the bare-wire pixel area S_bare obtained by semantic segmentation, the iced pixel area S_ice, the known bare-wire diameter d_0 at that angle, and the horizontal icing thickness a satisfy:

S_ice / S_bare = a / d_0,  i.e.  a = d_0 · S_ice / S_bare

wherein a is exactly the long-diameter parameter required by the equivalent area method; the icing thickness in the vertical direction, namely the short-diameter parameter b required by the equivalent area method, is obtained in the same way from the segmentation result of the side-view line in the original icing image;
Step S6.2, the equivalent area method maps the irregular cross section of the iced line to a standard circle of equal area and subtracts the bare-wire radius from the radius of that circle to obtain the equivalent uniform icing thickness of the line. The irregular icing cross section is first approximated by a regular ellipse whose area is kept as close as possible to the original cross section through an adjustment parameter; this ellipse is then made equivalent to a standard circle of the same cross-sectional area, from which the equivalent uniform icing thickness is calculated:

π · ( r + Δ )² − π · r² = ρ · k · ( π · a · b / 4 − π · r² )

wherein Δ is the required equivalent uniform icing thickness, r is the bare-wire radius, ρ is the icing density, a is the long-diameter parameter, and b is the short-diameter parameter. The left side of the equation is the equivalent standard circle area minus the bare-wire cross section; the right side is the approximate elliptical area minus the bare-wire cross section (in parentheses), representing the regular icing cross section outside the bare wire, with the error adjusted by the icing density ρ and the constant parameter k. Solving for Δ gives the equivalent uniform icing thickness:

Δ = sqrt( ρ · k · ( a · b / 4 − r² ) + r² ) − r

wherein the icing density ρ can be estimated from the icing type recognition result;
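The equal-area calculation of step S6.2 reduces to a one-line formula. The sketch below assumes the equation reconstructed above and treats the patent's unnamed adjustment constant as a parameter `k` defaulting to 1:

```python
import math

def equivalent_ice_thickness(a, b, r, rho, k=1.0):
    """Equivalent uniform icing thickness by the equal-area method:

        pi*(r + delta)^2 - pi*r^2 = rho*k*(pi*a*b/4 - pi*r^2)
        delta = sqrt(rho*k*(a*b/4 - r^2) + r^2) - r

    a, b: long/short diameters of the iced cross section; r: bare-wire
    radius; rho: icing density; k: adjustment constant (assumed 1 here).
    """
    return math.sqrt(rho * k * (a * b / 4.0 - r * r) + r * r) - r
```

As a sanity check, a circular iced cross section of radius 2 around a wire of radius 1 with density 1 yields a thickness of exactly 1, and a bare wire (a = b = 2r) yields 0.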
Step S6.3, for the case where the side-view line is difficult to observe at night due to low light, the equivalent icing thickness calculation formula is optimized and adjusted: the icing states of the main view and side view of the same line at the last daytime moment of the same day are used to infer the subsequent night-time icing state of the side-view line, and thereby the night-time short-diameter parameter.
The ratio of the long-diameter to the short-diameter parameter at the last daytime moment is estimated as:

λ = a_day / b_day

wherein a_day is the long-diameter parameter at the last daytime moment and b_day is the short-diameter parameter at the last daytime moment.
The night-time long-diameter parameter a_night can be obtained from the icing state of the line at the night-time main view; combining it with the daytime ratio λ yields the night-time short-diameter parameter:

b_night = a_night / λ;
From the daytime ratio λ and the night-time long-diameter parameter a_night, the optimized night-time equivalent icing thickness formula is obtained:

Δ_night = sqrt( ρ · k · ( a_night² / (4λ) − r² ) + r² ) − r

wherein ρ is the icing density, k the adjustment constant, and r the bare-wire radius;
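The night-time adjustment of step S6.3 can be sketched by carrying the daytime ratio into the equal-area formula. The formula follows the reconstruction above; the adjustment constant `k` is again an assumption defaulting to 1:

```python
import math

def night_thickness(a_night, a_day, b_day, r, rho, k=1.0):
    """Night-time equivalent icing thickness (step S6.3): the daytime
    long/short-diameter ratio lambda = a_day / b_day at the last daylight
    moment predicts the unobservable short diameter b_night = a_night/lambda."""
    lam = a_day / b_day                     # daytime long/short ratio
    b_night = a_night / lam                 # inferred night short diameter
    return math.sqrt(rho * k * (a_night * b_night / 4.0 - r * r) + r * r) - r
```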
Step S6.4, for the case where the side-view line is almost invisible or effective information is difficult to extract due to the line layout, extensive statistics show an approximately linear relation between the long and short diameters in the side-view-visible state; this relation is described approximately by a polynomial function, so that the side-view icing state can be calculated from the main-view icing state. A large data set of long/short-diameter parameter pairs in the side-view-visible state is collected, D = { (a_i, b_i) }_{i=1}^{N}, where N is the number of samples, and the polynomial function is defined as:

f(a) = Σ_{j=0}^{n} c_j · a^j

wherein b denotes the short-diameter parameter and a the long-diameter parameter in the side-view-visible state, n is the order of the polynomial, and the coefficients c_j are determined by the least-squares method so that the function fits the data points as closely as possible. After fitting the rule function f on the data set D, the short-diameter parameter in the side-view-invisible state is calculated from the detected long-diameter parameter as b = f(a), and the optimized equivalent icing thickness formula in the side-view-invisible state is:

Δ = sqrt( ρ · k · ( a · f(a) / 4 − r² ) + r² ) − r

wherein ρ is the icing density, k the adjustment constant, and r the bare-wire radius.
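The least-squares fit of step S6.4 maps directly onto NumPy's polynomial utilities. A minimal sketch, again assuming the equal-area formula reconstructed above with adjustment constant `k` = 1:

```python
import numpy as np

def fit_axis_relation(a_samples, b_samples, order=2):
    """Least-squares polynomial fit b ~= f(a) from side-view-visible samples
    (step S6.4); returns a callable rule function f."""
    coeffs = np.polyfit(a_samples, b_samples, order)
    return np.poly1d(coeffs)

def thickness_side_invisible(a, f, r, rho, k=1.0):
    """Equivalent thickness when the side view is invisible: substitute the
    fitted short diameter f(a) into the equal-area formula."""
    b = float(f(a))
    return np.sqrt(rho * k * (a * b / 4.0 - r * r) + r * r) - r
```

For samples lying exactly on b = 0.5·a, a first-order fit recovers the relation and the thickness formula evaluates accordingly.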
The overall model structure of the invention is shown in fig. 2; the model comprises three modules, and the specific construction flow is as follows:
First, the original icing image is acquired by the icing monitoring equipment and preprocessed in the icing type recognition module, where the brightness preprocessing module and the roughness preprocessing module produce the composite brightness feature map and the composite roughness feature map, respectively;
Then, the original icing image, the composite brightness feature map and the composite roughness feature map are input into the trunk branch, brightness branch and roughness branch of IceNet-T, respectively, to extract the global, brightness and roughness features of the icing image as shown in fig. 4, and the features extracted by the three branches are fused to obtain the icing type recognition result;
Meanwhile, the original icing image is input into the icing segmentation module, the main-view and side-view lines in the icing image are segmented with SCTNet, and the result is then optimized with the MSCRF designed in the segmentation optimization module SegBoost to obtain the final optimized segmentation of the icing region;
Finally, in the equivalent icing thickness calculation module, the horizontal and vertical icing thicknesses of the line's icing cross section, namely the long diameter a and the short diameter b, are calculated from the segmentation result of the icing segmentation module; the icing density is obtained from the icing type recognition result; the long-diameter and short-diameter parameters and the icing density are substituted into the icing thickness calculation formula, which is optimized and adjusted for special environmental conditions such as night time and an invisible side view, finally yielding the equivalent icing thickness under the different environmental conditions.
In this embodiment, the detailed flow of the SegBoost module for optimizing the output of the semantic segmentation model is shown in fig. 5, with multiple iterative updates performed for each training model. The figure shows the detailed update process of the conditional random field (CRF) for a single-scale feature map; all feature maps extracted from the different convolution layers of SCTNet undergo one round of CRF updates, which together constitute one MSCRF iteration. The final segmentation effect is shown in fig. 7, where the first column is the original icing image, the second column is the icing region label of the corresponding main-view and side-view lines, and the third column is the final output icing region segmentation result.
In this embodiment, the brightness branch and the roughness branch of the IceNet-T model adopt similar network structures, differing only in the dimensions of the feature maps they receive and hence in their model parameters. The main part of the trunk branch is the feature extraction network of the transfer-learning model MobileNet-V3, whose output layer is appropriately adjusted to fit the icing type recognition task. Each of the three branches finally outputs a one-dimensional feature vector, and the weighted, normalized summation of the three vectors yields the fused probability distribution, whose largest probability value indicates the final icing type recognition result.
In this embodiment, the long-diameter parameter a and the short-diameter parameter b are calculated from the icing areas of the main-view and side-view lines obtained by semantic segmentation, and the current icing density ρ is obtained from the icing type recognition result: the density of glaze ice is typically 0.7-0.9 g/cm³, that of rime 0.1-0.4 g/cm³, and that of mixed rime 0.2-0.6 g/cm³. The equivalent icing thickness formula corresponding to the specific environmental conditions, including a visible side view, an invisible side view, night time and the like, is then applied, and its result is the final equivalent icing thickness of the line.
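A hypothetical density lookup based on the ranges quoted above; taking the midpoint of each range as the representative value is an assumption of this sketch, not part of the patent:

```python
def icing_density(ice_type):
    """Map a recognized icing type to a representative density (g/cm^3)
    using range midpoints. Type names and the midpoint choice are
    illustrative assumptions."""
    ranges = {
        "glaze": (0.7, 0.9),
        "rime": (0.1, 0.4),
        "mixed": (0.2, 0.6),
    }
    lo, hi = ranges[ice_type]
    return (lo + hi) / 2.0
```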
In addition, training tests were performed on the icing type recognition data set with advanced classification models such as EfficientNet, MobileOne, RepViT and ResNeXt; the accuracy and loss curves of each model on the validation set were recorded and compared with IceNet-T. The experimental results in FIG. 6 show that the validation accuracy of IceNet-T stabilizes at about 98%, higher than that of the other models, and that its stability is likewise superior.
Meanwhile, ablation experiments were performed on the three branches of IceNet-T: icing type recognition experiments were run with the trunk branch, the brightness branch and the roughness branch individually, recording the parameter count, model size and validation accuracy of each; detailed results are shown in table 1:
TABLE 1
| Method | Parameters (M) | Model size (M) | Accuracy (%) |
| --- | --- | --- | --- |
| Brightness branch | 0.87 | 3.44 | 89.78 |
| Roughness branch | 0.87 | 3.43 | 80.28 |
| Trunk branch | 1.52 | 5.94 | 92.03 |
| IceNet-T | 3.26 | 12.81 | 98.67 |
As the table shows, the complete IceNet-T identifies the icing type with an accuracy of 98.67%, superior to any single branch.
Finally, based on the above line detection method, test experiments were performed on line icing images, covering icing conditions such as a visible side view, an invisible side view and low light at night. The segmentation effect is shown in fig. 8, and the corresponding line detection outputs are listed in table 2, including the horizontal icing thickness, vertical icing thickness, equivalent icing thickness, icing type and illumination-environment judgment; in fig. 8, the red coverage area shows the main-view icing region segmented by the model and the yellow coverage area shows the side-view icing region. The experimental results show that the line icing detection method of this embodiment reliably detects information such as the icing type and icing thickness of an iced line.
TABLE 2
The foregoing is merely a preferred embodiment of the present invention and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the present invention.