Disclosure of Invention
Aiming at the above problems, the invention provides a real-time detection method for a weld joint target: a weld joint detector based on a convolutional neural network is trained on collected and preprocessed training samples, so that the weld joint detector can quickly and accurately identify and locate different weld joint positions. This effectively solves the problems of inaccurate welding starting-point positioning and poor robustness caused by the morphological methods adopted in the automatic weld tracking systems of current automatic welding technology.
The invention is realized by adopting the following technical scheme: a real-time detection method for a weld joint target comprises the following steps:
establishing a training sample set, wherein weld seam images in different forms are collected as source samples and the source samples are preprocessed to form training samples;
off-line training of the detector, wherein a neural network is trained under different initial conditions using the training samples, and the optimal neural network model obtained from multiple training runs is used as the weld joint detector;
and on-line detection, wherein a detection image is acquired, weld seam detection is performed using the weld joint detector, and the detection result is output.
Compared with the prior art, the invention has the beneficial effects that:
(1) The weld joint detector based on the convolutional neural network is trained off-line and quickly and accurately identifies and locates different types of weld joints, effectively solving the problems of inaccurate welding starting-point positioning and poor robustness caused by the morphological methods adopted in automatic weld tracking systems in current automatic welding technology.
(2) The weld seam image is generated and collected by the line laser sensor and the embedded controller, offering clear imaging, strong resistance to interference from external ambient light sources, simple operation and a simple structure;
(3) By preprocessing the collected training samples, the sample types are evenly distributed and the positions and angles of the weld seams are varied, saving the time and labor of repeatedly collecting training samples;
(4) The essential characteristics of the weld seam are learned from a large sample set by the convolutional neural network, guaranteeing that the learned features have strong separability; compared with the traditional morphological detection method, the method is more robust and locates different types of weld seams more accurately.
Detailed Description
The purpose of the present invention will be described in further detail below with reference to specific embodiments, but the embodiments of the present invention are not limited thereto.
A real-time detection method for a weld joint target is based on the detection system shown in figure 2, which comprises a six-degree-of-freedom mechanical arm 1, a welding gun 2, a line laser sensor 3, a workbench 4, an embedded controller 5 and a workpiece 7. The workpiece 7 is arranged on the workbench 4, the line laser sensor 3 is installed on the welding gun 2, and the welding gun 2 is arranged at the tail end of the six-degree-of-freedom mechanical arm 1, so that the line laser sensor 3 and the welding gun 2 change their position in space through the movement of the six-degree-of-freedom mechanical arm 1. The internal structure of the line laser sensor 3 is shown in fig. 3 and includes a camera 6 and a laser generator 8.
as shown in fig. 1, in one embodiment, a method for detecting a weld target in real time includes the following steps:
s1, establishing a training sample set;
In one embodiment, the specific process of S1 is as follows: weld seam images in different forms are collected as source samples, and the source samples are preprocessed to form the training samples.
In this embodiment, S1, establishing the training sample set, includes:
S11, collecting source samples: images before the start of welding are collected by the camera 6 in the line laser sensor 3;
s12, preprocessing the source sample;
S13, generating the target samples: taking the minimum sample count among the 4 types of preprocessed weld seam samples as a reference, the other 3 types of weld seam samples are downsampled until their counts equal the reference, yielding the final training samples (a minimal balancing sketch is given below).
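Purely as an illustration, the following Python sketch shows one way such class balancing by downsampling could be implemented; the directory layout and the four class names are assumptions made for the example, not part of the method.

```python
import os
import random

# Hypothetical layout: one folder of preprocessed samples per weld seam type.
CLASS_DIRS = ["L_weld", "V_weld", "I_weld", "Open_weld"]  # assumed names

def balance_by_downsampling(root, seed=0):
    """Downsample every class to the size of the smallest class."""
    rng = random.Random(seed)
    samples = {c: os.listdir(os.path.join(root, c)) for c in CLASS_DIRS}
    reference = min(len(files) for files in samples.values())
    # Keep a random subset of `reference` files from each class.
    return {c: rng.sample(files, reference) for c, files in samples.items()}
```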
In this embodiment, S11, collecting the source samples, includes:
S111, adjusting the position of the six-degree-of-freedom welding robot arm 1 so that the tail end of the welding gun 2 is positioned directly above the weld seam of the workpiece 7 to be welded, and the line laser sensor 3 fixed on the welding gun 2 is at its optimal working position, so that a clear image can be captured during the welding process without the line laser sensor and the workpiece interfering with each other;
and S112, emitting laser from the laser generator 8, acquiring an image with the camera 6 in the line laser sensor 3, and sending the image to the embedded industrial controller 5.
In this embodiment, S12, preprocessing the source samples, includes:
S121, eliminating weld seam samples containing arc light and spatter from the source samples, keeping only clean pre-welding weld seam samples free of arc light and spatter;
S122, performing scale transformation on the clean, arc-free and spatter-free weld seam samples, unifying the sample size to 1280 x 1024;
S123, horizontally flipping the scale-transformed weld seam samples;
S124, randomly applying translation transformation, rotation transformation, brightness transformation, contrast transformation and additive Gaussian white noise to the horizontally flipped weld seam samples;
and S125, normalizing the weld seam samples after the Gaussian white noise is added.
In this embodiment, 1200 weld seams of each of four different types (L-weld, V-weld, I-weld and Open-weld) were collected by the camera in the line laser sensor, for a total of 4800 weld seam samples, with the size unified to 1280 x 1024 pixels. Horizontal flipping, translation transformation ([-40, +40] pixels), rotation transformation ([-20, +20] degrees), brightness transformation ([0.8, 1.2] times), contrast transformation ([0.8, 1.2] times) and additive Gaussian white noise (μ = 0, σ = 20) were applied randomly to the weld seam samples, producing a further 1200 randomly processed weld seam pictures per type, i.e. 2400 samples per weld seam type and 9600 samples in total. The pixel values of all samples were divided by 255, normalizing them to [0, 1]. A minimal sketch of such an augmentation pipeline is given below.
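The following Python sketch (using OpenCV and NumPy) illustrates one possible implementation of the augmentation and normalization described above; the function name, the exact transform order and the brightness/contrast formulation are assumptions made for the example.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """One random augmentation pass over a grayscale weld seam image."""
    img = cv2.resize(img, (1280, 1024))                 # unify size (w, h)
    if rng.random() < 0.5:                              # horizontal flip
        img = cv2.flip(img, 1)
    h, w = img.shape[:2]
    tx, ty = rng.uniform(-40, 40, size=2)               # translation, pixels
    angle = rng.uniform(-20, 20)                        # rotation, degrees
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    M[:, 2] += (tx, ty)                                 # fold translation in
    img = cv2.warpAffine(img, M, (w, h))
    alpha = rng.uniform(0.8, 1.2)                       # contrast, [0.8, 1.2]x
    beta = 255 * (rng.uniform(0.8, 1.2) - 1.0)          # brightness shift
    img = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
    noise = rng.normal(0, 20, img.shape)                # Gaussian noise, sigma = 20
    img = np.clip(img.astype(np.float32) + noise, 0, 255)
    return img / 255.0                                  # normalize to [0, 1]
```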
S2 training the detector offline;
In one embodiment, the specific process of S2 is as follows: the neural network is trained under different initial conditions using the training samples from S1, and the optimal neural network model obtained from multiple training runs is taken as the weld joint detector.
In this embodiment, S2, the off-line training of the detector, includes:
S21, training the weld joint detector with the Faster-RCNN algorithm: first, the features of the input image are extracted by a shared feature layer; then, candidate regions are output by a Region Proposal Network (RPN); finally, Fast-RCNN serves as the classifier that outputs the classification results of the candidate regions, and a Non-Maximum Suppression (NMS) algorithm retains only the highest-scoring candidate regions;
S22, initializing the parameters of the RPN and the Fast-RCNN classifier under different initialization conditions and training them until a preset maximum number of iterations is reached or the error rate on a verification set no longer decreases; the optimal model obtained after multiple training runs serves as the weld joint detector (a sketch of this best-model selection follows).
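Purely as an illustration, the sketch below shows how training under several random initializations with early stopping and best-model selection could look in PyTorch; `build_model`, the data loader and the error metric are assumed placeholders, not part of the patent.

```python
import copy
import torch

def train_best_detector(build_model, train_loader, val_error,
                        n_runs=3, max_iters=50000, patience=5, lr=1e-4):
    """Train under different initial conditions; keep the best model."""
    best_model, best_err = None, float("inf")
    for run in range(n_runs):
        torch.manual_seed(run)          # a different initialization per run
        model = build_model()
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        it, stale, run_best = 0, 0, float("inf")
        while it < max_iters and stale < patience:
            for images, targets in train_loader:
                losses = model(images, targets)   # torchvision-style loss dict
                loss = sum(losses.values())
                opt.zero_grad()
                loss.backward()
                opt.step()
                it += 1
            err = val_error(model)                # error rate on verification set
            stale = 0 if err < run_best else stale + 1
            run_best = min(run_best, err)
        if run_best < best_err:                   # keep the best of all runs
            best_err, best_model = run_best, copy.deepcopy(model)
    return best_model
```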
In this embodiment, the S21 training of the weld joint detector with the Faster-RCNN algorithm includes the following steps:
s211, pre-training parameters of a convolutional neural network shared feature layer by using an ImageNet data set;
This embodiment adopts the Faster-RCNN algorithm, and the network structure of the shared feature layer uses an Inception v2 network, which applies convolution kernels of different sizes within one convolution layer to obtain features of different scales; these features are concatenated before being fed together into the lower layers (a simplified sketch follows).
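As a minimal sketch only (not the exact Inception v2 topology), the following PyTorch module illustrates the idea of applying kernels of several sizes in parallel and concatenating the results:

```python
import torch
import torch.nn as nn

class MultiScaleConvBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions whose outputs are concatenated,
    in the spirit of an Inception module (simplified illustration)."""
    def __init__(self, in_ch, out_ch_per_branch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=5, padding=2)

    def forward(self, x):
        # Same spatial size on every branch, so channel-wise concat is valid.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
```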
S212, adjusting the parameters of the region proposal network: the aspect ratio of the output candidate regions is fixed to 1, the side lengths of the output candidate regions are set to the three scales 0.5, 0.75 and 1, and the total number of output candidate regions is set to 100 (a configuration sketch under stated assumptions follows);
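The following sketch shows how comparable anchor and proposal settings could be expressed with torchvision's Faster R-CNN implementation. The MobileNetV2 backbone and the pixel anchor sizes are stand-ins (torchvision expects absolute pixel sizes, and Inception v2 is not bundled with torchvision), so treat this as an assumption-laden illustration rather than the patented configuration.

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280  # FasterRCNN needs the feature-map depth

# Square anchors (aspect ratio fixed to 1) at three scales; the pixel
# sizes here are chosen arbitrarily for the illustration.
anchors = AnchorGenerator(sizes=((128, 192, 256),), aspect_ratios=((1.0,),))

model = FasterRCNN(
    backbone,
    num_classes=5,                 # 4 weld seam types + background
    rpn_anchor_generator=anchors,
    rpn_post_nms_top_n_test=100,   # keep 100 candidate regions
)
```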
S213, defining the loss function: the loss function is the sum of the classification error and the position deviation of the candidate regions;
the principle for labeling candidate regions in this embodiment is: when a candidate region has the maximum IoU with the real target region among all candidate regions, it is marked as a positive sample; when the IoU between a candidate region and the real target region is greater than 0.7, it is likewise marked as a positive sample.
According to the above definition, the loss function is defined as formula (1):

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{\mathrm{cls}}}\sum_i L_{\mathrm{cls}}(p_i, p_i^*) + \lambda \frac{1}{N_{\mathrm{reg}}}\sum_i p_i^* L_{\mathrm{reg}}(t_i, t_i^*) \qquad (1)$$

In formula (1), $i$ represents the sequence number of the candidate region in a mini-batch, and $p_i$ represents the prediction result for region $i$. According to the rule defined above, $p_i^*$ equals 1 if the candidate region is marked as a positive sample and 0 otherwise. $t_i = \{t_x, t_y, t_w, t_h\}$ is a vector representing the position coordinates of the center point and the width and height of the predicted candidate region, and $t_i^*$ represents the position coordinates of the center point and the width and height of the real target region. The classification loss function $L_{\mathrm{cls}}$ is a two-class cross-entropy function, and the regression loss function $L_{\mathrm{reg}}$ uses the stable Smooth L1 loss function. $\lambda$ defaults to 10, so that the classification loss and the regression loss have approximately equal weight. $N_{\mathrm{cls}}$ represents the total number of samples in the mini-batch, and $N_{\mathrm{reg}}$ represents the total number of candidate region coordinates in the mini-batch.
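Purely for illustration, formula (1) can be written out directly in code; the sketch below assumes `p` holds predicted probabilities and that the positive/negative labels `p_star` have already been assigned by the IoU rule above, with the normalizers simplified for readability.

```python
import torch
import torch.nn.functional as F

def detection_loss(p, p_star, t, t_star, lam=10.0):
    """Formula (1): cross-entropy over all candidate regions plus a
    Smooth L1 box regression term counted only where p_star == 1."""
    n_cls = p.shape[0]
    l_cls = F.binary_cross_entropy(p, p_star.float(), reduction="sum") / n_cls
    pos = p_star.bool()                       # positive candidate regions
    n_reg = max(int(pos.sum()), 1)            # simplified normalizer
    l_reg = F.smooth_l1_loss(t[pos], t_star[pos], reduction="sum") / n_reg
    return l_cls + lam * l_reg                # lambda = 10 per the text
```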
For the regression loss function, the present embodiment adjusts the parameters of the candidate region coordinates using the method of formula (2):

$$t_x = \frac{x - x_a}{w_a},\quad t_y = \frac{y - y_a}{h_a},\quad t_w = \log\frac{w}{w_a},\quad t_h = \log\frac{h}{h_a};\qquad t_x^* = \frac{x^* - x_a}{w_a},\quad t_y^* = \frac{y^* - y_a}{h_a},\quad t_w^* = \log\frac{w^*}{w_a},\quad t_h^* = \log\frac{h^*}{h_a} \qquad (2)$$

In formula (2), $x$, $y$, $w$ and $h$ represent the center coordinates of the candidate region and the width and height of the region, respectively; the variables $x$, $x_a$ and $x^*$ refer to the prediction box, the anchor box and the ground-truth box, respectively (and likewise for $y$, $w$ and $h$).
S214, calculating the gradient with the BP algorithm and performing back propagation, with the learning rate set to 0.0001, and updating the parameters of the shared feature layer in an approximate joint training mode.
In this embodiment, the ratio of the scaling coefficients of the classification error and the position deviation in the loss function set in S213 is 1:1.5.
In this embodiment, the S214 includes:
S2141, when calculating the gradient with the BP algorithm, the network error is calculated in mini-batches, with the extracted ROI regions treated as fixed values;
S2142, the parameters of the RPN and the Fast-RCNN classifier are updated at the same time;
S2143, the gradient from the RPN and the gradient from the Fast-RCNN classifier are combined and fed into the shared feature layer for parameter updating, as in the sketch below.
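In a framework such as PyTorch, this approximate joint training reduces to summing the RPN and classifier losses and back-propagating once, so that the combined gradient reaches the shared feature layer; the sketch below assumes a torchvision-style model as in the earlier examples.

```python
import torch

def joint_training_step(model, images, targets, optimizer):
    """One approximate-joint-training step: the losses from both heads
    are summed, so a single backward pass sends the combined gradient
    into the shared feature layer."""
    loss_dict = model(images, targets)   # torchvision-style loss dict
    loss = sum(loss_dict.values())       # RPN + Fast-RCNN classifier losses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# Usage (placeholders): optimizer = torch.optim.SGD(model.parameters(),
# lr=0.0001), matching the learning rate of 0.0001 set in S214.
```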
S3, carrying out online detection;
In one embodiment, the specific process of S3 is as follows: a detection image is acquired by using the camera 6 in the line laser sensor 3 and the embedded controller 5, the weld seam is detected with the weld joint detector obtained in step S2, and the detection result is output.
In this embodiment, the online detection in step S3 includes:
S31, acquiring a detection image: the camera of the line laser sensor collects a weld seam image and sends it to the embedded industrial controller;
S32, preprocessing the acquired image: the image is normalized;
S33, calling the weld joint detector trained in S2 and detecting the image with the Faster-RCNN algorithm;
S34, outputting the detection result: the classification result of the weld seam and its position in the image are output, completing the weld seam detection before welding starts; a minimal inference sketch follows.
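For illustration, an inference pass with a torchvision-style detector could look like the following; the single-detection output format is an assumption made for the example.

```python
import torch

@torch.no_grad()
def detect_weld(model, image):
    """Run the trained detector on one normalized image tensor (C, H, W)
    with values in [0, 1]; return the highest-scoring weld class and box."""
    model.eval()
    pred = model([image])[0]   # torchvision returns boxes/labels/scores
    if len(pred["scores"]) == 0:
        return None            # no weld seam detected
    best = int(pred["scores"].argmax())
    return {"label": int(pred["labels"][best]),    # weld seam type
            "box": pred["boxes"][best].tolist(),   # [x1, y1, x2, y2]
            "score": float(pred["scores"][best])}
```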
This embodiment solves the problem that the welding starting point is difficult to locate in current weld seam tracking systems, offering high positioning accuracy of the welding starting point, high detection speed and high robustness.
The above examples of the present invention are merely intended to illustrate the present invention clearly and are not intended to limit its embodiments. Variations or modifications in other forms may occur to those skilled in the art upon reading the foregoing description; it is neither necessary nor possible to enumerate all embodiments exhaustively. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.