CN113192029A - Welding seam identification method based on ToF - Google Patents

Welding seam identification method based on ToF

Info

Publication number
CN113192029A
CN113192029A
Authority
CN
China
Prior art keywords
image
coordinate system
weld
welding seam
tof
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110472422.9A
Other languages
Chinese (zh)
Other versions
CN113192029B (en)
Inventor
商亮亮
张浩
泮佳俊
张帆
李佩齐
刘腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University
Priority to CN202110472422.9A
Publication of CN113192029A
Application granted
Publication of CN113192029B
Legal status: Active

Abstract

The invention discloses a ToF-based weld seam identification method, which comprises the following steps: acquiring an original weld image; preprocessing the amplitude image; performing local threshold binarization on the preprocessed amplitude image to obtain a binarized image; extracting edge features of the binarized image; performing the Radon transform on the edge image and identifying the weld image based on the appearance conditions of the weld; acquiring the two-dimensional information of the weld from the identified weld image and solving the three-dimensional coordinates of the weld by combining the corresponding depth information; constructing the conversion relations among the world coordinate system, camera coordinate system, image coordinate system, and pixel coordinate system; and converting the three-dimensional coordinates of the weld into spatial coordinates in the world coordinate system according to these conversion relations. The method can improve the efficiency and accuracy of weld identification.

Description

Welding seam identification method based on ToF
Technical Field
The invention relates to the technical field of weld joint identification, in particular to a weld joint identification method based on ToF.
Background
Welding is one of the essential basic manufacturing technologies in the machining industry and is widely applied in modern manufacturing, for example in marine shipbuilding, aerospace, and rail transit. Now that the welding process has entered the era of intelligent manufacturing, traditional manual welding can no longer meet the precision and efficiency requirements of the related equipment, and the automation and intellectualization of welding technology have become the mainstream of the market.
The common automatic welding method is mainly "manual teaching, memory reproduction": technicians are still required to control a welding robot through a teach pendant to complete the weld. That is, by recording the taught path or trajectory, the welding robot can repeat the operation, but the type and position of the weld need to be determined before welding starts. Therefore, when the number of welds is large or the welding process is complex, it is difficult to meet the welding requirements through manual teaching.
To realize high-precision intelligent welding, automatic welding methods often need to be paired with a weld identification technology. Common weld identification technologies are classified as contact or non-contact, and non-contact weld identification based on machine vision is widely applied in industrial production. However, non-contact weld identification represented by machine vision is often complex: image noise must be removed by multiple filtering stages, a complex weld feature extraction algorithm is required to identify the weld, and accurate three-dimensional information of the weld is difficult to obtain. Contact weld identification, for its part, is not widely used because of its low precision, high failure rate, and inability to distinguish obstacles on the weld surface.
Conventional three-dimensional imaging techniques include binocular stereo vision and structured light. Patent CN 112059363 A discloses an unmanned wall-climbing welding robot based on binocular vision measurement and a welding method thereof; the measurement method can accurately guide the welding robot to the weld position. Although binocular stereo vision is precise and inexpensive, the computational load is large, the demands on the algorithm are high, and the usable environments are somewhat limited. The main challenge is the correspondence problem: given a point in one image, how to find the same point in the other camera's image. Until this correspondence is established, the disparity cannot be determined, and therefore neither can the three-dimensional information of the target.
Patent CN 108335286 A discloses an online weld-forming visual inspection method based on double-line structured light. The structured light method actively projects an optical signal with specific characteristics onto the surface of the measured object through a projector. The signal is deformed, i.e. modulated, by the relief of the object surface; the modulated signal is then collected again by the camera, and the depth information of the target is determined by the triangulation principle.
Time-of-flight ranging was first applied in ultrasonic range finders. Its principle is as follows: modulated infrared light is emitted toward the object to be measured and received back at a receiving end, and by analyzing the phase difference and time difference between the emitted and received light, the depth information of the object is obtained quickly and accurately. Combined with imaging by a conventional camera, the three-dimensional information of the object can be obtained. With the development of precision electronics and microelectronics, the problems of low resolution, high noise, and high cost of ToF cameras have been resolved, and time-of-flight ranging based on high-performance photoelectronics has found wide application in fields such as robot navigation, autonomous driving, super-resolution imaging, non-line-of-sight imaging, and industrial inspection; it also shows great potential in machine vision. A review of a large body of literature shows that weld identification based on time-of-flight ranging has yet to be developed and applied in China; its advantage is that, once the weld centerline is obtained, the corresponding weld depth information can be read directly from the depth image.
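As a concrete illustration of the continuous-wave ToF principle just described, the following minimal Python sketch converts a measured phase shift into depth; the modulation frequency of 20 MHz and the function name are illustrative assumptions, not values from the patent:

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s

def tof_depth(phase_shift_rad, f_mod=20e6):
    """Depth from the phase difference of a continuous-wave ToF camera.

    Standard CW-ToF relation (not specific to this patent): a phase shift
    of delta-phi at modulation frequency f_mod corresponds to a round-trip
    distance of c * delta_phi / (2 * pi * f_mod), i.e. half of that in depth.
    f_mod = 20 MHz is an assumed value.
    """
    return C * phase_shift_rad / (4 * np.pi * f_mod)

# A quarter-cycle phase shift at 20 MHz corresponds to about 1.87 m of depth.
print(tof_depth(np.pi / 2))
```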
Disclosure of Invention
In view of the above, the present invention is directed to a ToF-based weld joint identification method, which can improve the efficiency and accuracy of weld joint identification.
In order to achieve the purpose, the invention provides the following technical scheme:
a welding seam identification method based on ToF comprises the following steps:
step S1, acquiring an original weld image of the weldment to be processed through a camera based on ToF technology, wherein the original weld image comprises an amplitude image and a depth image;
step S2, preprocessing the amplitude image obtained in the step S1 to obtain a preprocessed amplitude image;
step S3, local threshold value binarization processing is carried out on the amplitude image after preprocessing obtained in the step S2 to obtain a corresponding binarized image;
s4, extracting the edge characteristics of the binarized image through a Gabor filter, and acquiring an edge image of a welding seam;
step S5, performing radon transformation on the edge image acquired in the step S4 to obtain a horizontally corrected edge image, and identifying a weld image based on the appearance condition of the weld;
s6, acquiring two-dimensional information of the welding seam through the welding seam image identified in the S5, and solving a three-dimensional coordinate of the welding seam by combining the depth information acquired in the S1;
step S7, constructing a conversion relation among a world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system;
and step S8, converting the three-dimensional coordinates of the welding seam acquired in the step S6 into space coordinates in the world coordinate system according to the conversion relation between the pixel coordinate system and the world coordinate system in the step S7.
Further, the sensor in the camera based on the ToF technology is an array type.
Further, in the step S2, the preprocessing includes: cropping the amplitude image to obtain an image containing the weld region, and filtering that image.
Further, the step S4 specifically includes:

Step S401, using the imaginary part of the Gabor filter function, whose expression is given in formula (1); 4 scales are selected, with f of 0.15, 0.3, 0.15 and 0.6 respectively, and 6 directions are selected, with θ ranging from 0 to π, to construct a bank of 24 filters;

$$g(x,y)=\frac{f^{2}}{\pi\gamma\eta}\exp\!\left(-\left(\frac{f^{2}}{\gamma^{2}}x'^{2}+\frac{f^{2}}{\eta^{2}}y'^{2}\right)\right)\sin\!\left(2\pi f x'\right)\qquad(1)$$

In formula (1), x is the Gaussian scale along the principal axis; y is the Gaussian scale orthogonal to the principal axis; f is the filter center frequency; θ is the rotation angle of the Gaussian principal axis; η and γ are constants; and x' = x cos θ + y sin θ, y' = −x sin θ + y cos θ.
Step S402, convolving the 24 filters obtained in step S401 with the binarized image obtained in step S3 in the spatial domain, obtaining preliminary edge detection images at 4 scales and in 6 directions;

Step S403, performing non-maximum suppression on the preliminary edge detection images obtained in step S402: along the detection direction, each pixel is compared with its two neighbors; it is kept if it is the local maximum and set to 0 otherwise;

Step S404, fusing the preliminary edge detection images of the 4 scales and 6 directions, then performing edge connection on the fused image to obtain the edge image of the weld.
Further, the step S7 specifically includes:
firstly, converting a world coordinate system into a camera coordinate system through rigid body transformation;
then, converting the camera coordinate system into an image coordinate system through perspective projection;
and finally, discretizing the image coordinate system to obtain a pixel coordinate system.
The invention has the beneficial effects that:
compared with a contact-based weld joint identification method, the method provided by the invention has the advantages that the algorithm is simple, the identification speed is higher, the complex weld joint can be accurately identified in a shorter time, and the identification precision is higher. And the ToF camera can directly acquire the depth information of the welding seam while acquiring the welding seam image, so that the target can be quickly and accurately reconstructed in three dimensions compared with a binocular vision method and a structured light method.
Drawings
Fig. 1 is a schematic diagram of the conversion from the world coordinate system to the camera coordinate system in embodiment 1.
Fig. 2 is a schematic diagram of conversion from the camera coordinate system to the image coordinate system in embodiment 1.
Fig. 3 is a schematic diagram of conversion from an image coordinate system to a pixel coordinate system in embodiment 1.
Fig. 4 is an original weld image obtained by using a camera based on the ToF technique in example 1.
Fig. 5 is a point cloud image of the weld centerline finally obtained in example 1.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to fig. 1 to 5, the present embodiment provides a ToF-based weld joint identification method, including the following steps:
s1, acquiring an original weld image of the weldment to be processed through a camera based on the ToF technology, wherein the original weld image comprises an image of the original weld; an amplitude image and a depth image;
specifically, the original weld image is a weld image before or after welding, in this embodiment, a camera based on the ToF technology is used for direct shooting and acquisition, and sensors in the camera based on the ToF technology are in an array type, so that target three-dimensional information can be rapidly acquired in the process of acquiring an image of each frame; a sensor in the camera based on the ToF technology can emit modulated infrared light, the light is subjected to diffuse reflection after encountering a weld joint, and a receiving end can obtain corresponding weld joint depth information by analyzing the phase difference or time difference between emitted light and received light, so that the depth information, point cloud information, gray scale information and the like of a target image can be obtained.
Step S2, preprocessing the amplitude image obtained in the step S1 to obtain a preprocessed amplitude image;
specifically, the pretreatment comprises: the amplitude image is cropped to obtain an image containing a weld joint area, and the image is subjected to filtering processing, wherein the purpose of the filtering processing is to weaken the influence of light on the amplitude image in the environment.
More specifically, the weld acquired by the camera based on the ToF technology contains various data information and is easily affected by light in the environment. The acquired image contains a large amount of noise and image information interference irrelevant to the welding seam, and pixels in the point cloud picture contain 3 data, namely XYZ three-dimensional coordinates. The three-dimensional point cloud data of the space points in the collected point cloud image can be converted into a two-dimensional matrix with depth information as an index. All the two-dimensional matrixes are arranged according to the spatial sequence to obtain a central matrix, and the average difference value of the depth information indexed by the central matrix and the depth information indexed by the surrounding matrixes is calculated. The average difference serves as a global threshold for depth information. And removing the three-dimensional point cloud with the depth information in the point cloud image being too far away from the global threshold value as a noise point, thereby improving the subsequent calculation efficiency and accuracy.
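One way to realize this depth-based noise rejection is sketched below in Python; the neighbourhood size `block` and scale factor `k` are illustrative assumptions, and a per-pixel neighbourhood mean stands in for the patent's arrangement of central and surrounding matrices:

```python
import numpy as np

def remove_depth_outliers(depth, block=5, k=2.0):
    """Mask out depth pixels that deviate too far from their local surroundings.

    depth: 2D array of per-pixel ToF depth values.
    Each pixel is compared with the mean of its block x block neighbourhood;
    the global threshold is k times the average deviation, echoing the
    centre-vs-surrounding comparison described above.
    """
    pad = block // 2
    padded = np.pad(depth, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (block, block))
    neigh_mean = windows.mean(axis=(-1, -2))        # local mean per pixel
    deviation = np.abs(depth - neigh_mean)
    threshold = k * deviation.mean()                # global depth threshold
    mask = deviation <= threshold                   # True = keep, False = noise
    return np.where(mask, depth, np.nan), mask
```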
Step S3, local threshold value binarization processing is carried out on the amplitude image after preprocessing obtained in the step S2 to obtain a corresponding binarized image;
specifically, the threshold is obtained by calculating a local image Gaussian weighted average, and the preprocessed amplitude image is used for determining a binarization threshold by using a histogram method, so that a binarization image capable of reflecting the overall and local characteristics of the image is obtained.
S4, extracting the edge characteristics of the binary image through a Gabor filter, and acquiring the edge image of the welding seam;
specifically, step S4 specifically includes:
the principle of the Gabor filtering algorithm is as follows:
Figure BDA0003045996640000051
wherein x 'xcos θ + ysin θ, y' xsin θ + ycos θ, f is the center frequency, and θ is the selected direction;
the imaginary part of the Gabor filter function is expressed as formula (1), 4 scales are selected, f is 0.15, 0.3, 0.15 and 0.6 respectively, 6 directions are selected, theta is 0,
Figure BDA0003045996640000052
Pi and
Figure BDA0003045996640000053
constructing 24 filter banks;
Figure BDA0003045996640000054
wherein x is a Gaussian scale in the direction of the main shaft; y is a Gaussian scale orthogonal to the main shaft direction; f is expressed as the filter center frequency; θ represents the rotation angle of the gaussian main shaft; η and γ are constants, and in this embodiment, η is 1 and γ is 2.
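The filter bank of step S401 could be realized as follows; η = 1 and γ = 2 are the values of this embodiment, while the 15 × 15 kernel size, the even spacing of θ over [0, π], and the sin() form of the imaginary part follow the reconstruction of formula (1) above and are assumptions:

```python
import numpy as np

def gabor_imag(size, f, theta, eta=1.0, gamma=2.0):
    """Imaginary part of the Gabor function of formula (1), sampled on a grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)       # x' = x cos(t) + y sin(t)
    yp = -x * np.sin(theta) + y * np.cos(theta)      # y' = -x sin(t) + y cos(t)
    envelope = np.exp(-((f**2 / gamma**2) * xp**2 + (f**2 / eta**2) * yp**2))
    return (f**2 / (np.pi * gamma * eta)) * envelope * np.sin(2 * np.pi * f * xp)

# 4 scales x 6 directions = 24 filters, as in step S401.
freqs = [0.15, 0.3, 0.15, 0.6]            # frequencies as listed in the text
thetas = np.linspace(0.0, np.pi, 6)       # assumed even spacing over [0, pi]
bank = [gabor_imag(15, f, t) for f in freqs for t in thetas]
```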
S402, the 24 filters obtained in S401 are convolved in the spatial domain with the binarized image obtained in S3, giving preliminary edge detection images at 4 scales and in 6 directions;

Specifically, a 3 × 3 convolution kernel is defined and slid over the binarized image; at each position a multiply-and-sum operation is performed, until the kernel has passed over every pixel of the image and the output value at every pixel is obtained. This yields preliminary edge detection images at different scales and in different directions, each representing the weld edge information at one scale and in one direction.
Step S403, non-maximum suppression is performed on the preliminary edge detection images obtained in step S402: along the detection direction, each pixel is compared with its two neighbors; it is kept if it is the local maximum and set to 0 otherwise;

S404, the preliminary edge detection images of the 4 scales and 6 directions are fused, and edge connection is then performed on the fused image to obtain the edge image of the weld.
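Steps S402 to S404 might be sketched as below (edge connection omitted); quantizing each detection direction to the nearest 8-connected neighbour for the non-maximum suppression is an implementation assumption, and `bank`/`thetas` are the objects built in the previous sketch:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_edges(binary_img, bank, thetas):
    """Convolve a binarized image with the Gabor bank, suppress non-maxima
    along each detection direction, and fuse the per-filter responses."""
    fused = np.zeros(binary_img.shape, dtype=float)
    for i, kernel in enumerate(bank):
        response = np.abs(convolve(binary_img.astype(float), kernel))  # S402
        theta = thetas[i % len(thetas)]          # direction of this filter
        # Crude quantization of the direction to an 8-connected step:
        dy = int(round(float(np.sin(theta))))
        dx = int(round(float(np.cos(theta))))
        fwd = np.roll(response, (-dy, -dx), axis=(0, 1))
        back = np.roll(response, (dy, dx), axis=(0, 1))
        keep = (response >= fwd) & (response >= back)               # S403
        fused = np.maximum(fused, np.where(keep, response, 0.0))    # S404
    return fused
```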
Step S5, performing radon transformation on the edge image acquired in the step S4 to obtain a horizontally corrected edge image, and identifying a weld image based on the appearance condition of the weld;
specifically, the image subjected to edge feature extraction is subjected to Radon Transform (RT):
rotating the image by any theta angle (the rotation angle is between 0 and 180 degrees) by taking the center of the image as an origin to obtain a corresponding horizontal projection value r in rho-theta space; different R forms a projection set R, the maximum value of elements in the R is solved, and corresponding values theta and rho are solved, wherein theta is the horizontal rotation angle, and rho is the distance from the corresponding origin to the straight line;
then, converting theta and rho values obtained in the rho-theta space to a certain point Q through which the edge of the welding seam passes in an image plane coordinate system;
solving the position of the edge of the welding seam in the image coordinate plane according to a linear equation solving method;
and finally, integrally identifying the welding seam according to the prior knowledge of the appearance conditions (such as the width and the type) of the welding seam and the like.
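A minimal sketch of this step using scikit-image's Radon transform; finding the brightest point of the sinogram corresponds to solving for the maximum element of R and its (θ, ρ), with centring conventions taken from skimage rather than from the patent:

```python
import numpy as np
from skimage.transform import radon

def dominant_line(edge_img):
    """Find the strongest straight line in an edge image via the Radon transform.

    Returns (theta_deg, rho_px): the angle and the signed distance of the line
    from the image centre, taken at the brightest point of the sinogram.
    """
    angles = np.arange(0.0, 180.0)                    # one-degree steps
    sinogram = radon(edge_img.astype(float), theta=angles, circle=False)
    rho_idx, theta_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    rho = rho_idx - sinogram.shape[0] // 2            # projection axis is centred
    return angles[theta_idx], rho
```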
S6, the two-dimensional information of the weld is acquired from the weld image identified in S5, and the three-dimensional coordinates of the weld are solved by combining the depth image acquired in S1;

Specifically, since the weld image within the amplitude image has already been identified in step S5, the two-dimensional information of the weld can be read off directly. Because the amplitude image and the depth image acquired by the ToF camera are directly registered, the depth information of the weld can be taken from the depth image. Combining the two-dimensional information with the depth information yields the three-dimensional coordinate information of the weld.
Step S7, constructing a conversion relation among a world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system;
specifically, a world coordinate system is converted into a camera coordinate system through rigid body transformation; then, converting the camera coordinate system into an image coordinate system through perspective projection; and finally, discretizing the image coordinate system to obtain a pixel coordinate system.
More specifically:
world coordinate system (X)w,Yw,Zw) -a three-dimensional coordinate system in the real world describing the location of the object in the real world;
camera coordinate system (X)c,Yc,Zc) A three-dimensional rectangular coordinate system is established by taking the focusing center of the camera as an origin and taking the optical axis as Z;
image coordinate system (x, y) -to describe how the image in the camera coordinate system is projected onto the camera's negative;
pixel coordinates (u, v) -the image is composed of pixels, so the pixel coordinate system is used to determine the position of the pixel in the image.
As shown in fig. 1, a rigid-body transformation is required to convert the world coordinate system into the camera coordinate system; a rigid-body transformation only translates, rotates, or reflects an object without deforming it. The conversion between the world coordinate system and the camera coordinate system is therefore completed by a rotation transformation and a translation transformation.
The transformation of the world coordinate system to the camera coordinate system can be represented by a rotation matrix R and a translation vector t:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t,\qquad R=\begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix},\quad t=\begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$$

Expressed in homogeneous coordinates:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

where [r11, r12, r13]^T, [r21, r22, r23]^T, [r31, r32, r33]^T represent the basis vectors of the original coordinate system expressed in the new coordinate system, and tx, ty, tz are the amounts of translation along the x, y, and z directions for the transformation to the other coordinate system.
The conversion from the camera coordinate system to the image coordinate system is a perspective projection, i.e. from 3D to 2D; the schematic is shown in fig. 2, where P is a point in space corresponding to the point p in the image coordinate system with coordinates (x, y). By similar triangles:

$$x = f\,\frac{X_c}{Z_c},\qquad y = f\,\frac{Y_c}{Z_c}$$

Expressed in homogeneous coordinates:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$

where f denotes the focal length of the camera in fig. 2.
This step completes the conversion of the camera coordinate system to the ideal image coordinate system.
From the image coordinate system to the pixel coordinate system: the two lie in the same plane but their origins differ, so a further transformation is required, as shown in fig. 3. With pixel sizes dx and dy and principal point (u0, v0), the relationship between pixel coordinates and image coordinates is:

$$u = \frac{x}{dx} + u_0,\qquad v = \frac{y}{dy} + v_0$$

In homogeneous form:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

Combining the above conversions yields the overall transformation matrix:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$
in a world coordinate system, assuming that the position coordinate of one point on a welding seam is (x, y, z), combining a rotation matrix and a translation matrix, obtaining the coordinate of the point under the camera coordinate system based on the ToF technology through rigid body conversion, and then using a similar triangle principle, completing the conversion of the point on the welding seam in the three-dimensional camera image coordinate system based on the ToF technology.
And step S8, converting the three-dimensional coordinates of the welding seam acquired in the step S6 into space coordinates in the world coordinate system according to the conversion relation between the pixel coordinate system and the world coordinate system in the step S7.
To locate the weld in three dimensions, coordinate conversion is required. Through the conversion method above, the position of the real-world weld obtained by the ToF camera is placed in correspondence with the pixels on the ToF camera's imaging plane.
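Under the rigid-body model above (Xc = R·Xw + t), the inverse mapping from camera-frame weld coordinates back to the world frame can be sketched as follows; the function name and the identity extrinsics in the usage line are illustrative:

```python
import numpy as np

def camera_to_world(points_cam, R, t):
    """Map camera-frame weld coordinates (N x 3) to the world frame.

    R (3x3) and t (3,) are the extrinsics from calibration; since the
    world-to-camera map is Xc = R @ Xw + t, its inverse is applied here.
    """
    return (points_cam - t) @ R           # equals R.T @ (Xc - t) per row

# Hypothetical usage: identity extrinsics leave the points unchanged.
pts_world = camera_to_world(np.array([[0.1, 0.0, 0.5]]), np.eye(3), np.zeros(3))
```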
The Radon Transform (RT) projects the digital image along various angular directions; mathematically it is a line integral of the two-dimensional function f(x, y), with the integral values projected onto the RT plane.

The integral value obtained by this linear projective transformation is also called the Radon curve; it is determined by the distance ρ of the line in the image from the origin of the image coordinate system and by the inclination angle θ of the line.
The digital image in the plane is integrated along the line ρ = x cos θ + y sin θ; the F(θ, ρ) obtained by this line integration is the Radon transform of the digital image, i.e. a point (θ, ρ) in the transform plane corresponds to the integral of the original image f(x, y) along one line. The Radon transform formula for a digital image f(x, y) is:

$$F(\theta,\rho)=\iint f(x,y)\,\delta(\rho-x\cos\theta-y\sin\theta)\,dx\,dy$$

where

$$\delta(x)=\begin{cases}+\infty, & x=0\\ 0, & x\neq 0\end{cases},\qquad \int_{-\infty}^{+\infty}\delta(x)\,dx=1$$

f(x, y) is the pixel gray value at a point (x, y) of the image; δ is the Dirac function; ρ is the distance from the projection line to the origin in the (x, y) plane; and θ is the angle between the normal of the projection line and the x axis.
From the definition of RT, the characteristic function δ integrates the image along the line ρ = x cos θ + y sin θ; RT can thus be seen as a linear projection of the digital image in the ρ–θ coordinate system, with each point in that coordinate system corresponding to a line in the image coordinate system. Equivalently, RT can be seen as the projection onto the horizontal axis of the image obtained after rotating the digital image clockwise by the angle θ.

RT can therefore be used for edge line detection in digital images: in the digital image coordinate system, a line of high gray values forms a relatively bright point in ρ–θ space, while a line of low gray values forms a relatively dark point in ρ–θ space.
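A direct, unoptimized numerical reading of the F(θ, ρ) formula above, approximating the Dirac delta by a one-pixel-wide band around the line (an assumption of this sketch):

```python
import numpy as np

def radon_point(img, theta, rho):
    """Approximate F(theta, rho): the sum of f(x, y) along the line
    rho = x*cos(theta) + y*sin(theta), with the origin at the image centre."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    xc, yc = x - w / 2.0, y - h / 2.0                 # centred coordinates
    dist = xc * np.cos(theta) + yc * np.sin(theta) - rho
    on_line = np.abs(dist) < 0.5                      # discrete delta function
    return img[on_line].sum()
```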
In this embodiment, fig. 4 is the original weld image obtained with the ToF camera; processing it by the method of this embodiment yields fig. 5, the point cloud image of the finally obtained weld centerline.
Matters not described in detail in the present invention are well known to those skilled in the art.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (5)

(Translated from Chinese)
1. A ToF-based weld seam identification method, characterized by comprising the following steps:

Step S1, acquiring an original weld image of the weldment to be processed through a camera based on ToF technology, the original weld image comprising an amplitude image and a depth image;

Step S2, preprocessing the amplitude image acquired in step S1 to obtain a preprocessed amplitude image;

Step S3, performing local threshold binarization on the preprocessed amplitude image obtained in step S2 to obtain a corresponding binarized image;

Step S4, extracting the edge features of the binarized image through a Gabor filter, and obtaining an edge image of the weld seam;

Step S5, performing the Radon transform on the edge image obtained in step S4 to obtain a horizontally corrected edge image, and identifying the weld image based on the appearance conditions of the weld;

Step S6, obtaining the two-dimensional information of the weld seam from the weld image identified in step S5, and combining it with the depth information acquired in step S1 to solve for the three-dimensional coordinates of the weld seam;

Step S7, constructing the conversion relations among the world coordinate system, the camera coordinate system, the image coordinate system, and the pixel coordinate system;

Step S8, according to the conversion relation between the pixel coordinate system and the world coordinate system in step S7, converting the three-dimensional coordinates of the weld seam obtained in step S6 into spatial coordinates in the world coordinate system.

2. The ToF-based weld seam identification method according to claim 1, characterized in that the sensor in the camera based on ToF technology is of an array type.

3. The ToF-based weld seam identification method according to claim 2, characterized in that, in step S2, the preprocessing comprises: cropping the amplitude image to obtain an image containing the weld region, and filtering that image.

4. The ToF-based weld seam identification method according to claim 3, characterized in that step S4 specifically comprises:

Step S401, using the imaginary part of the Gabor filter function, whose expression is given in formula (1); selecting 4 scales, with f of 0.15, 0.3, 0.15 and 0.6 respectively, and 6 directions, with θ ranging from 0 to π, to construct a bank of 24 filters;

$$g(x,y)=\frac{f^{2}}{\pi\gamma\eta}\exp\!\left(-\left(\frac{f^{2}}{\gamma^{2}}x'^{2}+\frac{f^{2}}{\eta^{2}}y'^{2}\right)\right)\sin\!\left(2\pi f x'\right)\qquad(1)$$

In formula (1), x is the Gaussian scale along the principal axis; y is the Gaussian scale orthogonal to the principal axis; f is the filter center frequency; θ is the rotation angle of the Gaussian principal axis; η and γ are constants; x' = x cos θ + y sin θ, y' = −x sin θ + y cos θ.

Step S402, performing spatial-domain convolution of the 24 filters obtained in step S401 with the binarized image obtained in step S3 to obtain preliminary edge detection images at 4 scales and in 6 directions;

Step S403, performing non-maximum suppression on the preliminary edge detection images obtained in step S402: along the detection direction, comparing each pixel with its two neighbors, keeping it if it is the maximum and setting it to 0 otherwise;

Step S404, fusing the preliminary edge detection images of the 4 scales and 6 directions, then performing edge connection on the fused image to obtain the edge image of the weld seam.

5. The ToF-based weld seam identification method according to claim 4, characterized in that step S7 specifically comprises:

first converting the world coordinate system into the camera coordinate system through a rigid-body transformation;

then converting the camera coordinate system into the image coordinate system through perspective projection;

finally discretizing the image coordinate system to obtain the pixel coordinate system.
CN202110472422.9A | Filed 2021-04-29 | ToF-based weld joint identification method | Active | Granted as CN113192029B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110472422.9A (CN113192029B (en)) | 2021-04-29 | 2021-04-29 | ToF-based weld joint identification method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110472422.9A (CN113192029B (en)) | 2021-04-29 | 2021-04-29 | ToF-based weld joint identification method

Publications (2)

Publication Number | Publication Date
CN113192029A | 2021-07-30
CN113192029B | 2024-07-19

Family

ID=76980591

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110472422.9A (Active; granted as CN113192029B (en)) | ToF-based weld joint identification method | 2021-04-29 | 2021-04-29

Country Status (1)

Country | Link
CN | CN113192029B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105665970A (en)* | 2016-03-01 | 2016-06-15 | 中国科学院自动化研究所 | System and method for automatic generation of path points of welding robot
CN109658456A (en)* | 2018-10-29 | 2019-04-19 | 中国化学工程第六建设有限公司 | Tank body inside fillet laser visual positioning method
CN112238304A (en)* | 2019-07-18 | 2021-01-19 | 山东淄博环宇桥梁模板有限公司 | Method for automatically welding small-batch customized special-shaped bridge steel templates by mechanical arm based on image visual recognition of welding seams
CN111489436A (en)* | 2020-04-03 | 2020-08-04 | 北京博清科技有限公司 | Three-dimensional reconstruction method, device and equipment for weld joint and storage medium
CN112037189A (en)* | 2020-08-27 | 2020-12-04 | 长安大学 | Device and method for detecting geometric parameters of steel bar welding seam
CN112053376A (en)* | 2020-09-07 | 2020-12-08 | 南京大学 | Workpiece weld joint identification method based on depth information
CN112308872A (en)* | 2020-11-09 | 2021-02-02 | 西安工程大学 | Image edge detection method based on multi-scale Gabor first-order derivative
CN112308873A (en)* | 2020-11-09 | 2021-02-02 | 西安工程大学 | Edge detection method for multi-scale Gabor wavelet PCA fusion image

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113954085A (en)* | 2021-09-08 | 2022-01-21 | 重庆大学 | Intelligent positioning and control method of welding robot based on binocular vision and linear laser sensing data fusion
CN113989199A (en)* | 2021-10-13 | 2022-01-28 | 南京理工大学 | Binocular narrow butt weld detection method based on deep learning
CN113989199B (en)* | 2021-10-13 | 2025-08-01 | 南京理工大学 | Deep learning-based binocular narrow butt weld detection method
CN114092411A (en)* | 2021-10-28 | 2022-02-25 | 东华大学 | An efficient and fast binocular 3D point cloud solder joint defect detection method
CN116416183A (en)* | 2021-12-29 | 2023-07-11 | 广东利元亨智能装备股份有限公司 | Weld quality detection area determination method, device, computer and storage medium
CN114453707A (en)* | 2022-03-16 | 2022-05-10 | 南通大学 | A multi-scene small automatic welding robot based on ToF technology
CN114453707B (en)* | 2022-03-16 | 2024-08-13 | 南通大学 | ToF technology-based multi-scene small automatic welding robot
CN115741687A (en)* | 2022-11-15 | 2023-03-07 | 深圳市泰达机器人有限公司 | Method, system and storage medium for visual recognition, tracking and processing of welding line

Also Published As

Publication number | Publication date
CN113192029B (en) | 2024-07-19

Similar Documents

Publication | Title
CN113192029B (en) | ToF-based weld joint identification method
Koide et al. | General, single-shot, target-less, and automatic lidar-camera extrinsic calibration toolbox
CN111046776B (en) | Obstacle detection method for mobile robot travel path based on depth camera
Yan et al. | Joint camera intrinsic and lidar-camera extrinsic calibration
Veľas | Calibration of RGB camera with Velodyne lidar
US7376262B2 (en) | Method of three dimensional positioning using feature matching
CN116188558B (en) | Stereo photogrammetry method based on binocular vision
CN114029946A (en) | Method, device and equipment for guiding robot to position and grab based on 3D grating
CN111123242B (en) | Combined calibration method based on laser radar and camera and computer readable storage medium
KR102683455B1 (en) | Object detection system and method using multi-coordinate system features of lidar data
CN110243311A (en) | A high-precision dynamic angle measurement system and method based on vision
Boroson et al. | 3D keypoint repeatability for heterogeneous multi-robot SLAM
CN119251303A (en) | A spatial positioning method based on multimodal vision fusion
Zhao et al. | Extrinsic calibration of a small FoV LiDAR and a camera
Park et al. | Global map generation using LiDAR and stereo camera for initial positioning of mobile robot
Swadzba et al. | A comprehensive system for 3D modeling from range images acquired from a 3D ToF sensor
CN119832151A (en) | Open three-dimensional reconstruction method, automatic depth positioning method, equipment and robot
CN118191873A (en) | Multi-sensor fusion ranging system and method based on light field image
CN117409386A (en) | Garbage positioning method based on laser vision fusion
CN115909274A (en) | Dynamic obstacle detection method for automatic driving
Xin et al. | Geometric interpretation of ellipse projection and disambiguating in pose estimation
CN115797433A (en) | Dimension measuring method and device based on deep learning
Xie et al. | Real-time reconstruction of unstructured scenes based on binocular vision depth
Li et al. | Road edge and obstacle detection on the SmartGuard navigation system
Chu et al. | 3D perception and reconstruction system based on 2D laser scanner

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
