CN112775975A - Vision-guided multi-station robot welding deviation correcting device and method - Google Patents


Info

Publication number
CN112775975A
Authority
CN
China
Prior art keywords
welding
robot
vision
image
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110139220.2A
Other languages
Chinese (zh)
Inventor
张毅
廖伟
周立奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Pexor Automation Technology Co ltd
Original Assignee
Chongqing Pexor Automation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Pexor Automation Technology Co ltd
Priority to CN202110139220.2A
Publication of CN112775975A
Legal status: Pending

Abstract

The invention provides a vision-guided multi-station robot welding deviation correcting device and method, belongs to the technical field of robot welding, and solves the problem of low welding-point positioning accuracy. A PLC is connected to the robot controller, the displacement sensor, the light source controller, and the vision control system. The vision-guided multi-station robot welding deviation correcting method comprises the steps of: (10) welding preparation; (20) deviation-correction execution; (30) program running; (40) program reading; and (50) deviation-correction judgment. The invention achieves accurate positioning without relying solely on a tooling fixture, solves the problem of workpiece deformation after clamping, improves welding accuracy and consistency, and improves accurate welding-point positioning by means of vision and a laser displacement sensor.

Description

Vision-guided multi-station robot welding deviation correcting device and method
Technical Field
The invention belongs to the technical field of robot welding, and particularly relates to a vision-guided multi-station robot welding deviation correcting device and method.
Background
Welding robots are now widely used in automobile manufacturing for welding chassis, seat frames, guide rails, mufflers, hydraulic torque converters, and the like, and are especially common in chassis welding production. Robot welding greatly improves the appearance and internal quality of the welded part, ensures quality stability, reduces labor intensity, and improves the working environment. Although welding robots play a crucial role in modern industry, raising welding precision places particularly high demands on the workpiece fixture, and the fixtures required by some large welding workpieces simply cannot achieve the clamping precision needed to guarantee welding accuracy. In addition, some welded parts are made of relatively soft material, which undergoes deformation that is difficult to perceive after being clamped by the fixture.
Given the above problems, if a robot is to weld a part under high-precision requirements, the fixture alone cannot guarantee that the welding position of the current workpiece remains unchanged; welding precision is directly affected, the yield of the welded product drops, and the part may even be scrapped outright.
Disclosure of Invention
To address these technical defects, the invention discloses a vision-guided multi-station robot welding deviation rectifying method in which welding deviation at the current welding point is corrected by vision and a laser displacement sensor. The robot carries a laser displacement sensor and an industrial camera, which together ensure the welding precision of the current workpiece's weld seam. This overcomes the prior-art defects of poor welds and scrapped parts caused by low fixture clamping and positioning precision, or by deformation of the workpiece to be welded after clamping.
In order to solve the technical problems, the invention provides the following technical scheme:
the utility model provides a multistation robot welding deviation correcting device of vision guide, includes wherein:
the PLC is used for controlling the working states of the robot controller, the laser displacement sensor and the vision control system;
the robot controller is used for directly controlling the motion track of the robot and simultaneously communicating with the welding machine and the PLC controller;
the displacement sensor is used for detecting whether the welding point and the taught point of the camera unit are at the same height for image capture, and for detecting local depressions at the welding point and deviations in welding precision at the welding position;
the vision control system controls the photographing action of the image capturing device by setting the parameters of the image capturing device;
the light source controller is used for controlling the on or off of the light source, or adjusting the brightness or darkness of the light source, and controlling the brightness of the light source by changing the magnitude of the current; and
the image capturing device is used for capturing images via a software trigger; the captured image is uploaded over TCP/IP to a designated image-processing area of the host computer; wherein:
and the PLC is respectively connected with the robot controller, the displacement sensor, the light source controller and the vision control system.
As a further embodiment of the present invention, the displacement sensor is a non-contact laser displacement sensor.
As a further embodiment of the present invention, the image capturing device is an industrial camera.
As a further embodiment of the present invention, the PLC controller is connected to the robot controller, the displacement sensor, the light source controller, and the vision control system in a ProfiNet communication, a ProfiBus communication, a CAN bus communication, an Ethernet communication, a DeviceNet communication, or a TCP/IP communication, respectively.
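As an illustration of the TCP/IP option above, the exchange between the vision host and the PLC might look like the following minimal sketch. The host address, port, and newline-terminated ASCII message format are assumptions for illustration only; a real cell would follow the PLC vendor's protocol (or one of the fieldbus options listed).

```python
import socket

def send_offsets(dx_mm: float, dy_mm: float,
                 host: str = "192.168.0.10", port: int = 2000) -> bytes:
    """Send one X/Y offset pair to the PLC and return its reply.
    Hypothetical message format: 'OFFSET,<dx>,<dy>\\n' in ASCII."""
    msg = f"OFFSET,{dx_mm:.3f},{dy_mm:.3f}\n".encode("ascii")
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(msg)
        return sock.recv(64)  # e.g. an acknowledgement from the PLC
```

In practice the PLC side would validate the message and distribute the offsets to the addressed robot controller.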
In order to solve the technical problems, the invention also provides the following technical scheme:
a visual-guided multi-station robot welding deviation rectifying method comprises the following steps:
(10) welding preparation, namely loading and clamping a workpiece to be welded, wherein all robot positions are at original positions;
the working states of the robot controller, the laser displacement sensor and the vision control system are in a starting state;
the motion trail of the robot is directly controlled through the robot controller, and meanwhile, the robot controller is communicated with a welding machine and the PLC;
detecting whether the welding point and the teaching point of the camera unit are at the same height through a displacement sensor for image capture, and detecting local depression of the welding point and deviation of welding precision at the welding position;
setting parameters of the image capturing device through a vision control system to control the photographing action of the image capturing device;
the light source controller is used for controlling the light source to be turned on or off, or is used for adjusting the brightness or the darkness of the light source, and the brightness of the light source is controlled by changing the magnitude of the current;
soft triggering, photographing and image taking are carried out through an image taking device, and a shot picture is uploaded to an appointed image processing area of an upper computer through a TCP/IP protocol;
(20) deviation-correction execution: the operator selects the welding recipe and the model of the workpiece to be welded on the human-machine interface (HMI), and the program runs automatically; the HMI communicates with the PLC via TCP/IP or RS485 to realize direct human-machine interaction;
(30) program running: the robot controller, laser displacement sensor, and vision control system start working; the robot controller controls the robot, which first moves to the preset image-capture position and then triggers the laser displacement sensor, which reads the camera's current image-capture height; the vision control system sets the parameters of the image capturing device and controls its photographing action;
(40) program reading: after the camera finishes image capture, the vision system first judges whether the image shows the workpiece currently to be welded, completes the image analysis, and obtains the coordinate values in the X and Y directions and the angle value of the weld-seam starting point;
(50) deviation-correction judgment: the vision control system judges whether the captured image matches the model of the selected weldment. If it does not match, the robot stops welding immediately, returns to the origin, and its enable is switched off, and the equipment raises an alarm to notify the operator of incorrect loading; verifying the welded model by image-contour comparison prevents the serious failure of scrapping a weld because the wrong material was loaded. When the camera image matches the selected model, the vision control system sends the calculated X and Y offset coordinates to the PLC for distribution to each welding robot controller.
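The match-or-abort decision of step (50) can be sketched as follows. The type and function names and the returned action strings are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    matched: bool       # does the captured image match the selected part model?
    dx_mm: float = 0.0  # computed X offset of the weld start point
    dy_mm: float = 0.0  # computed Y offset of the weld start point

def correct_or_abort(result: MatchResult) -> str:
    """Return the action the cell should take for one camera capture."""
    if not result.matched:
        # Wrong part loaded: stop welding, return home, disable robot, alarm.
        return "ABORT: return to origin, disable robot, alarm operator"
    # Model verified: forward the X/Y offsets to the PLC for distribution.
    return f"SEND_OFFSETS dx={result.dx_mm:.3f} dy={result.dy_mm:.3f}"
```
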
As a further embodiment of the present invention, the visual welding guidance process under the control of the visual control system comprises the following steps:
(200) the robot carries a camera and a light source to reach a photographing position after the robot teaching, and the position is obtained during the robot teaching before welding;
(210) when the robot reaches the photographing point, the laser displacement sensor is triggered to acquire the height between the current photographing point and the surface of the workpiece to be welded; the sensor transmits the analog signal of the current height value to the PLC, and the PLC converts the acquired value into an actual height value by analog-to-digital conversion;
(220) the PLC controller compares the collected height of the laser displacement sensor with a taught height value after calculation, wherein the calculation formula is as follows:
deviation value ΔH = acquired value H − calibrated value H′    (1)
(230) Calculating the height deviation from the current laser displacement sensor to the workpiece according to a formula (1), and calling a robot height compensation program if the height deviation is greater than a threshold value;
(240) after the current photographing heights are consistent, the PLC triggers the light source to be started, the camera photographs, and the current photographed image is uploaded to the designated position of the industrial personal computer after the photographing;
(250) after the image capture device finishes image capture, the image is uploaded to an image processing position of the industrial personal computer, the image processing software finds out a welding line needing to be guided according to the shot image, then data such as an angle and an initial point coordinate of the welding line needing to be guided are calculated through a visual algorithm, the calculated value is transmitted to the PLC, and then the PLC transmits the value to the robot to guide welding of the welding line.
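The height check of steps (220)–(230) amounts to evaluating Eq. (1) and comparing the result against a threshold. A minimal sketch; the 0.5 mm threshold is an assumed value, since the patent does not state one:

```python
def height_deviation(acquired_mm: float, calibrated_mm: float) -> float:
    """Eq. (1): deviation H = acquired value H - calibrated value H'."""
    return acquired_mm - calibrated_mm

def needs_z_compensation(acquired_mm: float, calibrated_mm: float,
                         threshold_mm: float = 0.5) -> bool:
    """Step (230): call the robot height-compensation program only when
    the deviation magnitude exceeds the threshold (0.5 mm is assumed)."""
    return abs(height_deviation(acquired_mm, calibrated_mm)) > threshold_mm
```
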
As a further embodiment of the present invention, the vision algorithm comprises the steps of:
(1) conversion: firstly, shooting a flat plate or a calibration plate with a pattern with a fixed distance by a camera, and converting pixel coordinates into actual coordinate sizes by a calibration algorithm.
(2) Calibration: after a welded standard workpiece is placed on a clamp, a PatMax algorithm is adopted for an image photographed by a camera, a standard outline of a welding seam to be welded is searched out through the image photographed by the camera according to the PatMax algorithm, and an X coordinate, a Y coordinate or an origin coordinate of a welding starting point is obtained from the standard outline.
(3) And (3) calculating: when the guiding welding is formally implemented, the camera shoots an image to obtain an actual coordinate value of the welding seam starting point according to the algorithm, and the deviation of the actual coordinate value and the origin coordinate is the welding seam deviation required to guide the welding.
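Steps (1)–(3) above reduce to scaling pixel coordinates by a calibrated millimetre-per-pixel factor and differencing against the taught origin. A sketch under that assumption, with illustrative names (the patent does not give these helpers):

```python
def px_to_mm(px: float, scale_mm_per_px: float) -> float:
    """Step (1): convert a pixel coordinate to millimetres using the
    scale obtained by imaging a calibration plate of known pitch."""
    return px * scale_mm_per_px

def weld_start_deviation(actual_px: tuple, origin_mm: tuple,
                         scale_mm_per_px: float) -> tuple:
    """Step (3): deviation between the weld start found in the live
    image (pixels) and the taught origin coordinate (millimetres)."""
    ax = px_to_mm(actual_px[0], scale_mm_per_px)
    ay = px_to_mm(actual_px[1], scale_mm_per_px)
    return ax - origin_mm[0], ay - origin_mm[1]
```
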
As a further embodiment of the present invention, the robot calculates parameters such as the new welding coordinates and welding pose from the welding angle and coordinate position using the Euler angle formula.
As a further embodiment of the present invention, the method of calculating the euler angle formula includes the steps of:
(300) acquiring a current height deviation value of a current welding Z axis through a laser displacement sensor;
(310) after the image capturing device reaches a photographing position, photographing a real-time picture needing welding guidance at present, uploading the obtained image to vision processing software, and calculating the angle Z of the current welding line and related data of an X, Y axis by the vision processing software through an Euler angle formula algorithm;
(320) the robot calculates X, Y coordinates of the current welding point and current posture data;
wherein:
Or_x = Euler(X, teaching-point position data), obtaining the X angle of the teaching point;
Or_y = Euler(Y, teaching-point position data), obtaining the Y angle of the teaching point;
Or_z = Euler(Z, teaching-point position data), obtaining the Z angle of the teaching point.
(330) Assigning the current X, Y deviation value obtained by the camera to the welding parameters of the current robot X, Y;
the current welding position trans.x is the X value obtained from camera-image processing;
the current welding position trans.y is the Y value obtained from camera-image processing;
the current weld-seam angle Or_z = Or_z + camera-detected angle;
(340) the new welding attitude of the robot is obtained from the values assigned in step (330):
rot = OrientZYX(Or_z, Or_y, Or_x);
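The OrientZYX composition can be sketched as a Z–Y–X rotation-matrix product. Note that on a real ABB controller OrientZYX takes angles in degrees and returns a quaternion orientation, so this matrix form is an illustration of the underlying mathematics only:

```python
import math

def orient_zyx(rz: float, ry: float, rx: float):
    """Compose a 3x3 rotation matrix from Z-Y-X Euler angles (degrees),
    i.e. R = Rz(rz) @ Ry(ry) @ Rx(rx)."""
    a, b, c = map(math.radians, (rz, ry, rx))
    Rz = [[math.cos(a), -math.sin(a), 0], [math.sin(a), math.cos(a), 0], [0, 0, 1]]
    Ry = [[math.cos(b), 0, math.sin(b)], [0, 1, 0], [-math.sin(b), 0, math.cos(b)]]
    Rx = [[1, 0, 0], [0, math.cos(c), -math.sin(c)], [0, math.sin(c), math.cos(c)]]

    def matmul(A, B):  # plain 3x3 matrix product
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(matmul(Rz, Ry), Rx)
```
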
as a further embodiment of the present invention, a method for controlling a cooperative welding guidance by a vision control system includes the steps of:
(400) multiple robots move to positions taught in advance; when several robots arrive at positions requiring visual welding guidance, the PLC arbitrates and controls the vision systems on the robots. To maintain the production takt, the vision systems run on a first-arrived, first-served basis; when several robots arrive simultaneously, the photographing order is determined by robot number;
(410) after the robots reach the appointed photographing point, the PLC controls the photographing system to take photos, and after the photographing is finished, the images are sequentially transmitted to different positions of the vision processing system for analysis and processing according to the internal numbers of the cameras;
(420) the vision control system calculates the angle and the coordinate information of the image acquired by the camera on each robot through a vision calculation method, and then binds and packages the data information and the robot number of the corresponding camera;
(430) the vision control system transmits the bundled and packaged data to the PLC via TCP/IP; the PLC unpacks the received data, determines which robot the vision data package belongs to, and transmits the corresponding data to that robot's controller;
(440) and the welding robot receives the data of the current welding seam sent by the vision control system, calculates and obtains the coordinates and the posture of the current new welding point through the Euler angle orientation formula of the robot, and then accurately welds the welding seam through the new coordinates and the posture.
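The bind-and-route logic of steps (420)–(430) can be sketched with a fixed binary layout. The field layout here (one uint16 robot id plus three float32 values) is an assumption for illustration; the patent does not specify the wire format:

```python
import struct

# Hypothetical packet layout: robot_id, dx_mm, dy_mm, angle_deg (little-endian).
FMT = "<Hfff"

def bind_packet(robot_id: int, dx: float, dy: float, angle: float) -> bytes:
    """Vision side, step (420): bundle the seam data with the robot number."""
    return struct.pack(FMT, robot_id, dx, dy, angle)

def route_packet(packet: bytes):
    """PLC side, step (430): unpack and return (robot_id, payload)."""
    robot_id, dx, dy, angle = struct.unpack(FMT, packet)
    return robot_id, (dx, dy, angle)
```
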
Compared with the prior art, the invention has the beneficial effects that:
the invention can accurately position by depending on a tool clamp, can solve the problem that a welding workpiece deforms after being welded and clamped, improves the accuracy and consistency of welding, and improves the accurate positioning of a welding point by vision and a laser displacement sensor.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention without limiting the invention in which:
FIG. 1 is a schematic diagram of a hardware structure of a vision-guided multi-station robot welding deviation correcting device of the present invention;
FIG. 2 is a schematic flow structure diagram of a welding deviation rectifying method of a vision-guided multi-station robot according to the present invention;
FIG. 3 is a schematic view of a process of ensuring consistent shooting heights of cameras by using a laser displacement sensor according to the present invention;
FIG. 4 is a flowchart illustrating a robot calculating a new welding gesture according to the coordinate values processed by the vision system according to the present invention;
FIG. 5 is a schematic diagram of a data binding process for ensuring correctness of deviation correcting data of multiple robots in the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the objects and technical solutions of the present invention, and are not limitative of the objects and technical solutions of the present invention.
It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
As shown in fig. 1, a vision-guided multi-station robot welding deviation rectifying device includes:
the PLC S120 is used for controlling the working states of the robot controller S100, the laser displacement sensor S110 and the vision control system S130; the PLC S120 is the central control core part of the welding guide system, not only effectively controls the clamps such as the air cylinder and the like, but also can communicate with the robot through TCP/IP; the PLC S120 sends the welding seam coordinates of the detection points sent by the vision control system S130 to the robot, and meanwhile, the start and stop of the robot can be controlled; the PLC S120 is communicated with the vision control system S130 through UDP, and the vision control system S130 sends the welding seam coordinates acquired by the camera to the PLC S120; the PLC S120 can also control the light source controller through an I/O point signal, when the robot with the camera reaches a photographing position, the robot has an in-place signal to the PLC, and the PLC S120 controls the input of the light source controller through the in-place signal;
the robot controller S100 directly controls the robot's motion track and communicates with both the welding machine and the PLC S120; the PLC S120 controls welding parameters such as arc starting, wire feeding, and welding current, communicating point-to-point with the welding machine, and is connected to the robot controller S100, displacement sensor S110, light source controller S140, and vision control system S130 via ProfiNet, ProfiBus, CAN bus, Ethernet, DeviceNet, or TCP/IP; in this embodiment an ABB robot performs the welding, and for TCP/IP communication with the PLC the robot's 616-1 PC Interface option must be enabled; TCP/IP serves as the protocol for data exchange between the user program at the OSI Application Layer and the network Transport Layer, while IP (Internet Protocol) carries the exchange between the Transport Layer and the Data Link Layer; the PLC S120 starts and stops the robot over TCP/IP and sends the visual offset data to the robot tasks;
the displacement sensor S110 is used for detecting whether the welding point and the teaching point of the camera shooting unit S150 are positioned at the same height for image capture, and detecting local depression of the welding point and deviation of welding precision on the welding position; after the laser displacement sensor S110 finishes measuring data, the data are transmitted to a PLC controller analog input module through 4-20MA analog current, the PLC controller S120 converts the analog quantity into a readable digital quantity, and the PLC controller S120 judges a detection value to control the up-and-down offset of the Z axis of the robot;
the vision control system S130 controls the photographing operation of the image capturing device S150 by setting parameters of the image capturing device S150; the vision control system S130 is the key point of the invention, the vision control system S130 is composed of image processing software and data processing software, and can firstly distinguish photos uploaded by a camera to judge whether a welding point is a correct welding point, then obtain the coordinates of a starting point of the welding line needing to be welded currently from the image, and then send the detected data to the PLC S120 through a UDP protocol; the vision control system S130 may also directly control some parameters of the camera, such as adjusting the exposure of the camera, the triggering time, and other parameter values;
a light source controller S140 for switching the light source on or off and adjusting its brightness by changing the magnitude of the drive current; light sources are classified by illumination pattern into ring, strip, and panel light sources, and by type into infrared and visible light sources; in this embodiment the welded workpieces are aluminum, so a blue panel light source is selected; the light source controller S140 supports several communication modes with the host computer, including I/O control, serial communication, and network control, and in this embodiment the PLC switches the light source via I/O control;
the image capturing device S150 is used for shooting and capturing images through soft triggering, and the shot images are uploaded to an appointed image processing area of the upper computer through a TCP/IP protocol; wherein:
the PLC controller S120 is connected to the robot controller S100, the displacement sensor S110, the light source controller S140, and the vision control system S130, respectively.
In the above embodiment, the displacement sensor S110 is a non-contact laser displacement sensor.
In the above embodiments, the image capturing device S150 is an industrial camera.
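The 4–20 mA analog reading described for the displacement sensor S110 scales linearly to a height value in the PLC's analog input module. A sketch of that conversion, with an assumed 0–100 mm measuring span (the sensor's actual range is not given in the patent):

```python
def current_to_height(i_ma: float, h_min_mm: float = 0.0,
                      h_max_mm: float = 100.0) -> float:
    """Scale a 4-20 mA loop current to a height reading, as the PLC's
    A/D conversion would. The 0-100 mm span is an assumed range."""
    if not 4.0 <= i_ma <= 20.0:
        raise ValueError("current outside the 4-20 mA loop range")
    # 4 mA maps to the bottom of the span, 20 mA to the top.
    return h_min_mm + (i_ma - 4.0) / 16.0 * (h_max_mm - h_min_mm)
```
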
Example two
As shown in fig. 2, a visual-guided multi-station robot welding deviation rectifying method includes the following steps:
(10) welding preparation, namely loading and clamping a workpiece to be welded, wherein all robot positions are at original positions;
the working states of a robot controller S100, a laser displacement sensor S110 and a vision control system S130 are in a starting state;
the motion trail of the robot is directly controlled through the robot controller S100, and meanwhile, the robot is communicated with a welding machine and the PLC S120;
detecting whether the welding point and the teaching point of the camera unit S150 are at the same height through a displacement sensor S110 for image capture, and detecting local depression of the welding point and deviation of welding precision on the welding position;
setting parameters of the image capturing device S150 through the vision control system S130 to control the photographing operation of the image capturing device S150;
controlling the light source to be turned on or off through the light source controller S140, or adjusting the brightness or darkness of the light source, and controlling the brightness of the light source by changing the magnitude of the current;
soft triggering, photographing and image taking are carried out through an image taking device S150, and a shot picture is uploaded to an appointed image processing area of an upper computer through a TCP/IP protocol;
in this step, as can be understood by those skilled in the art, the re-welding of the workpiece to be welded in the state of being clamped by the fixture is only one initial positioning relative to the robot visual guidance welding, and simultaneously, the workpiece is ensured to be stable and reliable all the time in the welding process. The workpiece clamping system can be realized by using actuating elements such as an air cylinder, an electric cylinder and the like, and is controlled by a PLC control unit. All welding robots must be in the original position before welding and wait for the welding command to trigger.
(20) Deviation-correction execution: the operator selects the welding recipe and the model of the workpiece to be welded on the human-machine interface (HMI), the program runs automatically, and the HMI communicates with the PLC via TCP/IP or RS485 to realize direct human-machine interaction.
Generally, to make selecting the welding workpiece and recipe intuitive and simple, the invention uses a human-machine interface (HMI). The HMI stores programs and recipes and communicates directly with the PLC via TCP/IP or RS485, realizing direct human-machine interaction. After the model and recipe of the part to be welded are selected, the operator clicks the welding start button on the HMI and the robot begins welding the part.
(30) The robot controller S100 controls the robot to work, so that the robot first reaches a preset image capturing position and then triggers the laser displacement sensor S110 to act, the laser displacement sensor S110 reads a current image capturing height value of the camera, and the vision control system S130 sets parameters of the image capturing device S150 and controls the image capturing device S150 to take pictures.
Generally, the camera's image-capture height directly affects the field of view and the precision of the guided weld-seam coordinates: if the shooting height differs greatly from the height taught for the robot's camera, the camera's field of view changes, the precision of the detected weld-seam coordinates suffers, and guided welding fails.
In this embodiment, the height of the current image capturing position of the camera is kept consistent with the height of the image capturing position during the teaching of the camera through the laser displacement sensor S110, if the height is deviated, the laser displacement sensor sends deviation data to the PLC, and the PLC obtains data and converts an analog value into a height deviation value through a/D conversion and transmits the height deviation value to the robot controller S100.
The robot controller S100 determines a new photographing position through the height deviation acquired by the laser displacement sensor S110, and photographs after the robot moves to the new photographing position, so that the height value photographed by the current camera can be effectively and accurately guaranteed to be consistent with the initially set height value, and the photographed image is guaranteed to be more accurate.
(40) Program reading: after the camera finishes image capture, the vision system first judges whether the image shows the workpiece currently to be welded, completes the image analysis, and obtains the coordinate values in the X and Y directions and the angle value of the weld-seam starting point. The light source must be lit when the image is taken so that the captured image is clearer; after the image is obtained, the image processing unit processes it, judging whether the image is correct and extracting the coordinate information of the weld-seam position in the image.
(50) Deviation-correction judgment: the vision control system S130 judges whether the captured image matches the model of the selected weldment. If it does not match, the robot stops welding immediately, returns to the origin, its enable is switched off, and the equipment raises an alarm to notify the operator of incorrect loading; verifying the welded model by image-contour comparison prevents the serious failure of scrapping a weld because the wrong material was loaded. When the camera image matches the selected model, the vision control system S130 sends the calculated X and Y offset coordinates to the PLC S120 for distribution to each welding robot controller. Before welding, a visual image of each weldment model must be captured, and image-contour comparison determines whether the current image matches the model selected for welding.
As shown in fig. 3, the visual welding guidance process under the control of the visual control system S130 includes the following steps:
(200) After teaching, the robot carries the camera and light source to a photographing position obtained during the pre-weld teaching. During teaching, the position requiring welding guidance is first determined; the robot brings the camera to a suitable point above that position, the camera focal length and aperture are adjusted, the light source is switched on manually, and the camera is triggered to take a picture. The captured image must contain the whole area currently requiring welding guidance, and the taught setup must yield a clear image. After the camera is adjusted, the current reading of the laser displacement sensor is recorded and stored as the initial height value of the teaching point.
(210) When the robot reaches the photographing point, the laser displacement sensor S110 is triggered to acquire the height from the current photographing point to the surface of the workpiece to be weld-guided. The laser displacement sensor S110 transmits the analog signal of the current height value to the PLC S120, and the PLC S120 converts the value acquired by the laser displacement sensor S110 into an actual height value through analog-to-digital conversion;
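The analog-to-digital conversion in step (210) amounts to a linear scaling of the raw A/D counts into millimetres. A minimal sketch follows; the full-scale count of 27648 and the 50-150 mm measuring range are illustrative assumptions, not values given in the patent:

```python
def counts_to_height_mm(raw_counts, counts_full_scale=27648,
                        range_min_mm=50.0, range_max_mm=150.0):
    """Linearly scale the raw A/D value from the laser displacement
    sensor into an actual height in millimetres (step (210))."""
    raw = max(0, min(raw_counts, counts_full_scale))  # clamp to valid range
    frac = raw / counts_full_scale
    return range_min_mm + frac * (range_max_mm - range_min_mm)
```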
(220) The PLC S120 compares the height collected by the laser displacement sensor with the taught height value, using the following formula:
Deviation value H = acquired value H - calibrated value H'    (1)
(230) The height deviation from the current laser displacement sensor S110 to the workpiece is calculated with formula (1). If the deviation exceeds a threshold, the robot height-compensation program is called: the robot moves linearly along the Z axis by the deviation value, guaranteeing that the current camera photographing height matches the photographing height taught during calibration. This avoids inaccurate camera measurement data caused by image distortion when the photographing height is wrong.
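Steps (220) and (230) reduce to evaluating formula (1) and commanding a Z move when the deviation exceeds the threshold. A minimal sketch, with an assumed 0.5 mm threshold and hypothetical function names:

```python
def z_compensation(acquired_h_mm, calibrated_h_mm, threshold_mm=0.5):
    """Formula (1): deviation H = acquired value H - calibrated value H'.
    Returns the linear Z-axis move (mm) the robot should make before
    re-photographing, or 0.0 when the deviation is within tolerance."""
    deviation = acquired_h_mm - calibrated_h_mm
    if abs(deviation) > threshold_mm:
        # Move opposite to the deviation so the camera-to-workpiece
        # distance returns to the taught calibration height.
        return -deviation
    return 0.0
```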
(240) Once the photographing heights agree, the PLC S120 triggers the light source to turn on and the camera to photograph; the captured image is then uploaded to the designated location on the industrial personal computer;
(250) After the image capturing device S150 takes the image, it is uploaded to the image-processing location on the industrial personal computer. The image-processing software locates the weld to be guided in the captured image, the vision algorithm then computes data such as the weld angle and starting-point coordinates, the computed values are transmitted to the PLC, and the PLC forwards the data to the robot to guide the weld.
In the present invention, the vision algorithm comprises the following steps:
(1) Conversion: the camera first photographs a flat plate or a calibration plate bearing a pattern of fixed pitch, and a calibration algorithm converts pixel coordinates into actual coordinate sizes.
(2) Calibration: after a standard welded workpiece is placed on the fixture, the PatMax algorithm is applied to the image photographed by the camera; the standard contour of the weld to be welded is searched out from that image, and the X and Y coordinates of the welding starting point, taken as the origin coordinates, are obtained from the standard contour.
(3) Calculation: when guided welding is formally carried out, the camera captures an image and the actual coordinates of the weld starting point are obtained with the above algorithm; the deviation between the actual coordinates and the origin coordinates is the weld offset used to guide welding.
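The conversion and calculation steps of the vision algorithm can be illustrated with a pixel-to-millimetre scale factor and an offset computation. The actual contour search uses the PatMax algorithm; the simple linear calibration and all names below are illustrative assumptions:

```python
def pixel_scale_mm(known_pitch_mm, measured_pitch_px):
    """Step (1), conversion: mm-per-pixel factor obtained from a
    calibration plate whose pattern pitch is known."""
    return known_pitch_mm / measured_pitch_px

def weld_offset_mm(actual_start_px, origin_px, mm_per_px):
    """Step (3), calculation: deviation of the detected weld starting
    point from the calibrated origin, converted to millimetres."""
    dx = (actual_start_px[0] - origin_px[0]) * mm_per_px
    dy = (actual_start_px[1] - origin_px[1]) * mm_per_px
    return dx, dy
```

For example, a 10 mm pattern pitch spanning 100 pixels gives 0.1 mm per pixel, and a weld start detected 20 pixels from the origin corresponds to a 2 mm guide offset.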
As shown in fig. 4, the robot calculates the new welding coordinates, welding posture and related parameters from the weld angle and the coordinate-position deviation using the Euler angle formula. The method comprises the following steps:
(300) acquiring the current height deviation value of the welding Z axis through the laser displacement sensor S110;
(310) after the image capturing device S150 reaches the photographing position, it captures a real-time picture of the current welding guide; the picture is uploaded to the vision processing software, which calculates the current weld angle Z and the related X- and Y-axis data through the Euler angle formula algorithm;
(320) the robot calculates X, Y coordinates of the current welding point and current posture data;
wherein:
Or_x = Euler(X, teaching point position data) obtains the X angle of the teaching point;
Or_y = Euler(Y, teaching point position data) obtains the Y angle of the teaching point;
Or_z = Euler(Z, teaching point position data) obtains the Z angle of the teaching point.
(330) Assigning the current X and Y deviation values obtained by the camera to the current X and Y welding parameters of the robot:
Trans.X = X value after camera image processing (current welding position data);
Trans.Y = Y value after camera image processing (current welding position data);
Or_z = Or_z + camera-measured angle (current weld angle);
(340) obtaining the new welding attitude of the robot from the angles updated in step (330):
Rot = OrientZYX(Or_z, Or_y, Or_x);
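The OrientZYX call can be read as composing rotations about Z, then Y, then X. A sketch of the corresponding rotation matrix follows, assuming the common Rz·Ry·Rx convention; the patent does not spell out the exact convention used by the robot controller:

```python
import math

def orient_zyx(or_z_deg, or_y_deg, or_x_deg):
    """Rotation matrix for ZYX Euler angles, mirroring
    Rot = OrientZYX(Or_z, Or_y, Or_x): rotate about Z, then Y, then X."""
    rz, ry, rx = (math.radians(a) for a in (or_z_deg, or_y_deg, or_x_deg))
    cz, sz = math.cos(rz), math.sin(rz)
    cy, sy = math.cos(ry), math.sin(ry)
    cx, sx = math.cos(rx), math.sin(rx)
    # R = Rz @ Ry @ Rx, written out element by element
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]
```

With all three angles zero the result is the identity matrix; adding the camera-measured angle to Or_z before the call, as in step (330), rotates the welding posture about the tool Z axis by the measured weld-seam angle.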
in the invention, the method for controlling the cooperative welding guidance of the plurality of robots by the vision control system S130 comprises the following steps:
(400) Multiple robots move to the places taught in advance. When the robots arrive at the places requiring visual welding guidance, the PLC judges and controls the operation of the vision system on each robot. To preserve the production takt, the robot vision systems run in order of arrival: the system that arrives first runs first, and when several robots arrive at the same time the photographing order follows the robots' sequence numbers;
(410) after a robot reaches its designated photographing point, the PLC controls the photographing system to take the picture; after photographing, the images are transmitted in turn, according to the internal camera numbers, to different locations in the vision processing system for analysis and processing;
(420) the vision control system S130 calculates the angle and coordinate information from the image acquired by the camera on each robot using the vision algorithm, then binds and packages this data with the robot number of the corresponding camera;
(430) the vision control system S130 transmits the packaged data to the PLC S120 over the TCP/IP protocol; the PLC S120 unpacks the received data, determines which robot's vision acquisition the packet corresponds to, and forwards the data to the corresponding robot controller S100;
(440) the welding robot receives the current weld data sent by the vision control system S130, calculates the coordinates and posture of the new welding point through the robot's Euler-angle orientation formula, and then welds the seam accurately using the new coordinates and posture.
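The bind-package-unpack exchange of steps (420) and (430) can be sketched with a fixed binary layout. The wire format below (robot number as an unsigned 16-bit integer followed by three 32-bit floats, network byte order) is an illustrative assumption, not the protocol specified in the patent:

```python
import struct

# Hypothetical wire format: robot number (uint16) followed by the X offset,
# Y offset and seam angle (three float32 values), network byte order.
PACKET_FMT = "!Hfff"

def pack_guidance(robot_no, dx_mm, dy_mm, angle_deg):
    """Vision-system side (step 420): bind the computed correction data
    to the robot number of the camera that produced the image."""
    return struct.pack(PACKET_FMT, robot_no, dx_mm, dy_mm, angle_deg)

def unpack_guidance(payload):
    """PLC side (step 430): recover the robot number so the offsets can
    be routed to the matching robot controller."""
    return struct.unpack(PACKET_FMT, payload)
```

Because the robot number travels inside the packet, the PLC can demultiplex one TCP stream to any number of robot controllers without per-connection state.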
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the embodiments may still be modified, or some of their technical features replaced by equivalents, without departing from the spirit and scope of the invention. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within its protection scope.

Claims (10)

(50) deviation-rectifying judgment, namely judging through the vision control system (S130) whether the photographed image matches the model of the selected welding part; if it does not match, the robot immediately stops welding, returns to the origin and switches off its enable, and the whole equipment set raises an alarm to notify the operator of a wrong-feeding error; verifying the welded model through image-contour comparison prevents the serious loss of welded parts being scrapped due to wrong material; when the camera image matches the model selected for welding, the vision control system (S130) sends the calculated X and Y offset coordinates to the PLC controller (S120) for distribution to each welding robot controller.
CN202110139220.2A, priority date 2021-02-01, filing date 2021-02-01: Vision-guided multi-station robot welding deviation correcting device and method, published as CN112775975A (en), status: Pending

Priority Applications (1)

CN202110139220.2A, priority date 2021-02-01, filing date 2021-02-01: Vision-guided multi-station robot welding deviation correcting device and method


Publications (1)

CN112775975A (en), published 2021-05-11

Family

Family ID: 75760372

Family Applications (1)

CN202110139220.2A (Pending, published as CN112775975A (en)), priority date 2021-02-01, filing date 2021-02-01: Vision-guided multi-station robot welding deviation correcting device and method

Country Status (1)

CN: CN112775975A (en)

Cited By (8)

* Cited by examiner, † Cited by third party

- CN114309934A (en)*, 北京航星机器制造有限公司, priority 2021-12-29, published 2022-04-12: A method for automatic laser welding of frame skin box-like structures
- CN114500828A (en)*, priority 2021-12-24, published 2022-05-13: Position latching-based high-precision flight shooting positioning method for Mark point of dispenser
- CN114545862A (en)*, 浪潮电子信息产业股份有限公司, priority 2022-02-25, published 2022-05-27: A method, device, equipment and storage medium for determining the operation point of a fixture
- CN114841959A (en)*, priority 2022-05-05, published 2022-08-02: Automatic welding method and system based on computer vision
- CN115533394A (en)*, priority 2022-12-05, published 2022-12-30: Control method of welding apparatus, and storage medium
- CN115542866A (en)*, priority 2022-11-28, published 2022-12-30: Welding production monitoring method and system based on industrial internet full-connection management
- CN116237932A (en)*, priority 2022-12-30, published 2023-06-09: Robot motion control system, method and storage medium based on edge controller
- CN117400260A (en)*, priority 2023-11-29, published 2024-01-16: Multi-station robot control system, control method and multi-station robot



Legal Events

- PB01: Publication
- WD01: Invention patent application deemed withdrawn after publication (application publication date: 2021-05-11)

