Carrying system, control method thereof and floor tile paving system

Technical Field
The invention relates to the field of robots, in particular to a carrying system, a control method of the carrying system and a floor tile paving system.
Background
Tiles are widely used in the construction field. Tiling work is currently done mostly by hand; manual measurement is often inaccurate and construction progress is slow, so tiling robots have gradually been introduced into the field.
In the current tile-paving process, a tiling robot grabs floor tiles from a tile-transport AGV (Automated Guided Vehicle), carries them to the paving position, aligns them with the reference tiles, and then lays them. However, AGV positioning accuracy is limited (an error of about 20 mm), while tile paving imposes strict accuracy requirements, so this method struggles to meet them.
No effective solution has yet been proposed for the problem that prior-art tiling robots cannot meet the accuracy requirements of tiling.
Disclosure of Invention
The embodiments of the invention provide a carrying system, a control method thereof, and a floor tile paving system, which at least solve the technical problem that prior-art tiling robots cannot meet the accuracy requirements of tiling.
According to an aspect of an embodiment of the present invention, there is provided a control method of a carrying system, comprising the following steps: the robot predicts a grabbing position according to a first image acquired by a first image acquisition device in a grabbing area, and predicts a placing position according to a second image acquired by the first image acquisition device in a first placing area; the robot controls a second image acquisition device to acquire a third image of a second placement area, and controls a third image acquisition device to acquire a fourth image of a third placement area, wherein the second placement area and the third placement area contain two different feature positions of the placed object; the robot acquires a first offset determined according to a first feature position in the third image and a second feature position in the fourth image, wherein the first offset represents the deviation between the actual placement position and the predicted placement position; and the robot adjusts the predicted placement position according to the first offset.
Further, acquiring a first image acquired by the first image acquisition device in the grabbing area and a second image acquired by the first image acquisition device in the first placing area; acquiring a second offset determined according to a third feature position in the first image and a preset grabbing feature position, and acquiring a third offset according to a fourth feature position in the second image and a preset placing feature position; and acquiring a grabbing position predicted according to the grabbing feature position and the second offset, and acquiring a placing position predicted according to a placing reference position and the third offset, wherein the placing reference position is acquired according to the placing feature position.
Further, after the robot predicts a grabbing position according to a first image acquired by the first image acquisition device in a grabbing area and predicts a placing position according to a second image acquired by the first image acquisition device in a first placing area, the robot controls the mechanical arm to move to the predicted grabbing position to grab the object to be conveyed, and then controls the mechanical arm, carrying the object, to move to the predicted placing position.
Further, the robot compares the first offset with a preset value; if the first offset is greater than the preset value, the first offset is added to the third offset to obtain a fourth offset, and the predicted placement position is adjusted according to the fourth offset.
Further, the robot controls the mechanical arm to move to the grabbing area and sends a first image acquisition instruction to the first image acquisition device, wherein the first image acquisition device acquires a first image according to the first image acquisition instruction; and the robot controls the mechanical arm to move to the first placing area and sends a second image acquisition instruction to the first image acquisition device, wherein the first image acquisition device acquires a second image according to the second image acquisition instruction.
Further, before acquiring the first image acquired by the first image acquisition device in the capture area and the second image acquired by the first image acquisition device in the first placement area, acquiring a capture feature position and a placement feature position, wherein the acquiring the capture feature position and the placement feature position comprises: controlling the mechanical arm to move to a preset initial grabbing area, and acquiring an initial first image through a first image acquisition device; extracting a fifth characteristic position in the initial first image to obtain a capture characteristic position; controlling the mechanical arm to move to a preset initial placement area, and acquiring an initial second image through a first image acquisition device; and extracting a sixth characteristic position in the initial second image to obtain a placement characteristic position.
Further, the object to be carried is a tile to be tiled, the first characteristic position is a position where a tile corner of the tile to be tiled is located in the first image, and the second characteristic position is a position where a tile corner of the tile already tiled is located in the second image.
According to an aspect of an embodiment of the present invention, there is provided a carrying system including: the first image acquisition device is used for acquiring a first image in the grabbing area, acquiring a second image in the first placing area, predicting the grabbing position according to the first image and predicting the placing position according to the second image; the second image acquisition device is used for acquiring a third image of the second placement area and determining a first characteristic position in the third image; the third image acquisition device is used for acquiring a fourth image of a third placement area and determining a second characteristic position in the fourth image, wherein the second placement area and the third placement area comprise two different characteristic positions of the placed object; and the robot is used for acquiring a first offset determined according to the first characteristic position and the second characteristic position and adjusting the predicted placement position according to the first offset.
Further, the first image acquisition device is further configured to determine a second offset according to a third feature position in the first image and a preset capture feature position, determine a third offset according to a fourth feature position in the second image and a preset placement feature position, predict a capture position according to the capture feature position and the second offset, and predict a placement position according to a placement reference position and the third offset, where the placement reference position is obtained according to the placement feature position.
Further, the robot also comprises a mechanical arm, and the robot is configured to control the mechanical arm to move to the predicted grabbing position to grab the object to be carried, and to control the mechanical arm, carrying the object, to move to the predicted placing position.
Further, the robot also comprises a mechanical arm, and the first image acquisition device is arranged at the tail end of the mechanical arm.
Further, the robot includes a support rod, and the second image acquisition device and the third image acquisition device are disposed at the end of the support rod.
Further, the object to be carried is a tile to be tiled, the first characteristic position is a position where a tile corner of the tile to be tiled is located in the first image, and the second characteristic position is a position where a tile corner of the tile already tiled is located in the second image.
Further, the robot is further configured to compare the first offset with a preset value; if the first offset is greater than the preset value, the robot adds the first offset to the third offset to obtain a fourth offset and adjusts the predicted placement position according to the fourth offset.
According to an aspect of an embodiment of the present invention, there is provided a tile paving system comprising the handling system described above.
According to an aspect of an embodiment of the present invention, there is provided a control method of a carrying system, comprising: predicting a grabbing position according to a first image acquired from a grabbing area, and predicting a placing position according to a second image acquired from a first placing area; acquiring a third image of a second placement area and a fourth image of a third placement area, wherein the second placement area and the third placement area contain two different feature positions of a placed object; acquiring a first offset determined according to a first feature position in the third image and a second feature position in the fourth image, wherein the first offset represents the deviation between the actual placement position and the predicted placement position; and adjusting the predicted placement position according to the first offset.
In the embodiment of the invention, the grabbing position is predicted according to a first image acquired by a first image acquisition device in a grabbing area, and the placing position is predicted according to a second image acquired by the first image acquisition device in a first placing area; controlling a second image acquisition device to acquire a third image of a second placement area, and controlling a third image acquisition device to acquire a fourth image of the third placement area, wherein the second placement area and the third placement area comprise two different characteristic positions of a placed object; acquiring a first offset determined according to a first characteristic position in the third image and a second characteristic position in the fourth image, wherein the first offset is used for representing the deviation between an actual placement position and a predicted placement position; and adjusting the predicted placement position according to the first offset. According to the scheme, the grabbing position and the placing position are predicted based on the first image acquisition device, and the predicted placing position is corrected based on the second image acquisition device and the third image acquisition device, so that the purpose of improving the paving precision is achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flowchart of a control method of a handling system according to an embodiment of the present application;
FIG. 2 is a schematic illustration of a second placement area and a third placement area in accordance with an embodiment of the present application;
FIG. 3 is a schematic illustration of the operation of a handling system according to an embodiment of the present application;
FIG. 4 is a control flow diagram of a handling system according to an embodiment of the present application;
FIG. 5 is a schematic view of a handling system according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating calibration of an image capture device according to an embodiment of the present application;
FIG. 7 is a frame diagram of a handling system according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a data communication module of an image capture device according to an embodiment of the present disclosure; and
fig. 9 is a flowchart of a control method of a further conveyance system according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided an embodiment of a method of controlling a handling system, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that presented herein.
Fig. 1 is a flowchart of a control method of a handling system according to an embodiment of the present application, the handling system including: a first image capturing device, a second image capturing device, a third image capturing device and a robot, as shown in fig. 1, the method comprising the steps of:
Step S102: the robot predicts a grabbing position according to a first image acquired by the first image acquisition device in the grabbing area, and predicts a placing position according to a second image acquired by the first image acquisition device in the first placing area.
Specifically, all of the image acquisition devices may be smart cameras, and the first image acquisition device may be a smart camera disposed at the end of the mechanical arm. The robot may include a support rod, with the second and third image acquisition devices disposed at its end.
In an alternative embodiment, taking tile paving by the carrying system as an example, the robot may control the mechanical arm to move to the placing area, where the first image acquisition device at the end of the arm acquires the second image, calculates the offset between the feature point of the placed reference tile in the second image and the placement feature position, and transmits the offset to the controller, which can then predict the placing position from it. The robot then controls the mechanical arm to move to the grabbing area, where the first image acquisition device acquires the first image, calculates the offset between the feature point of the floor tile to be carried and the grabbing feature position, and transmits it to the controller, which determines the grabbing position accordingly.
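The prediction step above can be sketched as follows. All coordinates, the taught feature positions, and the 2 mm joint offset are placeholder values for illustration, not taken from the patent; a real system would work in calibrated camera and robot frames.

```python
# Taught reference positions (assumed placeholder values, in mm) recorded
# during the teaching process described later in the text.
GRAB_FEATURE = (120.0, 45.0)    # taught grabbing feature position
PLACE_FEATURE = (300.0, 210.0)  # taught placement feature position
TILE_GAP = (2.0, 0.0)           # nominal 2 mm joint -> placement reference offset

def add(p, q):
    """Component-wise sum of two 2D points/offsets."""
    return (p[0] + q[0], p[1] + q[1])

def sub(p, q):
    """Component-wise difference: offset from q to p."""
    return (p[0] - q[0], p[1] - q[1])

def predict_position(feature_in_image, taught_feature, base_position):
    """Offset = detected feature - taught feature; predicted = base + offset."""
    return add(base_position, sub(feature_in_image, taught_feature))

# Second offset / grabbing position (the base is the taught grabbing feature)
detected_grab = (121.5, 44.2)
grab_position = predict_position(detected_grab, GRAB_FEATURE, GRAB_FEATURE)

# Third offset / placing position (the base is the placement reference position,
# derived from the placement feature position plus the joint width)
place_reference = add(PLACE_FEATURE, TILE_GAP)
detected_place = (298.8, 210.6)
place_position = predict_position(detected_place, PLACE_FEATURE, place_reference)
```

The same `predict_position` helper serves both branches because, in both cases, the controller shifts a taught base position by the deviation that the camera observed.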
Step S104: the robot controls the second image acquisition device to acquire a third image of the second placement area, and controls the third image acquisition device to acquire a fourth image of the third placement area, wherein the second placement area and the third placement area contain two different feature positions of the placed object.
Specifically, the robot may be provided with a supporting rod, and the second image capturing device and the third image capturing device may be disposed at the end of the supporting rod, and are configured to capture images of different placement areas, that is, the third image and the fourth image. The two different feature positions of the placed object are used to represent the coordinates of at least two feature points of the placed object, for example, in the case of the placed object being a tiled tile, the two different feature positions may be two corners of the tiled tile.
The second placement area is different from the third placement area, but the first placement area may be the same as the second placement area or the same as the third placement area.
Fig. 2 is a schematic diagram of the second placement area and the third placement area according to an embodiment of the present application; referring to fig. 2, the reference tile is an already-laid tile. After the placing position is predicted from the first image and the second image collected by the first image acquisition device, the mechanical arm moves the tile to be paved to a position directly above the predicted placing position (fig. 2 is a top view; the tile to be paved and the reference tile are not in the same plane). At this point, the second image acquisition device captures an image of photographing area 2 (the second placement area) and the third image acquisition device captures an image of photographing area 3 (the third placement area), achieving dual-view observation of the placement area.
It should be noted that, for a large tile, the first image acquisition device may have a limited field of view, which can reduce accuracy (for example, one corner of the tile is aligned but the other corners are not). The second and third image acquisition devices therefore measure placement areas with different feature positions, so that two corners on adjacent sides of the tile can both be aligned, improving paving accuracy.
In step S106, the robot obtains a first offset determined according to the first feature position in the third image and the second feature position in the fourth image, where the first offset is used to indicate a deviation between the actual placement position and the predicted placement position.
Specifically, the first offset amount may be an offset amount of the first feature position from a feature position in the second image, and/or an offset amount of the second feature position from a feature position in the second image, so that a deviation between an actual placement position and a predicted placement position can be represented.
In an alternative embodiment, a fourth feature position is included in the second image, and the robot predicts the placement position based on a deviation of the fourth feature position from the reference placement position. After the second image acquisition device and the third image acquisition device acquire the third image and the fourth image, the first characteristic position and the second characteristic position extracted from the third image and the fourth image are sent to the controller, and the controller can calculate to obtain the first offset according to the fourth characteristic position, the first characteristic position and the second characteristic position included in the second image.
The first offset is described in more detail below. Taking fig. 2 as an example, when the first placement area is photographing area 2, the fourth feature position in the second image is the coordinate P1_ref(1) of the tile corner of the already-laid tile in photographing area 2, and the coordinate P2_ref(1) of the corner of the reference tile in photographing area 3 can be calculated from that coordinate and the tile size.
The third image may be captured by the second image acquisition device in photographing area 2, with the first feature position being the tile-corner coordinate P1_ref(2) of the reference tile in photographing area 2; the fourth image may be captured by the third image acquisition device in photographing area 3, with the second feature position being the tile-corner coordinate P2_ref(2) of the reference tile in photographing area 3. The host computer may then calculate the first offset from P1_ref(1), P2_ref(1), P1_ref(2), and P2_ref(2); for example, the first offset P = (A, B), where A = P1_ref(2) − P1_ref(1) and B = P2_ref(2) − P2_ref(1).
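The first offset P = (A, B) can be computed directly from the four corner coordinates. The coordinate values below are illustrative placeholders (a 600 mm tile in a shared robot frame is assumed):

```python
# Hypothetical tile-corner coordinates in a common robot frame (mm).
# P1_ref(1)/P2_ref(1): corners expected from the second image and the tile size;
# P1_ref(2)/P2_ref(2): corners measured in the third and fourth images.
p1_ref_1 = (0.0, 0.0)
p2_ref_1 = (600.0, 0.0)    # p1_ref_1 shifted by the assumed 600 mm tile size
p1_ref_2 = (0.4, -0.3)
p2_ref_2 = (600.1, 0.2)

def first_offset(p1_expected, p2_expected, p1_measured, p2_measured):
    """P = (A, B): A = P1_ref(2) - P1_ref(1), B = P2_ref(2) - P2_ref(1)."""
    a = tuple(m - e for m, e in zip(p1_measured, p1_expected))
    b = tuple(m - e for m, e in zip(p2_measured, p2_expected))
    return a, b

A, B = first_offset(p1_ref_1, p2_ref_1, p1_ref_2, p2_ref_2)
```

A and B each describe the deviation at one corner, which is why correcting with both aligns the two adjacent-side corners discussed above.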
Step S108: the robot adjusts the predicted placing position according to the first offset.
Since the first offset represents the deviation between the actual placement position and the predicted placement position, the robot can adjust the predicted placement position according to the first offset, obtaining the final placement position. A specific adjustment is to add the first offset to the predicted placement position.
Fig. 3 is a schematic diagram of the operation of a carrying system according to an embodiment of the present application. Referring to fig. 3, the first image acquisition device captures the first image in photographing area 1 and the second image in photographing area 2, and the grabbing position and the placing position are predicted from them. The robot moves to the raw-material warehouse according to the grabbing position, grabs a paving tile with the suction cup, and moves it to the area to be paved, i.e., the predicted placing position. The second image acquisition device captures the third image in photographing area 2 and determines the first feature position of the reference tile in it, and the third image acquisition device captures the fourth image in photographing area 3 and determines the second feature position of the reference tile in it. The two devices upload the first and second feature positions to the host computer, which calculates their offset (i.e., the first offset), and the placing position is adjusted accordingly. It should be noted that the gap between the reference tile and the adjacent tile in the area to be paved should be 2 ± 0.5 mm.
It should be noted that, since the first image capturing device only captures one feature point of the placement area, there may be a deviation in the predicted placement position, and in the process of capturing the object to be carried by the robot arm and moving the object to be carried to the predicted placement position, a certain error may be introduced due to the actions of the robot arm such as the suction cup capturing, and therefore, it is difficult to accurately lay the floor tile. The predicted placing position is corrected by adopting the third image and the fourth image acquired by the second image acquisition device and the third image acquisition device, so that errors caused by various reasons are eliminated, and the paving precision is improved.
Therefore, in the above embodiments of the present application, the capturing position is predicted according to the first image acquired by the first image acquisition device in the capturing area, and the placing position is predicted according to the second image acquired by the first image acquisition device in the first placing area; controlling a second image acquisition device to acquire a third image of a second placement area, and controlling a third image acquisition device to acquire a fourth image of the third placement area, wherein the second placement area and the third placement area comprise two different characteristic positions of a placed object; acquiring a first offset determined according to a first characteristic position in the third image and a second characteristic position in the fourth image, wherein the first offset is used for representing the deviation between an actual placement position and a predicted placement position; and adjusting the predicted placement position according to the first offset. According to the scheme, the grabbing position and the placing position are predicted based on the first image acquisition device, and the predicted placing position is corrected based on the second image acquisition device and the third image acquisition device, so that the purpose of improving the paving precision is achieved.
As an alternative embodiment, the robot predicts the grasping position based on a first image captured by the first image capturing device in the grasping area and predicts the placing position based on a second image captured by the first image capturing device in the first placing area, including: acquiring a first image acquired by a first image acquisition device in a grabbing area and a second image acquired by the first image acquisition device in a first placing area; acquiring a second offset determined according to a third feature position in the first image and a preset grabbing feature position, and acquiring a third offset according to a fourth feature position in the second image and a preset placing feature position; and acquiring a grabbing position predicted according to the grabbing feature position and the second offset, and acquiring a placing position predicted according to a placing reference position and the third offset, wherein the placing reference position is acquired according to the placing feature position.
Specifically, the placement feature position and the capture feature position may be positions determined in the teaching process, or positions designated by a person. Taking teaching as an example, in the teaching process, the movement of the robot and the movement of the mechanical arm are manually controlled, so that the carrying process of an object is completed, and the placement characteristic position and the grabbing characteristic position are determined in the process. The above-mentioned placement reference position may be a position aligned therewith on the basis of the placement feature position and having a prescribed distance (e.g., 2 mm).
The third characteristic position may be a position of a characteristic point of the object to be conveyed in the first image, and the characteristic point may be a center point of the object to be conveyed in the first image or a position of a corner of the object to be conveyed in the first image. The fourth characteristic position is the same as the first characteristic position, and is not described herein again.
The second offset may be a difference between the third feature position and a preset capture feature position, and the third offset may be a difference between the fourth feature position and a placement feature position.
As shown in fig. 2, the object to be carried may be a floor tile; in the figure, the tile-picking photographing area is the current grabbing area, and the tile-paving photographing area is the current placing area.
In an alternative embodiment, the second offset and the grabbing feature position may be transformed into the same coordinate system, and the second offset then added to the grabbing feature position to obtain the grabbing position. Similarly, the third offset and the placement reference position may be transformed into the same coordinate system, and the third offset added to the placement reference position to obtain the placing position.
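The coordinate-system alignment mentioned above can be sketched as a planar rotation. The 90° camera-to-robot angle, the feature position, and the offset below are hypothetical calibration values:

```python
import math

def to_robot_frame(offset_cam, theta_deg):
    """Rotate a camera-frame 2D offset into the robot base frame
    (theta_deg is a hypothetical calibration angle between the frames)."""
    t = math.radians(theta_deg)
    x, y = offset_cam
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

grab_feature = (120.0, 45.0)           # taught grabbing feature position (mm)
offset_cam = (1.0, 0.0)                # second offset measured in the camera frame
offset_robot = to_robot_frame(offset_cam, 90.0)  # camera rotated 90 deg w.r.t. robot
grab_position = (grab_feature[0] + offset_robot[0],
                 grab_feature[1] + offset_robot[1])
```

Only after the offset is expressed in the robot frame does adding it to the taught feature position yield a meaningful grabbing position; the same transform would apply to the third offset and the placement reference position.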
As an alternative embodiment, after the robot predicts the grabbing position according to the first image acquired by the first image acquisition device in the grabbing area and predicts the placing position according to the second image acquired by the first image acquisition device in the first placing area, the method further comprises: the robot controls the mechanical arm to move to the predicted grabbing position to grab the object to be carried, and then controls the mechanical arm, carrying the object, to move to the predicted placing position.
In the above steps, after the grabbing position and the placing position are obtained through prediction, the mechanical arm can be controlled to move to the grabbing position to grab the object to be transported, and then the mechanical arm is controlled to move to the placing position, so that the object to be transported is moved to the placing position.
It should be noted that, in the above steps, after the mechanical arm carries the object to the predicted placing position, the suction cup is not released to place the object immediately; the placing position must first be adjusted based on the images from the second and third image acquisition devices.
As an alternative embodiment, the robot adjusting the predicted placement position according to the first offset comprises: the robot compares the first offset with a preset value; if the first offset is greater than the preset value, the first offset is added to the third offset to obtain a fourth offset, and the predicted placement position is adjusted according to the fourth offset.
Specifically, the preset value may be 0.5 mm. If the first offset is greater than the preset value, the deviation is considered large and must be corrected, so the first offset is added to the third offset to obtain the fourth offset, and the placing position is predicted again from the fourth offset and the placement reference position. If the first offset is less than or equal to the preset value, the deviation is small and no adjustment is needed.
After the updated placing position is obtained, the second and third image acquisition devices can be controlled to continue capturing images of the second and third placement areas, and the above steps are repeated until the offset determined from the newly captured images is smaller than the preset value.
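The repeated correction loop can be sketched as follows. The measurement function here is a toy stand-in for the camera pipeline of steps S104–S106 (it returns the remaining deviation directly), and only the 0.5 mm preset value follows the text above:

```python
PRESET = 0.5  # mm; preset value from the text above

def correct_placement(predicted, measure_offset, max_iters=10):
    """Re-measure and shift the predicted position until the measured
    first offset falls below the preset value (or max_iters is reached)."""
    position = list(predicted)
    for _ in range(max_iters):
        off = measure_offset(position)
        if max(abs(o) for o in off) <= PRESET:
            break  # deviation small enough; no further adjustment needed
        position = [p + o for p, o in zip(position, off)]
    return tuple(position)

# Toy measurement: assume the true aligned position is (100.0, 50.0) and the
# cameras report the remaining deviation from it.
TRUE = (100.0, 50.0)
measure = lambda pos: tuple(t - p for t, p in zip(TRUE, pos))
final = correct_placement((98.0, 52.0), measure)
```

Because the loop keeps re-measuring, it can also absorb deviations introduced after placement, such as those from the vibrating process mentioned below.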
Since the above adjustment process can be performed repeatedly, errors in tile deviation introduced by subsequent processes (e.g., the vibrating process) can also be overcome.
Fig. 4 is a control flow chart of a handling system according to an embodiment of the present application. Referring to figs. 3 and 4, the process starts by setting the user coordinate system to world and the tool coordinate system to flange. In this embodiment, the first placement area and the second placement area are both photographing area 2, the third placement area is photographing area 3, the first image acquisition device is a hand-eye camera disposed at the end of the mechanical arm, and the second and third image acquisition devices are fixed cameras.
First, initialization is performed, and the values of the registers used by the program, for example R[1], R[2], R[3], PR[1], PR[4], PR[5], PR[6] and PR[7], are set to 0, where:
R[1] is used to indicate whether image acquisition is being performed in photographing area 1;
R[2] is used to indicate whether image acquisition is being performed in photographing area 2;
R[3] is used to indicate whether the tile has been moved to the alignment position corresponding to the placement position;
PR[1] is used to record the offset information obtained by each calculation; the offset information includes a translation amount and an angle value;
PR[4] is used to record the translation amount of the offset in PR[1] determined from photographing areas 2 and 3;
PR[5] is used to record the angle value of the offset in PR[1] determined from photographing areas 2 and 3;
PR[6] is used to record the translation amount of the offset in PR[1] determined from photographing area 1;
PR[7] is used to record the angle value of the offset in PR[1] determined from photographing area 1.
The mechanical arm moves to the first placement area (photographing area 2) and R[2] is set to 1. After the PC controls the hand-eye camera to take a picture and calculates the offset, the value of R[2] is set to 0, and the offset information can then be acquired from PR[1]. Since the translation offset is expressed in the user coordinate system and the rotation offset in the tool coordinate system, the translation and rotation data of PR[1] are placed into the two position registers PR[4] and PR[5], respectively. The mechanical arm then moves to the grabbing area (photographing area 1), R[1] is set to 1, and the hand-eye camera is controlled to photograph. After the photographing and offset calculation are completed, R[1] is set to 0 and, similarly, the translation and rotation data in PR[1] are stored in PR[6] and PR[7], respectively.
The mechanical arm moves to the grabbing position to grab the floor tile; specifically, the grabbing position can be obtained by adding the offsets in PR[6] and PR[7] to the grabbing feature position determined during teaching. After the arm moves into place, the suction cup's air pressure is switched on to pick up the tile.
The suction cup grabs the tile and moves to the paving alignment position (directly above the placement position); specifically, the alignment position can be obtained by adding the offsets in PR[4] and PR[5] to the placement base position determined during teaching. After the move, R[3] is set to 1 and a photographing instruction is sent to the fixed dual cameras through the PC. After the two fixed cameras acquire their images, locate the features, and calculate the offset, R[3] is set back to 0.
The offset values in PR[1] are then examined: if they are all 0, the deviation is smaller than the set value (translation < 0.3mm, angle < 0.1 degree) and the paving alignment process is finished. Otherwise, the values in PR[1] are added to PR[4] and PR[5], and the arm moves to the updated paving alignment position, i.e., the placement base position determined during teaching plus the updated offsets in PR[4] and PR[5]. R[3] is set to 1, a photographing instruction is sent to the fixed dual cameras again, and the adjustment process is repeated until the offsets in PR[1] are 0. When the alignment of the two edges of the adjacent floor tiles is finished, the mechanical arm moves to the tiling position, directly below the alignment position. The suction cup is released, the floor tile is put down, and the paving process is finished.
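The register handshake of Fig. 4 can be illustrated with a small simulation. Offsets are reduced to 1-D translations, the PC side is stubbed out, and all helper names and numeric values are assumptions; only the register names R[...] and PR[...] follow the text.

```python
R = {1: 0, 2: 0, 3: 0}         # numeric flag registers
PR = {1: 0.0, 4: 0.0, 6: 0.0}  # position registers (translation only)

def photograph(flag, measured):
    """Raise the request flag, let the simulated PC write the computed
    offset into PR[1], then clear the flag to signal completion."""
    R[flag] = 1
    PR[1] = measured                    # stands in for photo + offset calc
    R[flag] = 0
    return PR[1]

GRAB_FEATURE = 200.0                    # taught grabbing feature position
PLACE_BASE = 500.0                      # taught placement base position

PR[4] = photograph(2, measured=1.5)     # placement-area photo -> PR[4]
PR[6] = photograph(1, measured=-0.8)    # grabbing-area photo -> PR[6]
grab_pos = GRAB_FEATURE + PR[6]         # grab position = taught + PR[6]
align_pos = PLACE_BASE + PR[4]          # alignment position = base + PR[4]

residual = photograph(3, measured=0.2)  # fixed dual-camera check
while residual != 0:                    # repeat until PR[1] is 0
    PR[4] += residual
    align_pos = PLACE_BASE + PR[4]
    residual = photograph(3, measured=0.0)
```

In the simulated run the grab position ends up at the taught feature position plus PR[6], and the alignment position absorbs both the initial PR[4] offset and the dual-camera residual.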
As an alternative embodiment, acquiring a first image acquired by a first image acquisition device in a grabbing area and a second image acquired by the first image acquisition device in a first placement area includes: the robot controls the mechanical arm to move to the grabbing area and sends a first image acquisition instruction to the first image acquisition device, wherein the first image acquisition device acquires the first image according to the first image acquisition instruction; and the robot controls the mechanical arm to move to the first placement area and sends a second image acquisition instruction to the first image acquisition device, wherein the first image acquisition device acquires the second image according to the second image acquisition instruction.
Fig. 2 is a schematic diagram of the operation of a carrying system according to an embodiment of the present application. With reference to fig. 2, the first image acquisition device may be driven by the robot's mechanical arm to move to the grabbing area and the placement area, respectively, and a designated signal is sent to the controller after each move (for example, the value of a designated location in a register of the controller may be modified). On receiving the designated signal, the controller sends an image acquisition instruction to the first image acquisition device, which captures the first image or the second image upon receiving the instruction.
As an alternative embodiment, before acquiring the first image acquired by the first image acquisition device in the grabbing area and the second image acquired by the first image acquisition device in the first placement area, the method further comprises acquiring a grabbing feature position and a placing feature position, which includes: controlling the mechanical arm to move to a preset initial grabbing area and acquiring an initial first image through the first image acquisition device; extracting a fifth feature position in the initial first image to obtain the grabbing feature position; controlling the mechanical arm to move to a preset initial placement area and acquiring an initial second image through the first image acquisition device; and extracting a sixth feature position in the initial second image to obtain the placing feature position.
The first image of the current grabbing area and the second image of the current placement area are acquired while the object is being carried. Before this, the controller needs to acquire the grabbing feature position and the placing feature position, so that the current grabbing position and placement position can be determined from them during carrying.
In an alternative embodiment, the steps of controlling the mechanical arm to move and the image acquisition device to capture the initial first image and the initial second image may be completed by manual teaching, so as to obtain the grabbing feature position and the placing feature position from the manual teaching process.
Still taking tile paving as an example, the grabbing feature position and the placing feature position can be obtained at the initial stage of paving tiles indoors; that is, they are first obtained by teaching, and the tiles are then paved.
Specifically, the initial placement area and the initial grabbing area may be pre-designated areas; for example, when tiling, the area where the first tile needs to be placed (for example, a corner of the room) is the initial placement area, and the area from which the first tile is grabbed is the initial grabbing area.
As an alternative embodiment, the object to be handled is a tile to be tiled, the first characteristic position is a position of a corner of the tile to be tiled in the third image, and the second characteristic position is a position of a corner of the tile already tiled in the fourth image.
Specifically, in the above scheme, the floor tile to be paved may be square, rectangular, regular hexagonal, or any other shape with corners. The tile corners of the paved tiles in the third and fourth images may be designated as two different corners.
Embodiment 2
The embodiment of the present application further provides a carrying system, which may be a system for performing any of the steps in Embodiment 1. Fig. 5 is a schematic view of a carrying system according to an embodiment of the present application. With reference to fig. 5, the system comprises:
the first image acquisition device 10 is configured to acquire a first image in the capture area, acquire a second image in the first placement area, predict a capture position according to the first image, and predict a placement position according to the second image.
Specifically, the image acquisition devices may all be smart cameras, and the first image acquisition device may be a smart camera disposed at the end of the mechanical arm. The robot may include a support rod, and the second and third image acquisition devices may be disposed at the end of the support rod.
In an alternative embodiment, taking tile paving by the carrying system as an example, the robot may control the mechanical arm to move to the placement area, where the first image acquisition device at the end of the arm acquires the second image, calculates from it the offset between the feature point of the placed reference tile and the placing feature position, and transmits the offset to the controller, which can then predict the placement position from the offset. The robot then controls the mechanical arm to move to the grabbing area, where the first image acquisition device acquires the first image, calculates from it the offset between the feature point of the floor tile to be carried and the grabbing feature position, and transmits the offset to the controller, which can then determine the grabbing position from the offset.
And the second image acquisition device 20 is used for acquiring a third image of the second placement area and determining the position of the first feature in the third image.
And a third image acquiring device 30 for acquiring a fourth image of the third placement region and determining a second feature position in the fourth image, wherein the second placement region and the third placement region include two different feature positions of the placed object.
Specifically, the robot may be provided with a supporting rod, and the second image capturing device and the third image capturing device may be disposed at the end of the supporting rod, and are configured to capture images of different placement areas, that is, the third image and the fourth image. The two different feature positions of the placed object are used to represent the coordinates of at least two feature points of the placed object, for example, in the case of the placed object being a tiled tile, the two different feature positions may be two corners of the tiled tile.
The second placement area is different from the third placement area, but the first placement area may be the same as the second placement area or the same as the third placement area.
Fig. 2 is a schematic diagram of a second placement area and a third placement area according to an embodiment of the present application, and referring to fig. 2, the reference tile is a tiled tile. After the first image and the second image collected by the first image collecting device are predicted to obtain the placing position, the mechanical arm moves the floor tile to be paved to the position right above the predicted placing position (fig. 2 is a top view, and the floor tile to be paved and the reference tile are not on the same plane), at the moment, the second image collecting device collects the image of the photographing area 2 (a second placing area), and the third image collecting device collects the image of the photographing area 3 (a third placing area), so that the effect of double-view observation of the placing area is achieved.
It should be further noted that the image acquisition devices need to be calibrated before use. Fig. 6 is a schematic diagram of calibrating an image acquisition device according to an embodiment of the present application. With reference to fig. 6, taking the first image acquisition device as an example: a feature in the photographing area is selected, the camera is moved to different positions (for example, 9 positions), and the pixel coordinates of the feature in the image and the corresponding robot coordinates are recorded, thereby establishing the relationship between the image coordinates and the robot's world coordinates and obtaining a calibration matrix. For the second and third image acquisition devices, a feature in the photographing area is likewise selected, the camera is moved to different positions (for example, 12 positions), and the pixel coordinates of the feature and the corresponding robot coordinates are recorded, so as to establish the same relationship and obtain a calibration matrix.
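One common way to obtain such a pixel-to-robot calibration matrix is a least-squares affine fit over the recorded point pairs. The sketch below is an illustrative assumption of that procedure (the document does not specify the fitting method) and ignores lens distortion; all function names are ours.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def fit_affine(pixels, robots):
    """Least-squares fit of robot = M @ (u, v, 1), with M a 2x3 matrix,
    via the normal equations built from the recorded point pairs."""
    rows = [(u, v, 1.0) for u, v in pixels]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    return [solve3(AtA, [sum(r[i] * xy[axis] for r, xy in zip(rows, robots))
                         for i in range(3)]) for axis in range(2)]

def pixel_to_robot(M, u, v):
    """Apply the calibration matrix to a pixel coordinate."""
    return (M[0][0] * u + M[0][1] * v + M[0][2],
            M[1][0] * u + M[1][1] * v + M[1][2])
```

With 9 (or 12) well-spread positions the normal equations are well conditioned and the fitted matrix maps new pixel coordinates to robot world coordinates.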
And therobot 40 is used for acquiring a first offset determined according to the first characteristic position and the second characteristic position and adjusting the predicted placing position according to the first offset.
Specifically, the first offset amount may be an offset amount of the first feature position from a feature position in the second image, and/or an offset amount of the second feature position from a feature position in the second image, so that a deviation between an actual placement position and a predicted placement position can be represented.
In an alternative embodiment, a fourth feature position is included in the second image, and the robot predicts the placement position based on a deviation of the fourth feature position from the reference placement position. After the second image acquisition device and the third image acquisition device acquire the third image and the fourth image, the first characteristic position and the second characteristic position extracted from the third image and the fourth image are sent to the controller, and the controller can calculate to obtain the first offset according to the fourth characteristic position, the first characteristic position and the second characteristic position included in the second image.
The first offset will now be described more specifically. Taking fig. 2 as an example, when the first placement area is photographing area 2, the fourth feature position in the second image is the coordinate P1_ref(1) of the corner of the tile already laid in photographing area 2, and the coordinate P2_ref(1) of the corner of the reference tile in photographing area 3 can be calculated from this coordinate and the tile size.
The third image is the image captured by the second image acquisition device in photographing area 2, and the first feature position is the corner coordinate P1_ref(2) of the reference tile in photographing area 2; the fourth image is the image captured by the third image acquisition device in photographing area 3, and the second feature position is the corner coordinate P2_ref(2) of the reference tile in photographing area 3. The host computer can then calculate the first offset from P1_ref(1), P2_ref(1), P1_ref(2) and P2_ref(2); for example, the first offset P = (A, B), where A = P1_ref(2) - P1_ref(1) and B = P2_ref(2) - P2_ref(1).
Since the first offset is used to represent the deviation between the actual placement position and the predicted placement position, the robot can adjust the predicted placement position according to the first offset, thereby obtaining the final placement position. A specific adjustment manner may be to add the first offset to the predicted placement position.
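Using the corner coordinates above, the first offset P = (A, B) and the resulting adjustment can be sketched in 2-D as follows. Averaging A and B into a single translation is our simplifying assumption; the document does not specify how the two components are combined.

```python
def first_offset(p1_pred, p2_pred, p1_obs, p2_obs):
    """P = (A, B), with A = P1_ref(2) - P1_ref(1) and
    B = P2_ref(2) - P2_ref(1): observed minus predicted corner positions."""
    A = (p1_obs[0] - p1_pred[0], p1_obs[1] - p1_pred[1])
    B = (p2_obs[0] - p2_pred[0], p2_obs[1] - p2_pred[1])
    return A, B

def apply_first_offset(predicted, offset):
    """Add the (averaged) first offset to the predicted placement position."""
    A, B = offset
    return (predicted[0] + (A[0] + B[0]) / 2,
            predicted[1] + (A[1] + B[1]) / 2)
```

If A and B differ, the mismatch carries rotation information as well; the text handles that case through the angle value stored alongside the translation.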
Therefore, in the above embodiments of the present application, the capturing position is predicted according to the first image acquired by the first image acquisition device in the capturing area, and the placing position is predicted according to the second image acquired by the first image acquisition device in the first placing area; controlling a second image acquisition device to acquire a third image of a second placement area, and controlling a third image acquisition device to acquire a fourth image of the third placement area, wherein the second placement area and the third placement area comprise two different characteristic positions of a placed object; acquiring a first offset determined according to a first characteristic position in the third image and a second characteristic position in the fourth image, wherein the first offset is used for representing the deviation between an actual placement position and a predicted placement position; and adjusting the predicted placement position according to the first offset. According to the scheme, the grabbing position and the placing position are predicted based on the first image acquisition device, and the predicted placing position is corrected based on the second image acquisition device and the third image acquisition device, so that the purpose of improving the paving precision is achieved.
Fig. 7 is a block diagram of a handling system according to an embodiment of the present application, and each of the devices of the system is described in detail below with reference to fig. 7:
The first, second and third image acquisition devices are all smart cameras; a smart camera mainly comprises image acquisition, camera calibration, feature positioning and data communication modules, and the hand-eye camera (the first image acquisition device) further comprises an offset calculation module. The PC part of the upper computer consists of two modules, offset calculation (for the fixed dual cameras, i.e., the second and third image acquisition devices) and data communication, and the robot part is provided with a robot control module.
The robot control module is responsible for the tile grabbing and paving actions of the mechanical arm and sends a photographing instruction to the PC (by modifying the value of the corresponding register) at each photographing position. After receiving the instruction, the communication module of the PC part parses it and, depending on the instruction, sends a photographing instruction ("T1", "T2", etc., over the TCP/IP protocol) to the corresponding camera to trigger image acquisition. The communication module of the smart camera parses the instruction and triggers the image acquisition module to capture an image. In the hand-eye camera, the feature positioning module locates features (such as tile corners) in the image by template matching to obtain their pixel coordinates, the camera calibration module converts these into the corresponding robot coordinates, the offset calculation module derives the coordinate offset for the mechanical arm's movement, and the communication module processes this information and sends it to the PC. The fixed dual cameras directly transmit the robot coordinates of the features in their images to the PC after processing by the communication module (conversion into character strings in a specific format).
The PC part receives data from the smart cameras through its communication module, extracting coordinate offset information from the messages sent by the hand-eye camera and coordinates from those sent by the fixed dual cameras. The offset calculation module in the PC computes the corresponding coordinate offset from the coordinates of the features in the images. The communication module then writes the offset data into a position register of the robot controller and modifies the value of the corresponding numeric register to inform the controller that the photographing and offset calculation are finished. On learning this, the robot control module reads the coordinate offset from the corresponding position register and controls the mechanical arm's movement, so that the tile is grabbed and paved under visual guidance.
Fig. 8 is a schematic diagram of a data communication module of an image capturing device according to an embodiment of the present application, and in conjunction with fig. 8, the data communication module in the image capturing device includes two parts, namely, a data receiving part and a data transmitting part. The data receiving part receives a photographing instruction from the PC part through a TCP/IP protocol, reads instruction information and triggers the camera to acquire images. The data sending part carries out format arrangement on the coordinates or the offset data of the coordinates and sends the coordinates or the offset data of the coordinates to the PC part through a TCP/IP protocol.
Specifically, the input data includes a photographing instruction from the PC section and coordinates or offset information thereof from a module for feature localization or the like. The photographing instruction includes "T1" (captured region photographing), "T2" (placed region photographing), and "T3" (stationary dual camera photographing).
The data receiving part first receives the photographing instruction and outputs a trigger signal to the image acquisition module. After the coordinates or their offset information are obtained, the data sending part transmits them to the PC over the TCP/IP protocol. Having received the coordinate information, the PC calculates the corresponding offset and sends it to the robot to complete the tile grabbing and paving actions.
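The receive-parse-trigger-reply cycle can be sketched as follows, with sockets replaced by plain function calls. The command strings "T1"-"T3" follow the text, while the reply format and all helper names are assumptions.

```python
COMMANDS = {"T1": "grabbing-area photo",
            "T2": "placement-area photo",
            "T3": "fixed dual-camera photo"}

def handle_command(cmd, acquire):
    """Parse one photographing instruction, trigger acquisition via the
    supplied callback, and format the resulting coordinates as a reply."""
    if cmd not in COMMANDS:
        return "ERR unknown command"
    coords = acquire(cmd)  # e.g. robot coordinates of the located features
    return ";".join("%.3f,%.3f" % (x, y) for x, y in coords)
```

In a real deployment `acquire` would trigger the image acquisition module and the string would travel back to the PC over a TCP socket.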
For the data communication module of the upper computer PC, the communication module of the PC part comprises communication with the robot and communication with the intelligent camera. The communication between the PC part and the robot is carried out through communication software carried by the robot.
For the offset calculation module in the PC: because the floor tile is large (side length 800mm) and the field of view of one camera (about 250mm) cannot cover the whole tile, two cameras are used to observe two corners of the tile. By capturing images, the tile corners are located, and the robot coordinates of the vertices P1_put, P2_put, P1_ref, P2_ref of the tile being paved and the reference tile in photographing areas 2 and 3 are obtained. The edges of the paved tile and the reference tile are determined by these vertices; by keeping the two edges parallel (angular deviation < 0.05 degrees) and at a distance of 2mm (deviation < 0.3mm), the gap width between the two tiles is held at 2 ± 0.5mm.
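The parallelism and gap checks can be sketched from the four vertices as follows. The tolerances follow the text; the function itself is an illustrative assumption, treating each edge as the line through its two observed corners.

```python
import math

def check_alignment(p1_put, p2_put, p1_ref, p2_ref,
                    gap=2.0, gap_tol=0.3, angle_tol=0.05):
    """Check that the edge of the tile being paved is parallel to the
    reference edge (within angle_tol degrees) and at gap +/- gap_tol mm."""
    ang_put = math.atan2(p2_put[1] - p1_put[1], p2_put[0] - p1_put[0])
    ang_ref = math.atan2(p2_ref[1] - p1_ref[1], p2_ref[0] - p1_ref[0])
    d_angle = abs(math.degrees(ang_put - ang_ref))
    # perpendicular distance of p1_put from the reference edge (cross product)
    ex, ey = p2_ref[0] - p1_ref[0], p2_ref[1] - p1_ref[1]
    length = math.hypot(ex, ey)
    dist = abs((p1_put[0] - p1_ref[0]) * ey - (p1_put[1] - p1_ref[1]) * ex) / length
    return d_angle <= angle_tol and abs(dist - gap) <= gap_tol
```

Coordinates are in mm in the robot's world frame; a failed check would trigger another round of the adjustment loop described earlier.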
As an optional embodiment, the first image acquisition device is further configured to determine a second offset from a third feature position in the first image and a preset grabbing feature position, predict the grabbing position according to the grabbing feature position and the second offset, determine a third offset from a fourth feature position in the second image and a preset placing feature position, and predict the placement position according to a placement base position and the third offset, where the placement base position is obtained from the placing feature position.
Specifically, the placement feature position and the capture feature position may be positions determined in the teaching process, or positions designated by a person. Taking teaching as an example, in the teaching process, the movement of the robot and the movement of the mechanical arm are manually controlled, so that the carrying process of an object is completed, and the placement characteristic position and the grabbing characteristic position are determined in the process.
The third feature position may be the position of a feature point of the object to be carried in the first image; the feature point may be the center point of the object or the position of one of its corners in the first image. The fourth feature position is analogous to the first feature position and is not described again here.
The second offset may be a difference between the third feature position and a preset capture feature position, and the third offset may be a difference between the fourth feature position and a placement feature position.
As shown in fig. 2, the object to be carried may be a floor tile; in the figure, the tile-taking photographing area is the current grabbing area, and the tile-paving photographing area is the current placement area.
In an alternative embodiment, the second offset and the grabbing feature position may be transformed into the same coordinate system, and the grabbing position is then obtained by adding the second offset to the grabbing feature position. Similarly, the third offset and the placement base position may be transformed into the same coordinate system, and the placement position is then obtained by adding the third offset to the placement base position.
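Assuming all quantities are already expressed in the same (robot world) coordinate system, combining a taught base position with a computed offset reduces to componentwise addition, as in this sketch. Poses are (x, y, angle); all numeric values and names are illustrative.

```python
def target_pose(base, offset):
    """Add a (dx, dy, dangle) offset to a taught base pose."""
    return (base[0] + offset[0], base[1] + offset[1], base[2] + offset[2])

grab_feature  = (100.0, 50.0, 0.0)    # taught grabbing feature position
second_offset = (1.2, -0.4, 0.5)      # determined from the first image
grab_pos = target_pose(grab_feature, second_offset)

place_base    = (600.0, 300.0, 90.0)  # placement base position from teaching
third_offset  = (-0.6, 0.9, -0.2)     # determined from the second image
place_pos = target_pose(place_base, third_offset)
```

When the offsets live in different frames (user vs. tool coordinates, as in Fig. 4), a coordinate transformation would precede the addition.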
As an optional embodiment, the robot further includes a mechanical arm, and the robot is configured to control the mechanical arm to move to the predicted grabbing position to grab the object to be carried, and to control the mechanical arm carrying the object to move to the predicted placement position.
In the above scheme, after the grabbing position and the placing position are obtained through prediction, the mechanical arm can be controlled to move to the grabbing position to grab the object to be carried, and then the mechanical arm is controlled to move to the placing position, so that the object to be carried is moved to the placing position.
It should be noted that, in the above scheme, after the mechanical arm carrying the object moves to the predicted placement position, the suction cup is not released to place the object immediately; instead, the placement position first needs to be adjusted according to the second and third image acquisition devices.
As an alternative embodiment, the robot further includes a mechanical arm, and the first image capturing device is disposed at a distal end of the mechanical arm.
As an alternative embodiment, the robot comprises a support rod, and the second image acquisition device and the third image acquisition device are arranged at the tail end of the support rod.
As an alternative embodiment, the object to be handled is a tile to be tiled, the first characteristic position is a position of a corner of the tile to be tiled in the third image, and the second characteristic position is a position of a corner of the tile already tiled in the fourth image.
Specifically, in the above scheme, the floor tile to be paved may be square, rectangular, regular hexagonal, or any other shape with corners. The tile corners of the paved tiles in the third and fourth images may be designated as two different corners.
As an optional embodiment, the robot is further configured to compare the first offset with a preset value and, if the first offset is greater than the preset value, add the first offset to the third offset to obtain a fourth offset and adjust the predicted placement position according to the fourth offset.
Specifically, the preset value may be 0.5mm. In this step, if the first offset is greater than the preset value, the deviation is judged to be too large and in need of adjustment, so the first offset is added to the third offset to obtain a fourth offset, and the placement position is predicted again according to the fourth offset and the placement base position. If the first offset is less than or equal to the preset value, the deviation is small and no adjustment is needed.
After the updated placement position is obtained from the adjusted prediction, the second image acquisition device and the third image acquisition device can be controlled to continue acquiring images of the second placement area and the third placement area, and the above steps are repeated until the offset determined from the newly acquired images is smaller than the preset value.
Since the above adjustment process can be performed repeatedly, tile deviations introduced by subsequent processes (for example, the vibrating process) can also be corrected.
Embodiment 3
According to an embodiment of the present invention, there is provided an embodiment of a control method of a handling system, and fig. 9 is a flowchart of a control method of a handling system according to an embodiment of the present application, which is shown in fig. 9 and includes:
Step S902, predicting a grabbing position according to a first image acquired in the grabbing area, and predicting a placement position according to a second image acquired in the first placement area.
Step S904, a third image of the second placement region and a fourth image of the third placement region are acquired, wherein the second placement region and the third placement region include two different feature positions where the object has been placed.
Step S906, acquiring a first offset determined according to a first feature position in the third image and a second feature position in the fourth image, wherein the first offset is used to represent the deviation between the actual placement position and the predicted placement position.
Step S908, the predicted placement position is adjusted according to the first offset.
It should be noted that the first image and the second image may be acquired by the same image acquisition device, and the third image and the fourth image may be acquired by different image acquisition devices, respectively. However, this embodiment is not limited thereto.
It should be further noted that the upper computer may control the robot to perform the above steps; the solution provided in this embodiment may further include other steps in Embodiment 1, and specific implementations are shown in Embodiment 1 and are not described here again.
Embodiment 4
According to an embodiment of the present invention, there is provided a floor tile paving system comprising the carrying system of Embodiment 2.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention; these modifications and refinements should also be regarded as falling within the protection scope of the present invention.