Three-dimensional modeling method and system based on 3D vision sensor

Technical Field
The present application relates to the technical field of robotics, and in particular to a three-dimensional modeling method and a three-dimensional modeling system based on a 3D vision sensor.
Background
When a robot builds a three-dimensional model of an object in its base coordinate system and the position of the object changes significantly, large errors arise, because the origin of the user coordinate system does not lie on the object and the absolute positioning accuracy of the robot is lower than its repeated positioning accuracy. These errors degrade the three-dimensional model and the subsequent positioning and grasping operations.
Disclosure of Invention
In view of this, embodiments of the present application provide a three-dimensional modeling method and system based on a 3D vision sensor, so as to solve the problem in the prior art that, when the position of an object changes, positioning the object with a previously built model becomes inaccurate.
In a first aspect, an embodiment of the present application provides a three-dimensional modeling method based on a 3D vision sensor, where the three-dimensional modeling method includes:
selecting positions for the end of the mechanical arm and for an object to be modeled within the field of view of the 3D vision sensor, establishing a first user coordinate system, and determining the positional relationship between the first user coordinate system and the 3D vision sensor;
if the object to be modeled is present at its selected position, acquiring depth data of the object to be modeled;
determining three intersection points of the object to be modeled with the axes of the first user coordinate system according to the depth data, and recording the coordinates of the three intersection points in the first user coordinate system;
controlling the end of the mechanical arm to move to the three intersection points, and recording the pose of the end of the mechanical arm when the output value of a force sensor mounted on the mechanical arm changes;
establishing a second user coordinate system according to the recorded poses of the end of the mechanical arm and the three intersection points;
and mapping the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system to complete the three-dimensional modeling of the object to be modeled.
Optionally, the determining the positional relationship between the first user coordinate system and the 3D vision sensor includes:
placing a calibration plate at the position of the object to be modeled, and establishing the first user coordinate system through the calibration plate;
and determining, by the 3D vision sensor, the positional relationship between the first user coordinate system and the 3D vision sensor.
Optionally, the calibration plate is a checkerboard calibration plate;
accordingly, the establishing the first user coordinate system through the calibration plate includes:
placing the checkerboard calibration plate at the center point of the position of the object to be modeled;
and calibrating the checkerboard within the field of view through the 3D vision sensor, so as to establish the first user coordinate system on the checkerboard calibration plate.
Optionally, the determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data includes:
converting the depth data into point cloud data, and triangulating the point cloud data;
establishing a triangular mesh data model of the object to be modeled according to the triangulation result;
and determining the three intersection points of the object to be modeled with the first user coordinate system through the triangular mesh data model.
Optionally, the controlling the end of the mechanical arm to move to the three intersection points, and recording the pose of the end of the mechanical arm when the output value of the force sensor mounted on the mechanical arm changes includes:
controlling the end of the mechanical arm to move toward the origin from preset positions along the x, y and z axes of the first user coordinate system respectively;
monitoring whether the output value of the force sensor changes while the end of the mechanical arm moves;
and if so, recording the pose of the end of the mechanical arm at that moment.
A second aspect of the embodiments of the present application provides a three-dimensional modeling system based on a 3D vision sensor, the three-dimensional modeling system including:
a positional relationship determining module, configured to select positions for the end of the mechanical arm and for an object to be modeled within the field of view of the 3D vision sensor, establish a first user coordinate system, and determine the positional relationship between the first user coordinate system and the 3D vision sensor;
a depth data acquisition module, configured to acquire depth data of the object to be modeled when the object to be modeled is present at its selected position;
an intersection coordinate acquisition module, configured to determine three intersection points of the object to be modeled with the first user coordinate system according to the depth data, and record the coordinates of the three intersection points in the first user coordinate system;
a pose acquisition module, configured to control the end of the mechanical arm to move to the three intersection points, and record the pose of the end of the mechanical arm when the output value of a force sensor mounted on the mechanical arm changes;
a user coordinate system establishing module, configured to establish a second user coordinate system according to the recorded poses of the end of the mechanical arm and the three intersection points;
and a mapping module, configured to map the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system, so as to complete the three-dimensional modeling of the object to be modeled.
Optionally, the positional relationship determining module is specifically configured to:
place a calibration plate at the position of the object to be modeled, and establish the first user coordinate system through the calibration plate;
and determine, by the 3D vision sensor, the positional relationship between the first user coordinate system and the 3D vision sensor.
Optionally, the calibration plate is a checkerboard calibration plate;
accordingly, the establishing the first user coordinate system through the calibration plate includes:
placing the checkerboard calibration plate at the center point of the position of the object to be modeled;
and calibrating the checkerboard within the field of view through the 3D vision sensor, so as to establish the first user coordinate system on the checkerboard calibration plate.
Optionally, when determining the three intersection points of the object to be modeled with the first user coordinate system according to the depth data, the intersection coordinate acquisition module is specifically configured to:
convert the depth data into point cloud data, and triangulate the point cloud data;
establish a triangular mesh data model of the object to be modeled according to the triangulation result;
and determine the three intersection points of the object to be modeled with the first user coordinate system through the triangular mesh data model.
Optionally, the pose acquisition module is specifically configured to:
control the end of the mechanical arm to move toward the origin from preset positions along the x, y and z axes of the first user coordinate system respectively;
monitor whether the output value of the force sensor changes while the end of the mechanical arm moves;
and if so, record the pose of the end of the mechanical arm at that moment.
According to the embodiments provided in the present application, the user coordinate system is re-calibrated on the object to be modeled. By exploiting the robot's higher repeated positioning accuracy, the error caused by its lower absolute positioning accuracy is reduced when the position of the object to be modeled changes significantly, and the three-dimensional modeling accuracy of the robot is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below.
FIG. 1 is a schematic diagram of modeling based on a 3D vision sensor provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of an implementation of a three-dimensional modeling method based on a 3D vision sensor according to an embodiment of the present application;
FIG. 3 is a structural diagram of a three-dimensional modeling system based on a 3D vision sensor according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and do not constitute a limitation on the application.
The system for modeling based on a 3D vision sensor provided by the present application, as shown in FIG. 1, comprises: a mobile mechanical arm 1, a 3D vision sensor 2, a force sensor 3 and an object to be modeled 4. The 3D vision sensor is mounted independently of the mobile mechanical arm, and the force sensor is mounted at the end of the mobile mechanical arm.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Embodiment One:
FIG. 2 shows a schematic implementation flow of the three-dimensional modeling method based on a 3D vision sensor provided in an embodiment of the present application, including steps S21 to S26:
Step S21, selecting positions for the end of the mechanical arm and the object to be modeled within the field of view of the 3D vision sensor, establishing a first user coordinate system, and determining the positional relationship between the first user coordinate system and the 3D vision sensor.
In this modeling method, the modeling scene is set up first. Specifically, positions are selected for the end of the mechanical arm and for the object to be modeled within the modeling range; that is, the end of the mechanical arm fitted with the three-dimensional force sensor and the object to be modeled are each fixed at a point within the field of view of the 3D vision sensor, without occluding each other. The object to be modeled is then moved away, a calibration plate is placed for calibration, a coordinate system is established, and the positional relationship between the object and the 3D vision sensor is determined through the established coordinate system.
Optionally, the determining the positional relationship between the first user coordinate system and the 3D vision sensor includes:
placing a calibration plate at the position of the object to be modeled, and establishing the first user coordinate system through the calibration plate;
and determining, by the 3D vision sensor, the positional relationship between the first user coordinate system and the 3D vision sensor.
Optionally, the calibration plate is a checkerboard calibration plate;
accordingly, the establishing the first user coordinate system through the calibration plate includes:
placing the checkerboard calibration plate at the center point of the position of the object to be modeled;
and calibrating the checkerboard within the field of view through the 3D vision sensor, so as to establish the first user coordinate system on the checkerboard calibration plate.
Specifically, the checkerboard calibration plate is placed at the center point of the original position of the object to be modeled, facing the 3D vision sensor. The checkerboard within the field of view is then located by the 3D vision sensor, the user coordinate system of the robot is calibrated on the checkerboard, and finally the positional relationship between the 3D vision sensor and the user coordinate system is determined through these steps.
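As an illustration of this calibration step, the sketch below estimates the pose of the checkerboard, and hence the first user coordinate system, in the sensor's camera frame using OpenCV. The board dimensions, the square pitch, and the availability of the sensor intrinsics (camera_matrix, dist_coeffs) are assumptions made for the sketch, not values fixed by this application.

```python
import cv2
import numpy as np

BOARD_SIZE = (9, 6)   # inner corners per row/column (assumed)
SQUARE_MM = 25.0      # square pitch in millimetres (assumed)

def board_pose_in_camera(gray, camera_matrix, dist_coeffs):
    """Estimate the checkerboard pose (first user coordinate system)
    in the 3D vision sensor's camera frame."""
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    # Board-frame coordinates of the inner corners (z = 0 on the plate).
    obj = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_MM
    ok, rvec, tvec = cv2.solvePnP(obj, corners, camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T  # 4x4 transform: first user coordinate system -> camera frame
```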
Step S22, if the object to be modeled is present at its selected position, acquiring depth data of the object to be modeled.
In the embodiment provided by the present application, the object to be modeled is returned to its original position, and its depth data can then be acquired with an RGBD camera.
Step S23, determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data, and recording the coordinates of the three intersection points in the first user coordinate system.
Optionally, the determining three intersection points of the object to be modeled with the first user coordinate system according to the depth data includes:
converting the depth data into point cloud data, and triangulating the point cloud data;
establishing a triangular mesh data model of the object to be modeled according to the triangulation result;
and determining the three intersection points of the object to be modeled with the first user coordinate system through the triangular mesh data model.
In this method, the depth data is converted into point cloud data, the point cloud data is then triangulated, and a triangular mesh data model of the object is finally formed. The first user coordinate system is then mapped onto the object to be modeled.
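A minimal sketch of this conversion, assuming a pinhole depth camera with intrinsics fx, fy, cx and cy. A single-view depth map is 2.5D, so a Delaunay triangulation over the image plane is one common way to obtain the triangular mesh; the application does not prescribe a particular triangulation algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Back-project a depth image into a point cloud and triangulate it.
    Returns the vertices (Nx3, camera frame) and triangle indices."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                     # drop pixels with no depth reading
    z = depth[valid]
    x = (u[valid] - cx) * z / fx          # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    points = np.column_stack([x, y, z])
    # Triangulate in image space; the simplices index the valid points.
    tri = Delaunay(np.column_stack([u[valid], v[valid]]))
    return points, tri.simplices
```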
Specifically, if the object to be modeled is in its original position (the position selected in step S21), the intersection points of the object to be modeled with the three axes of the first user coordinate system, CPx(x0, y0, z0), CPy(x1, y1, z1) and CPz(x2, y2, z2), are determined through the triangular mesh data model of the object to be modeled; all three points are expressed in the 3D vision sensor coordinate system.
If the object to be modeled is not in its original position, the intersection points CP'x(x'0, y'0, z'0), CP'y(x'1, y'1, z'1) and CP'z(x'2, y'2, z'2) of the object to be modeled with the three axes of the user coordinate system, offset by the displacement of the object relative to its original position as given by the vision sensor, are determined through the triangular mesh data model of the object to be modeled; these three points are likewise expressed in the 3D vision sensor coordinate system.
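One way to compute these axis intersection points is to cast a ray along each axis of the user coordinate system (expressed in the sensor frame) against the triangular mesh and keep the nearest hit. The Möller–Trumbore test below is an illustrative choice; the application does not prescribe the intersection algorithm.

```python
import numpy as np

def ray_triangle(o, d, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle test: distance t along the ray, or None."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:
        return None                      # ray parallel to the triangle
    f = 1.0 / a
    s = o - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

def axis_intersection(origin, axis_dir, points, triangles):
    """Nearest mesh intersection of one user-frame axis, i.e. one of
    CPx/CPy/CPz expressed in the 3D vision sensor coordinate system."""
    best = None
    for i0, i1, i2 in triangles:
        t = ray_triangle(origin, axis_dir, points[i0], points[i1], points[i2])
        if t is not None and (best is None or t < best):
            best = t
    return None if best is None else origin + best * axis_dir
```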
Step S24, controlling the end of the mechanical arm to move to the three intersection points, and recording the pose of the end of the mechanical arm when the output value of the force sensor mounted on the mechanical arm changes.
Optionally, the controlling the end of the mechanical arm to move to the three intersection points, and recording the pose of the end of the mechanical arm when the output value of the force sensor mounted on the mechanical arm changes includes:
controlling the end of the mechanical arm to move toward the origin from preset positions along the x, y and z axes of the first user coordinate system respectively;
monitoring whether the output value of the force sensor changes while the end of the mechanical arm moves;
and if so, recording the pose of the end of the mechanical arm at that moment.
In this step, if the object to be modeled is in its original position, the end of the mobile mechanical arm carrying the force sensor is moved toward the origin O of the user coordinate system along the X, Y and Z axes of the first user coordinate system respectively, starting in each case from a point far enough away not to touch the object to be modeled.
If the object to be modeled is not in its original position, the end of the mobile mechanical arm carrying the force sensor is moved toward the origin O' along the X', Y' and Z' axes of the first user coordinate system offset by the displacement of the object relative to its original position as given by the vision sensor, again starting from points far enough away not to touch the object to be modeled.
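A guarded-move sketch of this probing step. The callables step_toward, read_force and get_pose are hypothetical stand-ins for the real arm controller and the force sensor at the arm's end, and FORCE_DELTA is an assumed contact-detection threshold; none of these names come from this application.

```python
FORCE_DELTA = 0.5  # N, assumed threshold for "the output value changes"

def probe_along_axis(step_toward, read_force, get_pose, target,
                     step_mm=0.5, max_steps=2000):
    """Step the arm end toward `target` (the user-frame origin) and return
    its pose at the moment the force reading jumps, i.e. at contact."""
    baseline = read_force()
    for _ in range(max_steps):
        step_toward(target, step_mm)     # one small Cartesian step
        if abs(read_force() - baseline) > FORCE_DELTA:
            return get_pose()            # pose of the arm end at contact
    return None                          # no contact within the travel limit
```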
Step S25, establishing a second user coordinate system according to the recorded poses of the end of the mechanical arm and the three intersection points.
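The application does not fix the formulas for building the second user coordinate system; one standard construction is the three-point user-frame definition used when teaching robot frames (an origin point, a point fixing the x-axis, and a point in the xy-plane), sketched below with the recorded contact points as assumed inputs.

```python
import numpy as np

def frame_from_three_points(p_origin, p_x, p_xy):
    """Three-point frame definition: p_origin is the origin, p_x fixes
    the x-axis, p_xy lies in the xy-plane. Returns a 4x4 homogeneous
    transform from the new frame to the frame the points are given in."""
    x = p_x - p_origin
    x = x / np.linalg.norm(x)
    v = p_xy - p_origin
    z = np.cross(x, v)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                  # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p_origin
    return T
```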
Step S26, mapping the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system, so as to complete the three-dimensional modeling of the object to be modeled.
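Continuing the sketch above, the mapping is then a single homogeneous transform. Here o2, px2 and pxy2 (contact points expressed in the arm's base coordinate system) and verts_user2 (mesh vertices expressed in the second user frame) are assumed inputs, not names from this application.

```python
import numpy as np

T_base_user2 = frame_from_three_points(o2, px2, pxy2)  # from the sketch above
verts_h = np.column_stack([verts_user2, np.ones(len(verts_user2))])
verts_base = (T_base_user2 @ verts_h.T).T[:, :3]       # model in the base frame
```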
According to the embodiments provided in the present application, the user coordinate system is re-calibrated on the object to be modeled. By exploiting the robot's higher repeated positioning accuracy, the error caused by its lower absolute positioning accuracy is reduced when the position of the object to be modeled changes significantly, and the three-dimensional modeling accuracy of the robot is improved.
Embodiment Two:
FIG. 3 shows a schematic structural diagram of a three-dimensional modeling system based on a 3D vision sensor according to another embodiment of the present application, the system including:
a positional relationship determining module 31, configured to select positions for the end of the mechanical arm and the object to be modeled within the field of view of the 3D vision sensor, establish a first user coordinate system, and determine the positional relationship between the first user coordinate system and the 3D vision sensor;
a depth data acquisition module 32, configured to acquire depth data of the object to be modeled when the object to be modeled is present at its selected position;
an intersection coordinate acquisition module 33, configured to determine three intersection points of the object to be modeled with the first user coordinate system according to the depth data, and record the coordinates of the three intersection points in the first user coordinate system;
a pose acquisition module 34, configured to control the end of the mechanical arm to move to the three intersection points, and record the pose of the end of the mechanical arm when the output value of a force sensor mounted on the mechanical arm changes;
a user coordinate system establishing module 35, configured to establish a second user coordinate system according to the recorded poses of the end of the mechanical arm and the three intersection points;
and a mapping module 36, configured to map the object to be modeled into the base coordinate system of the mechanical arm through the second user coordinate system to complete the three-dimensional modeling of the object to be modeled.
Optionally, the positional relationship determining module 31 is specifically configured to:
place a calibration plate at the position of the object to be modeled, and establish the first user coordinate system through the calibration plate;
and determine, by the 3D vision sensor, the positional relationship between the first user coordinate system and the 3D vision sensor.
Optionally, the calibration plate is a checkerboard calibration plate;
accordingly, the establishing the first user coordinate system through the calibration plate includes:
placing the checkerboard calibration plate at the center point of the position of the object to be modeled;
and calibrating the checkerboard within the field of view through the 3D vision sensor, so as to establish the first user coordinate system on the checkerboard calibration plate.
Optionally, when determining the three intersection points of the object to be modeled with the first user coordinate system according to the depth data, the intersection coordinate acquisition module 33 is specifically configured to:
convert the depth data into point cloud data, and triangulate the point cloud data;
establish a triangular mesh data model of the object to be modeled according to the triangulation result;
and determine the three intersection points of the object to be modeled with the first user coordinate system through the triangular mesh data model.
Optionally, the pose acquisition module 34 is specifically configured to:
control the end of the mechanical arm to move toward the origin from preset positions along the x, y and z axes of the first user coordinate system respectively;
monitor whether the output value of the force sensor changes while the end of the mechanical arm moves;
and if so, record the pose of the end of the mechanical arm at that moment.
According to the embodiments provided in the present application, the user coordinate system is re-calibrated on the object to be modeled. By exploiting the robot's higher repeated positioning accuracy, the error caused by its lower absolute positioning accuracy is reduced when the position of the object to be modeled changes significantly, and the three-dimensional modeling accuracy of the robot is improved.
The above-described embodiments are not intended to limit the scope of the present application. Any corresponding changes and modifications made according to the technical idea of the present application shall fall within the protection scope of the claims of the present application.