Technical Field
The present application belongs to the field of robot technology, and in particular relates to a pose determination method and a construction robot.
Background Art
An outdoor construction robot consists of a movable chassis, a telescopic robotic arm, and an actuator (for spraying, grinding, plastering, etc.) mounted at the end of the arm. The robotic arm can rotate and extend, moving the end-mounted actuator to a designated position within its working range. The actuator at the end of the construction robot performs automated construction on the surface of a structure, such as spraying, grinding, and plastering.
In construction scenarios, a construction robot needs to adjust its pose and to navigate and position itself. In conventional technology, however, construction robots are functionally simple and cannot acquire precise pose information, which impairs subsequent navigation and operation.
Summary of the Invention
The technical problem to be solved by the present application is to provide a pose determination method and a construction robot, aiming to solve the problem that conventional construction robots are functionally simple and cannot accurately acquire pose information.
The present application is implemented as follows: a pose determination method suitable for a construction robot, the construction robot including a movable chassis, a robotic arm connected to the movable chassis, and an actuator located at the end of the robotic arm, the actuator including a lidar and a depth camera, the pose determination method including:
obtaining feature information from a depth map captured by the depth camera;
attaching the feature information to an initial point cloud map captured by the lidar, to obtain a target point cloud map containing the feature information; and
performing three-dimensional point cloud localization on the target point cloud map according to the feature information in the target point cloud map, to obtain the pose of the actuator in the target point cloud map.
优选的,所述位姿确定方法还包括:Preferably, the pose determination method further includes:
获取所述特征信息中的目标特征信息;Obtain target feature information in the feature information;
根据所述目标特征信息对所述目标点云图进行修正,得到修正后的目标点云图。The target point cloud image is corrected according to the target feature information to obtain a corrected target point cloud image.
Preferably, attaching the feature information to the initial point cloud map captured by the lidar includes:
displacing the depth map along the central axis of the lidar so that the coordinate systems of the depth map and the lidar coincide;
obtaining the pixel coordinates and depth values of the feature information in the depth map; and
superimposing the feature information onto the corresponding coordinate positions of the initial point cloud map, using the pixel coordinates and depth values as reference values.
Preferably, the target feature information includes the imaged height of the enclosure around the area to be worked, and correcting the target point cloud map according to the target feature information includes:
determining a correction coefficient from the measured height of the enclosure and the imaged height;
obtaining height point cloud data of the enclosure captured by the lidar; and
correcting the target point cloud map according to the correction coefficient, the imaged height, and the height point cloud data.
Preferably, the target point cloud map includes several target point cloud submaps, and before the three-dimensional point cloud localization is performed on the target point cloud map according to the feature information in it, the method further includes:
obtaining the feature information in the target point cloud submaps;
performing global point cloud registration according to the feature information in each target point cloud submap;
performing local point cloud registration on the globally registered target point cloud submaps, to obtain the transformation matrix between two target point cloud submaps; and
stitching the target point cloud submaps according to the transformation matrix, to obtain the target point cloud map.
Preferably, the ICP (iterative closest point) algorithm is used to perform the local point cloud registration on the globally registered target point cloud submaps.
Preferably, performing three-dimensional point cloud localization on the target point cloud map according to the feature information in the target point cloud map, to obtain the pose of the actuator in the target point cloud map, includes:
obtaining, according to the feature information, the point position of the actuator in the area to be worked; and
obtaining the pose of the actuator in the target point cloud map according to the point position and the transformation matrix.
优选的,所述位姿确定方法还包括:Preferably, the pose determination method further includes:
根据所述执行机构在所述目标点云图中的位姿,确定所述执行机构的空间位姿,其中在所述空间位姿下,所述执行机构中与待作业区域的表面距离为预设值,且所述执行机构的作业面与所述待作业区域的表面的法线垂直。According to the posture of the actuator in the target point cloud image, the spatial posture of the actuator is determined, wherein in the spatial posture, the surface distance between the actuator and the area to be operated is a preset value, and the working surface of the actuator is perpendicular to the normal line of the surface of the area to be worked.
An embodiment of the present application further provides a construction robot, including a controller, a movable chassis, a robotic arm connected to the movable chassis, and an actuator located at the end of the robotic arm, the actuator including a lidar and a depth camera, the controller being configured to execute any one of the pose determination methods described above.
Preferably, the actuator further includes a two-axis gimbal; the lidar and the depth camera are fixed on the two-axis gimbal, the vertical central axes of the lidar and the depth camera coincide, and the 0-degree line of the lidar is parallel to the optical axis of the depth camera.
Compared with the prior art, the beneficial effects of the present application are as follows. The pose determination method provided by the embodiments of the present application is suitable for a construction robot that includes a movable chassis, a robotic arm connected to the movable chassis, and an actuator at the end of the robotic arm, the actuator being provided with a lidar and a depth camera. Feature information is obtained from the depth map captured by the depth camera and attached to the initial point cloud map captured by the lidar, yielding a target point cloud map containing the feature information; three-dimensional point cloud localization is then performed according to the feature information in the target point cloud map, to obtain the pose of the actuator in the target point cloud map. By providing a depth camera and a lidar on the actuator of the construction robot and combining the two, the embodiments of the present application effectively exploit the feature information peculiar to the area to be worked, such as a construction scene, for fusion and localization, accurately identify the pose of the actuator, and facilitate subsequent mapping, positioning, and navigation of the construction robot.
Brief Description of the Drawings
Figure 1 is a flow chart of the pose determination method provided by an embodiment of the present application;
Figure 2 is a schematic perspective view of the construction robot provided by an embodiment of the present application;
Figure 3 is a schematic left view of the construction robot provided by an embodiment of the present application;
Figure 4 is a schematic front view of the construction robot provided by an embodiment of the present application;
Figure 5 is a schematic structural diagram of the actuator provided by an embodiment of the present application.
Detailed Description of the Embodiments
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present application and are not intended to limit it.
A construction robot performing outdoor work generally consists of a movable chassis, a telescopic robotic arm, and an actuator (for spraying, grinding, plastering, etc.) mounted at the end of the arm. The robotic arm can rotate and extend, moving the end-mounted actuator to a designated position within its working range. The actuator at the end of the construction robot performs automated construction on the surface of a structure, such as spraying, grinding, and plastering. The actuator typically consists of a collaborative robot arm, a one-dimensional guide-rail module, a two-dimensional guide-rail module, and the like.
The construction area on the surface of a structure at a construction site is usually large and must be divided into several regions for construction. A first-level subregion is the range the mobile chassis can cover from its different stations; a second-level subregion is the range the robotic arm can cover from its different positions while the chassis is fixed. The range the arm can work from a given position is the working range of the end actuator. To meet construction quality requirements, the end actuator usually has to be kept at a given distance from, and perpendicular to, the structure surface. Since the surface of the structure under construction typically contains internal and external corners, turns, and curved areas, both the shape of the structure surface and the spatial pose of the end actuator relative to that surface must be obtained. Mapping, positioning, and navigation of the construction robot are therefore required. In building construction scenarios, the position accuracy required of the end actuator is generally around 5 cm.
The present application belongs to the fields of construction robotics and intelligent construction, and relates to a pose determination method for construction robots oriented to outdoor scenes (such as bridge-spraying robots and building facade robots). The method fuses a lidar with a depth camera and exploits the feature information peculiar to construction scenes to identify the pose of the construction robot and to support subsequent mapping, navigation, and positioning.
Figure 1 shows the pose determination method provided by an embodiment of the present application. The method is suitable for a construction robot that includes a movable chassis, a robotic arm connected to the movable chassis, and an actuator at the end of the robotic arm, the actuator including a lidar and a depth camera. The pose determination method includes:
S101: Obtain feature information from the depth map captured by the depth camera.
In this step, the movable chassis of the construction robot is parked at a station in the area to be worked. Once the chassis is fixed, the robotic arm connected to it is fixed as well. The lidar and depth camera on the actuator at the end of the arm each scan a construction scene in the area to be worked from that station, and the depth camera is used to identify feature information peculiar to the construction scene, such as the enclosures, bridge piers, ground, bridge underside, and the separation joints in the underside.
S102: Attach the feature information to the initial point cloud map captured by the lidar, to obtain a target point cloud map containing the feature information.
In this step, attaching the feature information to the initial point cloud map captured by the lidar includes: displacing the depth map along the central axis of the lidar so that the coordinate systems of the depth map and the lidar coincide; obtaining the pixel coordinates and depth values of the feature information in the depth map; and, using those pixel coordinates and depth values as reference values, superimposing the feature information onto the corresponding coordinate positions of the initial point cloud map.
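To make the superimposition concrete, the sketch below back-projects a labeled depth-map pixel into a 3D point with the standard pinhole model and carries it into the lidar frame. This is an illustrative sketch only: the intrinsics (fx, fy, cx, cy), the axis convention, and the vertical offset between the two sensor origins are assumptions rather than values fixed by the application. Because the vertical central axes of the two sensors coincide, the extrinsic reduces to a pure translation.

```python
import numpy as np

def backproject_pixel(u, v, depth, fx, fy, cx, cy):
    """Back-project depth-map pixel (u, v) with depth value `depth` into a
    3D point in the camera frame, using the standard pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def camera_to_lidar(p_cam, vertical_offset):
    """With the camera and lidar vertical central axes coincident and the
    lidar 0-degree line parallel to the optical axis, the extrinsic is a
    pure translation along the shared vertical axis (taken as y here)."""
    return p_cam + np.array([0.0, vertical_offset, 0.0])

def attach_feature(cloud, labels, u, v, depth, intrinsics, vertical_offset, label):
    """Attach a semantic label to the lidar point nearest to a feature pixel.
    `cloud` is an (N, 3) array of lidar points; `labels` is parallel to it."""
    p = camera_to_lidar(backproject_pixel(u, v, depth, *intrinsics), vertical_offset)
    i = int(np.argmin(np.linalg.norm(cloud - p, axis=1)))
    labels[i] = label
    return i
```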
In this step, after the target point cloud map is obtained, it can be corrected according to a piece of feature information that is known in the construction scene. Specifically: obtain target feature information from the feature information, and correct the target point cloud map according to the target feature information, to obtain a corrected target point cloud map. The target feature information includes the imaged height of the enclosure around the area to be worked, and correcting the target point cloud map according to the target feature information includes: determining a correction coefficient from the measured height of the enclosure and the imaged height; obtaining the height point cloud data of the enclosure captured by the lidar; and correcting the target point cloud map according to the correction coefficient, the imaged height, and the height point cloud data.
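The application does not fix the exact form of the correction coefficient. A minimal sketch, assuming the coefficient is simply the ratio of the measured enclosure height to its imaged height and is applied to the height coordinate about the ground plane:

```python
import numpy as np

def correct_cloud_by_enclosure(cloud, measured_height, imaged_height, ground_z=0.0):
    """Rescale heights so the enclosure appears at its measured height.

    cloud           : (N, 3) array of points, z up (axis convention assumed).
    measured_height : enclosure height measured on site.
    imaged_height   : enclosure height as it appears in the fused point cloud.
    """
    k = measured_height / imaged_height   # correction coefficient (assumed ratio)
    corrected = cloud.copy()
    corrected[:, 2] = ground_z + k * (corrected[:, 2] - ground_z)
    return corrected
```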
S103: Perform three-dimensional point cloud localization on the target point cloud map according to the feature information in it, to obtain the pose of the actuator in the target point cloud map.
In this embodiment, the target point cloud map includes several target point cloud submaps. Before the three-dimensional point cloud localization is performed on the target point cloud map, the method further includes: obtaining the feature information in each target point cloud submap; performing global point cloud registration according to the feature information in each submap; performing local point cloud registration on the globally registered submaps, to obtain the transformation matrix between two submaps; and stitching the submaps according to the transformation matrix, to obtain the target point cloud map. In some embodiments, the ICP (iterative closest point) algorithm is used to perform the local point cloud registration on the globally registered submaps.
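A minimal point-to-point ICP refinement is sketched below in NumPy/SciPy, assuming the global registration has already brought the two submaps close together (as the detailed steps later note, ICP tolerates only a small initial offset). It alternates nearest-neighbour correspondences with a closed-form rigid fit:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50, tol=1e-6):
    """Point-to-point ICP: returns a 4x4 matrix mapping `source` onto `target`.
    Both inputs are (N, 3) arrays assumed to be coarsely pre-aligned."""
    src = source.copy()
    T = np.eye(4)
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)        # nearest-neighbour correspondences
        matched = target[idx]
        # Best rigid transform for these correspondences (Kabsch/SVD step).
        cs, cm = src.mean(0), matched.mean(0)
        H = (src - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cm - R @ cs
        src = src @ R.T + t
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:      # stop when the fit stops improving
            break
        prev_err = err
    return T
```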
In some embodiments, performing three-dimensional point cloud localization on the target point cloud map according to its feature information, to obtain the pose of the actuator in the target point cloud map, can be implemented as follows: obtain, according to the feature information, the point position of the actuator in the area to be worked; then obtain the pose of the actuator in the target point cloud map from that point position and the transformation matrix.
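Concretely, if the registration step yields the 4x4 transformation matrix from a submap frame into the stitched map frame, a pose known in the submap frame is carried into the target point cloud map by one matrix product (a sketch, with poses represented as 4x4 homogeneous matrices):

```python
import numpy as np

def pose_in_map(T_map_from_sub, pose_in_sub):
    """Express an actuator pose, known in a submap frame, in the stitched
    target point cloud map; both arguments are 4x4 homogeneous matrices."""
    return T_map_from_sub @ pose_in_sub
```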
In some embodiments, after the pose of the actuator in the target point cloud map has been determined, the spatial pose of the actuator can further be determined from it. In that spatial pose, the distance between the actuator and the surface of the area to be worked is a preset value, and the working face of the actuator is perpendicular to the normal of that surface. Once the spatial pose is determined, the area to be worked can be mapped, localized, and navigated, and subsequent operations can be carried out.
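One way to construct such a spatial pose is sketched below, assuming the surface point p and unit normal n are read from the labeled target point cloud and that the tool approaches the surface along its local z-axis; the resulting working face is parallel to the surface, i.e. perpendicular to its normal:

```python
import numpy as np

def standoff_pose(p, n, standoff):
    """Pose whose origin sits `standoff` metres from surface point `p`
    along unit normal `n`, with the tool z-axis pointing back at the
    surface (so the working face is parallel to the surface)."""
    n_hat = n / np.linalg.norm(n)
    z = -n_hat                                  # approach axis
    ref = np.array([1.0, 0.0, 0.0])
    if abs(z @ ref) > 0.9:                      # avoid a degenerate cross product
        ref = np.array([0.0, 1.0, 0.0])
    x = np.cross(ref, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)                          # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x, y, z
    T[:3, 3] = p + standoff * n_hat
    return T
```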
The embodiments above propose a method that fuses a 3D lidar with a depth camera and effectively combines the feature information of the construction scene to determine the pose of a construction robot working outdoors and to carry out subsequent mapping, positioning, and navigation. The method is convenient and efficient, makes low demands on the operator, and its accuracy suits building-surface construction (within 5 cm); it applies to typical construction-robot working scenarios such as bridges, factory buildings, and houses. In some embodiments, the method displays, in real time on the operator's tablet, the construction-scene video captured by the depth camera and the colored target point cloud map generated by fusing the depth camera and the lidar; the operator only has to specify the construction range on the tablet, and the robot chassis, robotic arm, and end actuator move on their own to perform fully automated construction, for example on the surface of a structure, while guaranteeing that the end actuator is perpendicular to the surface and at the preset distance from it.
The embodiments of the present application are further explained below using a bridge construction scenario; they mainly comprise the following:
(1) The lidar and the depth camera each scan the construction scene from a given position of the manipulator; specifically, the lidar is a 3D lidar.
(2) The depth camera is used to identify feature information peculiar to the construction scene, such as the enclosures, bridge piers, ground, bridge underside, and the separation joints in the underside.
(3) The construction-scene features recognized by the depth camera are attached to the corresponding positions of the initial point cloud map captured by the lidar, generating a 3D laser point cloud map with color features (called the target point cloud map in this application).
(4) The generated color-featured 3D laser point cloud map is corrected according to the actual height of the construction enclosure.
(5) Taking the enclosure, ground, piers, bridge underside, and separation joints in the color-featured 3D laser point cloud map as references, the construction robot performs three-dimensional point cloud localization, including global point cloud registration (coarse feature matching) and local point cloud registration (the ICP iterative closest point algorithm), stitching the point cloud maps captured from the robot's different stations and determining the pose of the actuator at the end of the robotic arm in the point cloud map.
(6) The spatial pose of the actuator at the end of the manipulator is adjusted so that construction is carried out at the designated position on the structure surface, keeping the actuator at the given distance from the surface of the area to be worked and perpendicular to it. The construction robot can then plan a path automatically and carry out construction fully automatically along the given path.
In the embodiments provided by the present application, the actuator includes a two-axis gimbal on which the lidar and the depth camera are mounted, with the vertical central axes of the lidar and the depth camera coinciding and the 0-degree line of the lidar parallel to the optical axis of the depth camera. The gimbal is installed at a suitable position on the actuator at the end of the robotic arm. The embodiments are implemented specifically as follows:
A. At a given spatial position of the actuator at the end of the robotic arm, the lidar is controlled to scan the surroundings, and the depth camera is rotated through 360 degrees, scanning as it rotates, to generate the camera's RGB-D depth map. The RGB-D depth map is displaced along the central axis of the lidar so that it coincides with the lidar coordinate system.
B. In the camera's RGB-D depth map, features such as the construction enclosure, bridge piers, ground, bridge underside, and underside separation joints are identified, and the pixel coordinates, depth values, and other information of these features in the RGB-D depth map are obtained.
C. The identified construction-scene features (enclosure, piers, ground, bridge underside, and separation joints) are superimposed onto the corresponding part of the initial point cloud map captured by the lidar, using their pixel coordinates and depth values in the RGB-D depth map as references. The superimposed part of the lidar's initial point cloud map then carries feature information including color and construction-scene semantics.
D. Building construction uses construction enclosures whose height is usually uniform and can easily be measured directly in advance. According to the measured height of the construction enclosure, the superimposed color-feature point cloud is corrected so that the enclosure height in the lidar point cloud map and the depth map equals the measured value, which guarantees that the construction features recognized by the depth camera are superimposed precisely at the correct positions of the lidar point cloud map.
E. Keeping the spatial position of the actuator at the end of the robotic arm unchanged, the inclination of the two-axis gimbal is changed and the lidar and the depth camera perform a new round of scanning; steps A through D are repeated to obtain the corresponding 3D laser point cloud map with color semantic features.
F. The 3D laser point cloud maps with color semantic features generated at the different gimbal inclinations are stitched together. In this step, global point cloud registration (coarse stitching) is carried out based on the identified construction features (enclosure, piers, ground, bridge underside, and separation joints), laying the groundwork for the local point cloud registration (fine stitching) of the next step. Because the ICP iterative closest point algorithm used for local registration requires that the two point cloud maps to be registered not be offset from each other by too much, the global registration of the point cloud maps must be performed first.
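The application does not name a specific global-registration algorithm. One plausible realization, offered here as an assumption, pairs the centroids of matching labeled features (the same pier, enclosure segment, or separation joint seen in both point cloud maps) and solves the rigid transform in closed form — the same Kabsch/SVD step used inside the ICP sketch above:

```python
import numpy as np

def coarse_align(src_feats, dst_feats):
    """Rigid transform from matched feature centroids (>= 3 non-collinear pairs).
    `src_feats`, `dst_feats`: (K, 3) arrays, row i of each being the centroid
    of the same labeled feature in the two point cloud maps."""
    cs, cd = src_feats.mean(0), dst_feats.mean(0)
    H = (src_feats - cs).T @ (dst_feats - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t
    return T
```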
G. The ICP iterative closest point algorithm is applied to the point cloud maps globally registered in step F to perform local point cloud registration, yielding the transformation matrix between the two point cloud maps; after the corresponding transformation, the point cloud maps are stitched together into one stitched point cloud map.
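Applying the resulting transformation and concatenating the clouds is then a short operation in homogeneous coordinates (a sketch; `T` is the 4x4 matrix returned by the local registration):

```python
import numpy as np

def stitch(source, target, T):
    """Map `source` ((N, 3) array) into the target frame with the 4x4
    transform `T`, then concatenate it with `target` ((M, 3) array)."""
    src_h = np.c_[source, np.ones(len(source))]   # homogeneous coordinates
    return np.vstack([target, (src_h @ T.T)[:, :3]])
```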
H. Steps E through G are repeated to stitch the color-semantic 3D laser point cloud maps captured with the actuator at the end of the robotic arm at different spatial positions, finally yielding the target point cloud map of the entire construction scene with color semantic features.
I. From the transformation matrices between the point cloud maps in step G, the pose of the actuator at the end of the robotic arm in the color-semantic target point cloud map of the entire construction scene can be obtained.
J. According to the actuator pose obtained in step I, the spatial pose of the actuator is adjusted to control the construction robot to work at the designated position on the structure surface, while guaranteeing that the actuator at the end of the robotic arm is at the given distance from the structure surface and perpendicular to it.
K. Using the color-semantic target point cloud map of the entire construction site generated by the construction robot, and the real-time pose of the actuator in that map, the construction robot can plan a path automatically, or follow any specified path, to carry out fully automatic construction.
In a variety of typical building construction scenarios, the embodiments above fuse a 3D lidar with a depth camera and make full use of the feature information peculiar to construction scenes to carry out mapping, positioning, and navigation of the construction robot, performing automatic construction in the areas where it is required. The operator only has to select the area to be worked on a tablet for fully automatic construction; operation is simple and convenient, and the accuracy meets construction requirements.
An embodiment of the present application further provides a construction robot; see Figures 2 to 4. The construction robot includes a controller, a movable chassis 1, a robotic arm 2 connected to the chassis 1, and an actuator 3 at the end of the robotic arm 2. The actuator 3 includes a lidar 31 and a depth camera 32, and the controller is configured to execute the pose determination method provided by any of the embodiments above.
In some embodiments, see Figure 5, the actuator 3 further includes a two-axis gimbal; the lidar 31 and the depth camera 32 are fixed on the gimbal, the vertical central axes of the lidar 31 and the depth camera 32 coincide, and the 0-degree line of the lidar 31 is parallel to the optical axis of the depth camera 32.
In this embodiment, the actuator 3 is installed at the end of the robotic arm 2. The actuator 3 may be a collaborative robot arm, a one-dimensional guide-rail module, or a two-dimensional guide-rail module, and may carry devices such as a spray gun, a grinding head, or a plastering tool to realize different functions.
A two-axis gimbal is installed on the actuator 3, and the 3D lidar 31 and the depth camera 32 are fixed on the gimbal, ensuring that the vertical central axes of the lidar 31 and the depth camera 32 coincide and that the 0-degree line of the lidar 31 is parallel to the optical axis of the depth camera 32. The 0 position of the gimbal is kept parallel to the specific tool of the actuator 3 (spray gun, grinding head, etc.). The depth camera 32 is connected via a USB cable to a Raspberry Pi 4B (the depth camera 32 needs a connected computer to read its video data), and through the router in the control cabinet of the robotic arm 2 the RGB image, depth map, and camera intrinsic parameters are transmitted wirelessly to the tablet.
The 3D lidar 31 is connected in a similar way, so the point cloud maps it captures can be transmitted wirelessly to the tablet. The tablet runs purpose-written software that displays in real time the video captured by the depth camera 32 together with the recognized color semantic features, the raw 3D lidar point cloud, the color-semantic point cloud map of a given station, and the color-semantic point cloud map of the entire construction site. The relevant parameters of the chassis 1, robotic arm 2, and end actuator 3 can be entered, and all control of the chassis 1, robotic arm 2, and end actuator 3 is performed there. All related computation is carried out on the tablet.
The operator selects the required construction area on the tablet, and the mapping of the construction site, the positioning and navigation of the construction robot, and the construction work of the end actuator are then carried out fully automatically. After construction is completed, the finished portions are converted to global site coordinates and stored; when a previously worked position comes up again, the operator is reminded that it has already been worked, effectively avoiding repeated construction and improving construction efficiency and quality.
In the embodiments above, the 3D lidar 31 and the depth camera 32 are combined for the mapping and positioning of the construction robot. Construction features such as the construction enclosure, bridge piers, ground, bridge underside, and underside separation joints are superimposed as color semantic features onto the 3D lidar point cloud, generating a color semantic point cloud map for mapping, positioning, and navigation. By measuring the actual height of the construction enclosure and using it as the reference, the color semantic point cloud map is corrected so that the construction features recognized by the depth camera 32 are superimposed precisely at the correct positions of the laser point cloud. Using the construction feature information of the color semantic point cloud map, global point cloud registration (coarse matching) is performed, followed by local point cloud registration with the ICP algorithm (fine matching); the color semantic point cloud map of the entire construction site is obtained by stitching, together with the precise pose of the end actuator 3 relative to the point cloud map.
In the several embodiments provided by this application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative: the division into modules is merely a division by logical function, and other divisions are possible in actual implementation; for example, multiple modules or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or modules, and may be electrical, mechanical, or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented as a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, the part of it contributing to the prior art, or the whole or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, for ease of description, the foregoing method embodiments are all expressed as series of combined actions; however, those skilled in the art should know that this application is not limited by the described order of actions, because according to this application certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
The above is a description of the pose determination method and construction robot provided by the present application. For those skilled in the art, there will be changes in the specific implementation and scope of application based on the ideas of the embodiments of the present application. In summary, the content of this specification should not be construed as limiting the present application.