TECHNICAL FIELD
The present application relates to the field of robot technology, and in particular to a map generation method. The present application also relates to a robot.
BACKGROUND
Robot localization has always been a focus and a difficulty of robotics research; it is also a prerequisite for robot navigation and is of great significance for improving the intelligence level of robots.
SLAM (Simultaneous Localization and Mapping) is currently one of the key technologies for realizing a truly fully autonomous mobile robot. It enables a robot to start moving from an unknown position in an unknown environment, localize itself during the movement on the basis of position estimates and the map, and build an incremental map on top of that localization, thereby achieving autonomous localization and navigation.
The implementation and difficulty of SLAM are closely related to the form and installation of the sensors. Sensors fall into two main categories, laser and vision:
(1) LiDAR (laser SLAM)
LiDAR is the oldest and most studied SLAM sensor. It provides distance information between the robot body and obstacles in the surrounding environment. Common LiDARs, such as SICK, Velodyne, and the domestically produced rplidar, can all be used for SLAM. A LiDAR can measure the angle and distance of obstacle points around the robot with high accuracy, which makes functions such as SLAM and obstacle avoidance easy to implement.
(2) Visual SLAM
Thanks to the growth of CPU and GPU processing speed, many vision algorithms that were once considered impossible to run in real time can now run at rates above 10 Hz. The improvement in hardware has also promoted the development of visual SLAM.
In terms of sensors, visual SLAM research is mainly divided into three categories: monocular, binocular (or multi-camera), and RGB-D.
In the course of implementing the present application, the inventors found that when the prior art builds a 2D map using laser SLAM, the information obtained is a two-dimensional plan view of the map; when a 3D map is built from the sparse point cloud of visual SLAM, the information obtained is a sparse 3D point cloud. Relating either kind of information to the real world, however, still depends on a person's imagination. For example, when laser SLAM scans past an open door, the map only shows a blank gap rather than a door.
It can therefore be seen that enabling a robot to label objects automatically while building a map has become a technical problem that urgently needs to be solved by those skilled in the art.
SUMMARY OF THE INVENTION
The present application provides a map generation method so that a robot can automatically label objects while building a map.
To achieve the above purpose, the present application provides a map generation method applied to a robot, wherein the robot is provided with a camera and a SLAM system, and the method includes:
recognizing an object captured by the camera using a preset object model;
mapping the coordinates of the object acquired by the camera to coordinates of a specified dimension; and
labeling the map generated by the SLAM system according to the recognition result and the coordinates, so as to generate a map labeled with the object.
Preferably, before recognizing the object captured by the camera using the preset object model, the method further includes:
performing object model training on all objects that need to be labeled in the SLAM system, so as to generate an object model corresponding to each of the objects.
Preferably, when the SLAM system is a laser SLAM system, the map is generated from laser data collected by the SLAM system, and the specified dimension is two dimensions.
Preferably, when the SLAM system is a visual SLAM system, the map is generated from data collected by the SLAM system through the camera, and the specified dimension is three dimensions.
Preferably, the type of camera corresponding to the laser SLAM system is a depth camera; and
the types of camera corresponding to the visual SLAM system include a monocular camera, a binocular camera, and the depth camera.
The present application further provides a robot, wherein the robot is provided with a camera and a SLAM system, and the robot further includes:
a recognition module, configured to recognize an object captured by the camera using a preset object model;
a mapping module, configured to map the coordinates of the object acquired by the camera to coordinates of a specified dimension; and
a labeling module, configured to label the map generated by the SLAM system according to the recognition result and the coordinates, so as to generate a map labeled with the object.
Preferably, for use before the object captured by the camera is recognized using the preset object model, the robot further includes:
a model training module, configured to perform object model training on all objects that need to be labeled in the SLAM system, so as to generate an object model corresponding to each of the objects.
Preferably, when the SLAM system is a laser SLAM system, the map is generated from laser data collected by the SLAM system, and the specified dimension is two dimensions.
Preferably, when the SLAM system is a visual SLAM system, the map is generated from data collected by the SLAM system through the camera, and the specified dimension is three dimensions.
Preferably, the type of camera corresponding to the laser SLAM system is a depth camera; and
the types of camera corresponding to the visual SLAM system include a monocular camera, a binocular camera, and the depth camera.
Compared with the prior art, the beneficial technical effects of the technical solution proposed in the embodiments of the present application include:
The method recognizes the object captured by the camera using a preset object model, maps the coordinates of the object acquired by the camera to coordinates of a specified dimension, and labels the map generated by the SLAM system according to the recognition result and the coordinates, so as to generate a map labeled with the object. By applying the technical solution proposed in the embodiments of the present application, the robot can automatically label objects on the map while building the map. This makes it more convenient for users to read annotations on the map and solves the problem that a robot cannot label objects while building a map.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of map generation provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of training the objects that need to be labeled, provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of the map generation flow of the laser SLAM system provided in Embodiment 1 of the present application;
FIG. 4 is a schematic diagram of the map generation flow of the visual SLAM system provided in Embodiment 2 of the present application;
FIG. 5 is a block diagram of the map generation modules provided by an embodiment of the present application.
DETAILED DESCRIPTION
As stated in the background of the present application, when the prior art builds a 2D map using laser SLAM, the information obtained is a two-dimensional plan view of the map; when a 3D map is built from the sparse point cloud of visual SLAM, the information obtained is a sparse 3D point cloud. What is lacking is a solution that allows a robot to label objects automatically while building a map.
In view of the problems in the background, the present application proposes a map generation method. The method provides a unified and complete approach for the robot to label objects automatically while it builds a map, so that it is immediately and directly clear what each object actually is, thereby solving the problem of automatically labeling objects during map building.
As shown in FIG. 1, which is a schematic flowchart of a method proposed by an embodiment of the present application for automatically labeling objects while a robot builds a map, the method is applied to a robot provided with a camera and a SLAM system. Specifically, the method includes the following steps:
S101: recognize an object captured by the camera using a preset object model.
In a preferred embodiment, before the object captured by the camera is recognized using the preset object model, object model training is performed on all objects that need to be labeled in the SLAM system, so as to generate an object model corresponding to each of the objects.
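As a hedged illustration only (the application does not name a particular detection model or training procedure), such object model training could be realized by fine-tuning an off-the-shelf detector on images of the objects that need to appear on the map; the library choice and helper name below are assumptions rather than part of the application:

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_object_model(num_classes):
    # num_classes counts the map-relevant object classes (door, table, ...) plus one background class.
    # Start from a detector pre-trained on generic images and replace its head so it
    # predicts only the classes that the map needs to label.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

The returned model would then be trained with standard supervised detection training on labeled images of those objects and stored as the "preset object model" used in step S101.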
The present application provides a multi-camera robot or SLAM system, in which a front camera, a rear camera, a left camera, and a right camera are installed on the robot body. Obviously, the described camera positions are only some of the embodiments of the present application, not all of them. The camera devices provided in the present application are depth cameras, monocular cameras, or binocular cameras.
The preset object model is obtained by having the laser or the vision sensor of the SLAM system scan a large number of physical objects; the SLAM system records and stores the resulting object image information, so that the objects captured by the camera can subsequently be recognized.
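A minimal sketch of step S101 under the assumptions above (the object model built earlier, a list of class names, and an RGB frame from the robot camera are illustrative inputs, not identifiers from the application):

import torch
import torchvision.transforms.functional as TF

def recognize_objects(object_model, frame_rgb, class_names, score_thresh=0.7):
    # Run the preset object model on one camera frame and keep confident detections.
    object_model.eval()
    image = TF.to_tensor(frame_rgb)            # H x W x 3 uint8 array -> float tensor in [0, 1]
    with torch.no_grad():
        pred = object_model([image])[0]        # torchvision detectors take a list of images
    detections = []
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if float(score) >= score_thresh:
            detections.append({
                "name": class_names[int(label)],
                "box_px": [float(v) for v in box],   # pixel box (x1, y1, x2, y2)
                "score": float(score),
            })
    return detections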
S102: map the coordinates of the object acquired by the camera to coordinates of a specified dimension.
How the coordinates of the object are acquired, i.e., how the SLAM system generates the map, is the same as in the prior art and is therefore not described in detail here.
Since the implementation and difficulty of SLAM are closely related to the form and installation of the sensors (sensors fall into two categories, laser and vision), after the camera acquires the coordinates of the object:
when the SLAM system is a laser SLAM system, the map is generated from laser data collected by the SLAM system, the specified dimension is two dimensions, and the type of camera corresponding to the laser SLAM system is a depth camera; or
when the SLAM system is a visual SLAM system, the map is generated from data collected by the SLAM system through the camera, the specified dimension is three dimensions, and the types of camera corresponding to the visual SLAM system include a monocular camera, a binocular camera, and the depth camera.
The SLAM system matches the corresponding camera according to the type of SLAM system (laser or visual) and then generates coordinates of the corresponding dimension.
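For concreteness, the first part of this coordinate mapping can be pictured as a standard pinhole back-projection of a pixel with known depth into a 3D point in the camera frame; the intrinsic parameters below are placeholders, not values from the application:

import numpy as np

def pixel_to_camera_point(u, v, depth_m, fx, fy, cx, cy):
    # Back-project pixel (u, v) with metric depth into the camera frame using the
    # pinhole model; fx, fy, cx, cy are the camera intrinsics.
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

The resulting camera-frame point is then transformed into the map frame and reduced to two or three dimensions depending on the SLAM system, as illustrated in the embodiments below.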
S103: label the map generated by the SLAM system according to the recognition result and the coordinates, so as to generate a map labeled with the object.
The SLAM system marks the recognition result together with the coordinates on the map it has generated, and the label specifies the information of the object. For example, when an open door is scanned, the label, combined with the coordinates, marks the matching information of the door on the map generated by the SLAM system, i.e., a map labeled with the object is generated.
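One non-normative way to picture this labeling step is to keep a list of annotations alongside the SLAM map, each tying a recognized label to its mapped coordinates (the data structure is an assumption for illustration):

def annotate_map(map_annotations, detections, mapped_coords):
    # map_annotations: list stored alongside the SLAM map.
    # detections:      recognition results from step S101, one entry per object.
    # mapped_coords:   per-object coordinates in the map frame from step S102
    #                  (2D for laser SLAM, 3D for visual SLAM).
    for det, coord in zip(detections, mapped_coords):
        map_annotations.append({"label": det["name"],
                                "coord": list(coord),
                                "score": det["score"]})
    return map_annotations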
Compared with the prior art, the beneficial technical effects of the technical solution proposed in the embodiments of the present application further include:
The embodiments of the present application disclose a map generation method and a robot. In a multi-camera robot, the method recognizes the object captured by the camera using a preset object model, maps the coordinates of the object acquired by the camera to coordinates of a specified dimension, and labels the map generated by the SLAM system according to the recognition result and the coordinates, so as to generate a map labeled with the object. By applying the technical solution proposed in the embodiments of the present application, the robot can automatically label the objects that need to be labeled on the map while building the map. This makes it more convenient for users to read annotations on the map and solves the problem that a robot cannot automatically label objects while building a map.
To further illustrate the technical idea of the present application, the technical solution of the present application is now described with reference to specific application scenarios. Since the implementation and difficulty of SLAM are closely related to the form and installation of the sensors (sensors fall into two categories, laser and vision), the present application has two different embodiments:
Embodiment 1. Referring to FIG. 3, which is a schematic flowchart of laser SLAM provided by an embodiment of the present application, the flow may include:
S301: the robot acquires laser data and depth camera data using laser scanning and a depth camera, respectively.
In the laser SLAM system, the laser scans the target object and captures image information of the target object; the robot records this information and generates the laser data.
At the same time, the depth camera also scans the target object and captures image information of the target object to generate the depth camera data.
The present application provides a depth camera that can be implemented in many different ways, for example binocular parallax, a single camera capturing the same scene from different angles while moving, photometric stereo, and so on; the scene model can even be reconstructed with machine learning, or distance can be computed by focusing several times at different distances.
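As one example of the binocular-parallax variant mentioned above, depth can be recovered from disparity with the usual rectified-stereo relation Z = f * B / d; the focal length and baseline below are placeholders, not values from the application:

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # disparity_px: horizontal pixel offset of the same point between the two images.
    # focal_px:     focal length in pixels; baseline_m: camera separation in metres.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px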
S302: the laser SLAM system acquires the laser data and generates a corresponding map.
The map can be represented in three ways: a grid representation, a geometric feature representation, or a topological representation. Laser SLAM builds a two-dimensional map, so in the two-dimensional world SLAM involves three quantities: the x coordinate, the y coordinate, and the heading angle.
S303: recognize the data collected by the depth camera using the preset object model.
The preset object model is obtained by having the laser or the vision sensor of the SLAM system scan a large number of physical objects; the SLAM system records and stores the resulting object image information, so that the object model can recognize the objects captured by the camera.
S304: map the depth-information coordinates to two-dimensional coordinates.
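A minimal sketch of S304 under the (x, y, heading) convention of S302: a point observed by the depth camera and already expressed in the robot frame is rotated by the heading angle and translated by the robot position to obtain its two-dimensional map coordinate. The frame names are assumptions for illustration.

import numpy as np

def robot_point_to_map_2d(point_robot_xy, robot_x, robot_y, robot_theta):
    # point_robot_xy: (x, y) of the detected object in the robot frame (metres),
    #                 e.g. a back-projected depth-camera point with its height dropped.
    # robot_x, robot_y, robot_theta: the robot pose estimated by laser SLAM.
    c, s = np.cos(robot_theta), np.sin(robot_theta)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return rotation @ np.asarray(point_robot_xy) + np.array([robot_x, robot_y])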
S305: combine the object model recognition result with the map generated by the laser SLAM system to generate a labeled two-dimensional map.
In this embodiment, the robot acquires laser data and depth camera data using laser scanning and a depth camera, respectively; the laser SLAM system acquires the laser data and generates a corresponding map; the data collected by the depth camera is recognized using the preset object model; the depth-information coordinates are mapped to two-dimensional coordinates; and the object model recognition result is combined with the map generated by the laser SLAM system to generate a labeled two-dimensional map. In this way, the robot automatically labels the objects that need to be labeled on the map while building the map.
Embodiment 2. Referring to FIG. 4, which is a schematic flowchart of visual SLAM provided by an embodiment of the present application, the flow may include:
S401: the robot acquires data using a monocular camera, a binocular camera, and a depth camera, respectively.
In the visual SLAM system, the camera scans the target object and captures image information of the target object; the robot records this information and generates the corresponding data.
The present application provides a visual SLAM system that is divided into four modules (apart from sensor data reading): visual odometry (VO), the back end, mapping, and loop closure detection.
S402: the visual SLAM system acquires the data and generates a corresponding map.
The map can be represented in three ways: a grid representation, a geometric feature representation, or a topological representation. Visual SLAM builds a three-dimensional map, and the three-dimensional world is considerably more complex, with six quantities: x, y, z, roll (Φ), yaw (ψ), and pitch (θ).
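For illustration, these six quantities determine a rigid transform. Assuming the common Z-Y-X (yaw-pitch-roll) Euler convention, which the application does not itself specify, a point expressed in the robot frame can be brought into the three-dimensional map frame as sketched below:

import numpy as np

def pose_to_rotation(roll, pitch, yaw):
    # Rotation matrix for a Z-Y-X Euler sequence (yaw about z, then pitch about y, then roll about x).
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rot_z = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    rot_y = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rot_x = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rot_z @ rot_y @ rot_x

def robot_point_to_map_3d(point_robot, x, y, z, roll, yaw, pitch):
    # Map a 3D point from the robot frame into the map frame using the 6-DOF pose.
    return pose_to_rotation(roll, pitch, yaw) @ np.asarray(point_robot) + np.array([x, y, z])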
S403: recognize the data collected by the depth camera using the preset object model.
The preset object model is obtained by having the laser or the vision sensor of the SLAM system scan a large number of physical objects; the SLAM system records and stores the resulting object image information, so that the object model can recognize the objects captured by the camera.
S404: map the depth-information coordinates to three-dimensional coordinates.
S405: combine the object model recognition result with the map generated by the visual SLAM system to generate a labeled three-dimensional map.
In this embodiment, the robot acquires data using monocular, binocular, and depth cameras, respectively; the visual SLAM system acquires the data and generates a corresponding map; the data collected by the depth camera is recognized using the preset object model; the depth-information coordinates are mapped to three-dimensional coordinates; and the object model recognition result is combined with the map generated by the visual SLAM system to generate a labeled three-dimensional map. In this way, the robot automatically labels the objects that need to be labeled on the map while building the map.
Based on the same inventive concept as the above method, an embodiment of the present application further provides a robot, as shown in FIG. 5, including:
a recognition module 52, configured to recognize an object captured by the camera using a preset object model;
a mapping module 53, configured to map the coordinates of the object acquired by the camera to coordinates of a specified dimension; and
a labeling module 54, configured to label the map generated by the SLAM system according to the recognition result and the coordinates, so as to generate a map labeled with the object.
In a preferred embodiment, for use before the object captured by the camera is recognized using the preset object model, the robot further includes:
a model training module 51, configured to perform object model training on all objects that need to be labeled in the SLAM system, so as to generate an object model corresponding to each of the objects.
Preferably, when the SLAM system is a laser SLAM system, the map is generated from laser data collected by the SLAM system, and the specified dimension is two dimensions.
Preferably, when the SLAM system is a visual SLAM system, the map is generated from data collected by the SLAM system through the camera, and the specified dimension is three dimensions.
Preferably, the type of camera corresponding to the laser SLAM system is a depth camera; and
the types of camera corresponding to the visual SLAM system include a monocular camera, a binocular camera, and the depth camera.
From the description of the above embodiments, those skilled in the art can clearly understand that the present application can be implemented by hardware, or by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the implementation scenarios of the present application.
Those skilled in the art can understand that the accompanying drawings are merely schematic diagrams of preferred implementation scenarios, and that the modules or flows in the drawings are not necessarily required for implementing the present application.
Those skilled in the art can understand that the modules in the apparatus of an implementation scenario may be distributed among the apparatuses of the implementation scenario as described, or may, with corresponding changes, be located in one or more apparatuses different from the present implementation scenario. The modules of the above implementation scenarios may be combined into one module or further split into multiple sub-modules.
The above serial numbers of the present application are for description only and do not represent the merits of the implementation scenarios.
The above disclosure is only a few specific implementation scenarios of the present application; however, the present application is not limited thereto, and any variation conceivable by those skilled in the art shall fall within the protection scope of the present application.