CN119229036A - Three-dimensional mapping method, device and storage medium - Google Patents

Three-dimensional mapping method, device and storage medium

Info

Publication number
CN119229036A
Authority
CN
China
Prior art keywords
instance object
point cloud
merging
target
instance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411760306.7A
Other languages
Chinese (zh)
Other versions
CN119229036B (en)
Inventor
钟鼎
陈威余
邹晨阳
刘向阳
唐伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Yungu Technology Co Ltd
Original Assignee
Zhongke Yungu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Yungu Technology Co Ltd
Priority to CN202411760306.7A
Publication of CN119229036A
Application granted
Publication of CN119229036B
Status: Active
Anticipated expiration


Abstract

Translated from Chinese


The present application discloses a three-dimensional mapping method, device and storage medium, relating to the technical field of three-dimensional mapping. The method includes: obtaining a local point cloud map corresponding to the current frame based on a preset algorithm; obtaining a global point cloud map for the target site; for any first instance object in the local point cloud map, searching the multiple second instance objects included in the global point cloud map for a target instance object matching that first instance object, and performing a point cloud merging operation on the first point cloud corresponding to that first instance object and the second point cloud corresponding to the target instance object, to update the global point cloud map; and, when a preset screening condition is met, performing a screening and clearing operation on any second instance object in the global point cloud map to update the global point cloud map. By exploiting a merging-times attribute on each coordinate point, the method deduplicates objects to be detected of different sizes accurately. Moreover, the memory usage of the global point cloud map is reduced and processing efficiency is improved.

Description

Three-dimensional mapping method, device and storage medium
Technical Field
The application relates to the technical field of three-dimensional mapping, and in particular to a three-dimensional mapping method, a three-dimensional mapping device and a storage medium.
Background
Three-dimensional mapping is the process of creating a model with three-dimensional data using three-dimensional modelling software, rendering a two-dimensional plane more intuitively and stereoscopically. During three-dimensional mapping, a robot continuously collects multi-frame RGB-D images (photos containing depth images); each image frame can be converted into three-dimensional point cloud data for its image region, and a global three-dimensional point cloud map is constructed and updated from the continuous multi-frame point clouds. When constructing the global point cloud map, the prior art merges new instance objects from the three-dimensional point clouds of consecutive image frames with old instance objects in the global point cloud map, to keep the global point cloud map dynamically updated. However, this merging relies on a relatively simple clustering algorithm that cannot account for the confidence distribution within the point cloud, so although ghosting is eventually eliminated, position distortion is introduced. In addition, the clustering algorithm requires a cluster-radius hyperparameter, which is generally set to the average size of the objects to be detected. This means the prior-art, cluster-based deduplication can only deduplicate instance objects near a certain standard size with high precision; instance objects that are smaller or larger cannot be deduplicated accurately, so the final mapping quality is poor. Meanwhile, the number of points in the global point cloud map surges periodically as consecutive frames are merged, so deferred downsampling incurs a heavy memory cost.
Disclosure of Invention
The embodiment of the application aims to provide a three-dimensional map building method, a three-dimensional map building device and a storage medium, which are used for solving the technical problems of position distortion, poor map building effect and high memory cost in the global point cloud map updating process in the prior art.
In order to achieve the above object, a first aspect of the present application provides a three-dimensional mapping method, including:
Acquiring a current frame acquired by a robot, and acquiring a local point cloud image corresponding to the current frame and aiming at a target site based on a preset algorithm, wherein the local point cloud image comprises first point clouds respectively corresponding to a plurality of first instance objects;
acquiring a global point cloud image aiming at a target site, wherein the global point cloud image comprises a plurality of second point clouds corresponding to a plurality of second instance objects respectively;
for any first instance object, searching a target instance object matched with any first instance object in a plurality of second instance objects included in the global point cloud image;
For any first instance object, performing a point cloud merging operation on the first point cloud corresponding to that first instance object and the second point cloud corresponding to the target instance object, to update the global point cloud image, wherein the point cloud merging operation is used for merging any first coordinate point in the first point cloud into a target coordinate point adjacent to that first coordinate point in the second point cloud, and updating the first merging times of the target coordinate point;
and under the condition that a preset screening condition is met, executing screening and clearing operation on any second instance object in the global point cloud image to update the global point cloud image, wherein the screening and clearing operation is used for deleting the corresponding coordinate points according to the first merging times of the second coordinate points in the second point cloud corresponding to any second instance object and a preset point cloud deleting condition.
The second aspect of the present application provides a three-dimensional mapping apparatus, including:
a memory configured to store instructions;
and a processor configured to call instructions from the memory and when executing the instructions, to implement a three-dimensional mapping method according to the above.
A third aspect of the present application provides a machine-readable storage medium having stored thereon instructions for causing a machine to perform a three-dimensional mapping method according to the above.
Through the technical scheme, a ghost-removal algorithm at the coordinate-point level is realized by utilizing the merging-times attribute of the coordinate points, which yields an accurate deduplication effect for objects to be detected of different sizes. Moreover, the coordinate-point merging performed for each frame achieves a certain downsampling effect, so the number of points in the global point cloud image can be controlled at every frame, the memory occupied by the global point cloud image is reduced, and the processing efficiency of the subsequent ghost-removal algorithm is improved.
Additional features and advantages of embodiments of the application will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain, without limitation, the embodiments of the application. In the drawings:
FIG. 1 schematically illustrates a flow diagram of a three-dimensional mapping method according to an embodiment of the application;
FIG. 2 schematically shows a block diagram of a three-dimensional mapping apparatus according to an embodiment of the present application;
fig. 3 schematically shows a schematic structure of a computer device according to an embodiment of the application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the detailed description described herein is merely for illustrating and explaining the embodiments of the present application, and is not intended to limit the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that, if directional indications (such as up, down, left, right, front, and rear) are referred to in the embodiments of the present application, the directional indications are merely used to explain the relative positional relationship, movement conditions, and the like between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indications change correspondingly.
In addition, if there is a description of "first", "second", etc. in the embodiments of the present application, the description is for descriptive purposes only and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may be combined with each other, but such combinations must be realizable by those skilled in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered absent and outside the scope of protection claimed in the present application.
Fig. 1 schematically shows a flow diagram of a three-dimensional mapping method according to an embodiment of the application. As shown in fig. 1, an embodiment of the present application provides a three-dimensional mapping method, which may include the following steps.
S102, acquiring a current frame acquired by a robot, and acquiring a local point cloud image corresponding to the current frame and aiming at a target site based on a preset algorithm, wherein the local point cloud image comprises first point clouds respectively corresponding to a plurality of first instance objects.
It is understood that a robot here refers to a robot that can move and acquire images. The target site refers to a three-dimensional space designated by a technician for three-dimensional mapping. The current frame refers to the image of a local spatial region acquired by the robot at the current moment while imaging the target site. The local point cloud image is the point cloud image of the local space corresponding to the current frame, and contains the three-dimensional point cloud data of that local spatial region. Specifically, during three-dimensional mapping, the robot may continuously collect multiple frames of images containing depth information, such as RGB-D maps. Each RGB-D frame, combined with the robot's camera pose, can be converted into the three-dimensional point cloud data of the spatial region it covers (the local point cloud image) through a target detection algorithm or a target segmentation algorithm together with a two-dimensional-to-three-dimensional mapping algorithm; the global three-dimensional point cloud map is then constructed and updated from the three-dimensional point cloud data of consecutive RGB-D frames.
Specifically, points in three-dimensional point cloud data contained in the local point cloud image belong to different groups or categories, wherein the points in each group belong to the same object or the same area, and the point cloud is divided according to characteristics such as space, geometry, texture and the like, so that the point clouds divided in the same category have similar characteristics. An instance object refers to an object created by instantiating a class, and a first instance object refers to an object or region of the instantiated class in the local point cloud. Specifically, the local point cloud image includes a plurality of first instance objects. Wherein each first instance object has a corresponding first point cloud. For example, the local point cloud image may include example objects such as a table, a chair, a cup, and the like.
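As an illustrative sketch only (the patent does not prescribe a specific conversion), the RGB-D-to-point-cloud step described above can be written with the standard pinhole back-projection model; the intrinsics `fx, fy, cx, cy` and the camera pose `(R, t)` are assumed inputs, not values from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, metres) into an N x 3 camera-frame
    point cloud using the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # horizontal offset scaled by depth
    y = (v - cy) * z / fy          # vertical offset scaled by depth
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]      # drop pixels with no depth reading

def transform_to_world(points, R, t):
    """Apply the robot's camera pose (rotation R, translation t) so the local
    point cloud lands in the world frame of the global map."""
    return points @ R.T + t
```

Per-frame, the world-frame points from `transform_to_world` are what get segmented into first instance objects and merged into the global map.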
S104, acquiring a global point cloud image aiming at the target site, wherein the global point cloud image comprises a plurality of second point clouds corresponding to the second instance objects.
It is understood that the global point cloud image refers to a point cloud image of a spatial region corresponding to the entire target site. It will be appreciated that objects in the target field may move and change, and then the local point cloud corresponding to the current frame may update the global point cloud. The global point cloud image comprises a plurality of second instance objects, wherein the second instance objects refer to objects or areas of an instantiation class in the global point cloud image, and each second instance object has a corresponding second point cloud. It will be appreciated that the first instance object is the new instance object at the current time and the second instance object is the old instance object at the historical time. Since the first instance object and the second instance object have corresponding attributes respectively and can be positioned to corresponding spatial positions respectively, the global point cloud image can be updated by comparing and analyzing the first instance object and the second instance object.
S106, for any first instance object, searching a target instance object matched with any first instance object in a plurality of second instance objects included in the global point cloud image.
It will be appreciated that, when a first instance object in the local point cloud image and a second instance object in the global point cloud image are actually the same object, updating the global point cloud image with the point cloud data of the local point cloud image would leave a duplicated instance object in the global point cloud image. Therefore, the second instance object that matches the first instance object of the local point cloud image, namely the target instance object, must be found in the global point cloud image. Matching with the first instance object includes calculating the spatial overlap, visual feature similarity, etc., between the two instance objects.
S108, aiming at any first instance object, performing point cloud merging operation on a first point cloud corresponding to any first instance object and a second point cloud corresponding to a target instance object to update a global point cloud diagram, wherein the point cloud merging operation is used for merging any first coordinate point in the first point cloud to a target coordinate point adjacent to any first coordinate point in the second point cloud, and updating the first merging times of the target coordinate point.
It can be understood that the point cloud merging operation is performed on the first point cloud corresponding to the first instance object and the second point cloud corresponding to the target instance object. Specifically, the first point cloud includes a plurality of first coordinate points and the second point cloud includes a plurality of second coordinate points, each coordinate point having a unique identification. For a matched pair of first instance object and target instance object, each first coordinate point in the first point cloud is traversed, i.e., the spatial coordinates (x, y, z) of each first coordinate point are traversed. Then, for each first coordinate point, whether a second coordinate point adjacent to it exists can be confirmed from the spatial coordinates of the second coordinate points in the second point cloud. If so, the second coordinate point found in the second point cloud may be determined as the target coordinate point adjacent to the first coordinate point, and the image information contained in the target coordinate point may be considered identical to that of the first coordinate point. The target coordinate point may then be merged with the first coordinate point; merging may consist of retaining the target coordinate point in the global point cloud and deleting the first coordinate point. Meanwhile, every time the target coordinate point is merged, its first merging times must be updated in real time, the first merging times being the accumulated number of times a coordinate point has undergone the point cloud merging operation. After the point cloud merging operation has been completed for the first point cloud of every first instance object in the local point cloud image corresponding to the current frame, an updated global point cloud image is obtained.
In this way, during point cloud merging each coordinate point retains its merging-times attribute as a confidence of spatial existence: the larger a coordinate point's first merging times, the more likely it really exists at the corresponding position. The coordinate-point merging performed for each frame also achieves a certain downsampling effect, so the number of points in the global point cloud image can be controlled at every frame, the memory occupied by the global point cloud image is reduced, and the processing efficiency of the subsequent ghost-removal algorithm is improved.
And S110, under the condition that the preset screening condition is met, screening and clearing operation is carried out on any second instance object in the global point cloud image so as to update the global point cloud image, wherein the screening and clearing operation is used for deleting the corresponding coordinate points according to the first merging times of the second coordinate points in the second point cloud corresponding to any second instance object and the preset point cloud deleting condition.
It can be appreciated that the global point cloud updated by the foregoing scheme may still suffer from instance-object ghosting. The preset screening condition is a condition, set by a technician according to technical experience, for screening out second instance objects in the global point cloud image. After the global point cloud image is updated, a screening and clearing operation is further performed on the second instance objects of the updated global point cloud image according to the preset screening condition, to further update the global point cloud image and thereby solve the instance-object ghosting problem. The screening and clearing operation deletes coordinate points according to the first merging times of the second coordinate points in the second point cloud corresponding to any second instance object and a preset point cloud deletion condition; the preset point cloud deletion condition specifies, within the preset screening condition, when a second coordinate point of the second point cloud is deleted. During point cloud merging, each coordinate point retains its merging-times attribute as a confidence of spatial existence: the larger the first merging times, the more likely the coordinate point really exists at the corresponding position. A deduplication algorithm based on the first-merging-times attribute can therefore fully account for the confidence distribution within the point cloud, avoid removing accurate coordinate points while keeping inaccurate ones, improve the positional accuracy of each instance object's point cloud, and ultimately improve the accuracy of three-dimensional mapping.
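A minimal sketch of the screening and clearing operation, assuming the preset point cloud deletion condition is simply a minimum first-merging-times threshold (the patent leaves the concrete condition to the practitioner, so `min_merges` is a hypothetical parameter):

```python
import numpy as np

def screen_instance_points(points, merge_counts, min_merges=3):
    """Keep only coordinate points whose accumulated first merging times reach
    the threshold; rarely-merged points are treated as low-confidence ghosts
    and deleted from the second instance object's point cloud."""
    keep = merge_counts >= min_merges
    return points[keep], merge_counts[keep]
```

Because the merging-times attribute acts as a per-point confidence, this threshold adapts to objects of any size, unlike a fixed cluster radius tuned to an average object size.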
Through the technical scheme, the double image removing algorithm based on the coordinate point level is realized by utilizing the merging time attribute of the coordinate points, and the double image removing method has accurate double image removing effect on objects to be detected with different sizes. And the coordinate point combination of each frame realizes a certain downsampling effect, so that the point cloud quantity of the global point cloud image can be controlled in each frame, the memory occupation of the global point cloud image is reduced, and the processing efficiency of the follow-up ghost image removing algorithm is improved.
In the embodiment of the application, searching the plurality of second instance objects included in the global point cloud image for the target instance object matching any first instance object includes: for any first instance object, determining the spatial overlap degree between that first instance object and each second instance object included in the global point cloud image; determining, among the plurality of second instance objects, a second instance object whose spatial overlap degree is larger than a first preset threshold as a third instance object; determining the visual feature similarity between the third instance object and that first instance object; and, when the visual feature similarity between the third instance object and that first instance object is larger than a second preset threshold, determining the third instance object as the target instance object matching that first instance object.
It is appreciated that the spatial overlap degree may be represented by the spatial IoU (Intersection over Union), an indicator of the overlap of two bounding boxes. The spatial IoU ranges between 0 and 1, with higher values indicating a higher degree of coincidence of the two bounding boxes. For any first instance object, the overlap degree between it and each second instance object can be determined by computing the spatial IoU from the bounding box of the first instance object and the bounding box of each second instance object. The first preset threshold is an empirical value set by a technician for the spatial overlap degree. Any second instance object in the global point cloud image whose spatial overlap degree with the first instance object is larger than the first preset threshold can be selected as a third instance object. Further, for any first instance object, a third instance object whose visual feature similarity with the first instance object is larger than the second preset threshold is screened out from the third instance objects and taken as the target instance object. It is understood that visual feature similarity measures how close two images or objects are in their visual features, based on low-level image features such as color, texture, and shape. Specifically, computing visual feature similarity generally involves key elements such as global similarity, local similarity, and structural similarity. Global similarity considers the color, texture, shape, etc. of the whole image; local similarity focuses on features of a particular region in the image, such as the details of an object or scene; structural similarity measures structural relationships between images, such as the layout of objects and scenes. Then, when both the spatial overlap degree and the visual feature similarity satisfy the conditions set by the technician, the instance object in the global point cloud image may be considered the target instance object matching the first instance object.
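The two-stage match described above can be sketched as follows; the axis-aligned bounding boxes, the cosine feature similarity, and both threshold values are illustrative assumptions rather than details fixed by the patent:

```python
import numpy as np

def aabb_iou(box_a, box_b):
    """Spatial IoU of two axis-aligned 3-D bounding boxes, each given as a
    (min_xyz, max_xyz) pair of arrays."""
    inter_min = np.maximum(box_a[0], box_b[0])
    inter_max = np.minimum(box_a[1], box_b[1])
    inter = np.clip(inter_max - inter_min, 0.0, None).prod()   # 0 if disjoint
    vol_a = (box_a[1] - box_a[0]).prod()
    vol_b = (box_b[1] - box_b[0]).prod()
    union = vol_a + vol_b - inter
    return float(inter / union) if union > 0 else 0.0

def cosine_similarity(f_a, f_b):
    """One plausible visual feature similarity: cosine of two feature vectors."""
    return float(np.dot(f_a, f_b) / (np.linalg.norm(f_a) * np.linalg.norm(f_b)))

def find_target_instance(first_obj, second_objs, iou_thresh=0.3, sim_thresh=0.8):
    """Stage 1: spatial-overlap filter (yields third instance objects);
    stage 2: visual-feature-similarity check (yields the target instance object)."""
    for obj in second_objs:
        if aabb_iou(first_obj["box"], obj["box"]) > iou_thresh:
            if cosine_similarity(first_obj["feat"], obj["feat"]) > sim_thresh:
                return obj
    return None
```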
In the embodiment of the application, for any first instance object, performing the point cloud merging operation on the first point cloud corresponding to that first instance object and the second point cloud corresponding to the target instance object includes: determining the neighborhood space corresponding to each first coordinate point of the first point cloud corresponding to that first instance object; determining whether any second coordinate point included in the second point cloud corresponding to the target instance object exists in the neighborhood space corresponding to the first coordinate point; determining a second coordinate point of the second point cloud that exists in the neighborhood space corresponding to the first coordinate point as the target coordinate point; and updating the first merging times of the target coordinate point in the global point cloud image, so as to execute the point cloud merging operation.
It can be understood that, for a matched pair of first instance object and target instance object, each first coordinate point in the first point cloud corresponding to the first instance object is traversed, i.e., the spatial coordinates (x, y, z) of each first coordinate point are traversed, and the neighborhood space (x±a, y±a, z±a) of the first coordinate point is searched for any second coordinate point of the second point cloud corresponding to the target instance object. If one is found, that second coordinate point may be determined as the target coordinate point adjacent to the first coordinate point, and the image information contained in the target coordinate point may be considered identical to that of the first coordinate point. In particular, when searching the second point cloud corresponding to the target instance object, several second coordinate points may lie in the neighborhood space of the first coordinate point; to improve computational efficiency, the first second coordinate point found may be taken as the target coordinate point before continuing to the next first coordinate point. The target coordinate point may then be merged with the first coordinate point; merging may consist of retaining the target coordinate point in the global point cloud and deleting the first coordinate point. Meanwhile, every time the target coordinate point is merged, its first merging times must be updated in real time, the first merging times being the accumulated number of times a coordinate point has undergone the point cloud merging operation. After the point cloud merging operation has been completed for the first point cloud of every first instance object in the local point cloud image corresponding to the current frame, an updated global point cloud image is obtained.
In this way, during point cloud merging each coordinate point retains its merging-times attribute as a confidence of spatial existence: the larger a coordinate point's first merging times, the more likely it really exists at the corresponding position. The coordinate-point merging performed for each frame also achieves a certain downsampling effect, so the number of points in the global point cloud image can be controlled at every frame, the memory occupied by the global point cloud image is reduced, and the processing efficiency of the subsequent ghost-removal algorithm is improved.
In the embodiment of the application, performing the point cloud merging operation on the first point cloud corresponding to any first instance object and the second point cloud corresponding to the target instance object further includes: for any first instance object, when no second coordinate point included in the second point cloud exists in the neighborhood space corresponding to a first coordinate point, adding that first coordinate point to the global point cloud map, so as to execute the point cloud merging operation.
Specifically, the spatial coordinates (x, y, z) of each first coordinate point are traversed, and the neighborhood space (x±a, y±a, z±a) of the first coordinate point is searched for any second coordinate point of the second point cloud corresponding to the target instance object. If none is found, the first coordinate point can be added to the global point cloud image to execute the point cloud merging operation. In this way, new image information appearing in the local point cloud image is supplemented into the global point cloud image, achieving dynamic update of the global point cloud image.
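The merging described over the last few paragraphs can be sketched with a simple voxel-hash neighbour lookup. The cell size `a` and the data layout are assumptions for illustration; the patent specifies only the (x±a, y±a, z±a) neighborhood test, the merge-count update, and the append of unmatched points:

```python
import numpy as np
from collections import defaultdict
from itertools import product

def merge_point_clouds(global_pts, merge_counts, local_pts, a=0.05):
    """Merge each first coordinate point into a second coordinate point lying in
    its neighborhood space (x±a, y±a, z±a), bumping that point's first merging
    times; first points with no neighbour are appended to the global map."""
    cell = lambda p: tuple(np.floor(p / a).astype(int))
    grid = defaultdict(list)                   # voxel hash over global points
    for idx, p in enumerate(global_pts):
        grid[cell(p)].append(idx)

    merged = [np.asarray(p) for p in global_pts]
    counts = list(merge_counts)
    for p in local_pts:
        cx, cy, cz = cell(p)
        target = None
        # The box [p - a, p + a] can only intersect the 27 cells around p's cell.
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for idx in grid.get((cx + dx, cy + dy, cz + dz), ()):
                if np.all(np.abs(merged[idx] - p) <= a):
                    target = idx               # first match wins, for efficiency
                    break
            if target is not None:
                break
        if target is not None:
            counts[target] += 1                # first point absorbed into target
        else:
            grid[cell(p)].append(len(merged))  # new information: append point
            merged.append(np.asarray(p))
            counts.append(1)
    return np.array(merged), np.array(counts)
```

Taking the first neighbour found mirrors the efficiency shortcut described above, and the per-frame absorption of matched points is what provides the implicit downsampling.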
In the embodiment of the application, each second instance object corresponds to a visual feature comprising a plurality of dimension vectors, and the method further includes: for any first instance object, once the point cloud merging operation has been completed for all first coordinate points included in its corresponding first point cloud, executing an instance merging operation on that first instance object and the target instance object and updating the second merging times of the target instance object, the instance merging operation being used to merge the visual features of the first instance object and the target instance object; determining the merging weight corresponding to that first instance object according to the second merging times of its target instance object; and updating the value of each dimension vector included in the visual feature corresponding to the target instance object according to the merging weight corresponding to that first instance object, so as to update the global point cloud map.
It may be appreciated that the instance merging operation is a merging operation of instance objects between the local point cloud image and the global point cloud image, and may specifically be a merging of visual features corresponding to the instance objects. That is, any first instance object is fused with the visual features of the corresponding target instance object. Wherein, the fusion of the visual features can be realized by updating the values of the dimension vectors. It can be appreciated that, since the spatial overlapping degree and the visual feature similarity of the first instance object and the corresponding target instance object reach the preset threshold, the two may be defined as substantially the same instance object. Specifically, the second merging times of the target instance object can be calculated to obtain the merging weight corresponding to the instance merging operation of the first instance object, so as to serve as the basis of the instance merging operation. The merging weight can be obtained by normalizing the second merging times. The second merging times refer to the accumulated merging times of the instance object executing the instance merging operation. Then, for any first instance object, the value of each dimension vector included by the visual feature corresponding to the target instance object may be updated according to the merging weight corresponding to any first instance object, so as to update the global point cloud image.
In an embodiment of the present application, for any one of the first instance objects, updating the value of each dimension vector included in the visual feature corresponding to the target instance object according to the merge weight corresponding to any one of the first instance objects, so as to update the global point cloud image includes updating the value of each dimension vector according to formula (1):
$$f_{t,j}^{\mathrm{new}} = (1 - w)\, f_{t,j}^{\mathrm{old}} + w\, f_{s,j} \tag{1}$$
Wherein, $f_{t,j}^{\mathrm{new}}$ refers to the updated value of the j-th dimension vector included in the visual feature corresponding to the target instance object, $w$ refers to the merging weight corresponding to any first instance object, $f_{t,j}^{\mathrm{old}}$ refers to the value, before the update, of the j-th dimension vector included in the visual feature corresponding to the target instance object, and $f_{s,j}$ refers to the value of the j-th dimension vector included in the visual feature corresponding to any first instance object.
In an embodiment of the present application, for any first instance object, determining, according to the second merging times of the target instance object corresponding to any first instance object, the merging weight corresponding to any first instance object includes calculating the merging weight corresponding to any first instance object according to the following formula (2):
$$w = \frac{1}{p + 1} \tag{2}$$
Wherein, $w$ refers to the merging weight corresponding to any first instance object, and $p$ refers to the second merging times of the target instance object corresponding to the first instance object. It will be appreciated that the more second merging times the target instance object has accumulated, the smaller the merging weight $w$, so the greater the influence of the target instance object's visual-feature dimension vectors on the updated values, and the smaller the influence of the first instance object's dimension vectors. That is, the more second merging times there are, the more image information of the old instance object is retained by the updated global point cloud image, which improves the stability of the image information in the three-dimensional map and thus the precision of the three-dimensional map.
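Assuming the merging weight is obtained by normalizing the second merging times as w = 1/(p+1) (an assumption consistent with the behavior described above, where more accumulated merges yield a smaller weight), the visual-feature fusion can be sketched as follows with illustrative names:

```python
# Sketch of visual-feature fusion on an instance merge (illustrative assumptions:
# merging weight w = 1 / (p + 1), where p is the target's second merging times).

def merge_weight(p):
    """Smaller weight as the target accumulates more merges."""
    return 1.0 / (p + 1)

def fuse_features(target_feat, first_feat, p):
    """Running-average update per dimension: the more often the target was
    merged, the smaller w and the more its old feature values are preserved."""
    w = merge_weight(p)
    return [(1.0 - w) * t + w * f for t, f in zip(target_feat, first_feat)]
```

Under this sketch the fused feature is an incremental mean of all observations, which matches the stability argument above.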
In the embodiment of the application, performing the screening and clearing operation on any second instance object in the global point cloud image to update the global point cloud image, in the case that the preset screening condition is met, comprises: for any second instance object, determining the first merging times of any second coordinate point of the corresponding second point cloud for the current frame and for each target historical frame before the current frame, and generating a cache array corresponding to the second coordinate point according to these first merging times; for any second instance object, determining the merging growth rate of any second coordinate point of the corresponding second point cloud at the current frame according to the cache array corresponding to the second coordinate point; and for any second instance object, performing the screening and clearing operation on the second instance object according to the merging growth rate of each second coordinate point of the corresponding second point cloud and a preset point cloud deletion condition, so as to update the global point cloud map.
It is understood that a target historical frame refers to a historical frame selected from the plurality of historical frames acquired before the current frame, for example the first 3 frames or the first 5 frames before the current frame. Specifically, when the point cloud merging operation is performed, if a target coordinate point corresponding to the first coordinate point exists, the cache array corresponding to the target coordinate point is also updated. The first merging times of each second coordinate point of the second point cloud in each historical frame and in the current frame, stored in the cache array, can reflect the frame-by-frame merge growth of the point cloud while the local point cloud image is merged into the global point cloud image, namely the merging growth rate. The preset point cloud deletion condition may be a deletion condition set for the merging growth rate.
Specifically, after the point cloud merging operation and the instance merging operation have been performed for m frames, a plurality of repeated coordinate points may still exist, so a screening and clearing operation based on the first merging times needs to be performed once on the second point cloud corresponding to each second instance object in the global point cloud image, thereby solving the object ghosting problem. Each second coordinate point within the second point cloud of each second instance object may then be polled. For each second coordinate point, a cache array of size m+1 is maintained in advance, which stores the first merging times of the second coordinate point after each of the last m+1 frames has been merged. For example, the second coordinate point with id 001 may maintain a cache array {100,101,102,103,103,104} (assuming m=5), indicating that it has been merged 104 times in total after the current frame, 103 times after the previous frame, and so on. The cache array is updated every frame: each element is shifted forward by one position, and the last element is filled with the merge count after the current frame has been merged. For each second coordinate point, the merge count of the current frame (the value of the last element of the cache array) and the merge count m frames before it (the value of the first element of the cache array) are checked. Further, the merging growth rate of the second coordinate point may be calculated based on the cache array.
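The per-point cache-array update described above can be sketched as a fixed-size sliding window (the function name and the default m are illustrative assumptions):

```python
from collections import deque

# Sketch of the (m+1)-sized cache array holding a point's merge count after
# each of the last m+1 frames (illustrative; m = 5 as in the example above).

def update_cache(cache, merge_count_after_frame, m=5):
    """Shift every element forward one position and append the merge count
    after the current frame's merging; keeps at most m+1 entries."""
    cache.append(merge_count_after_frame)
    while len(cache) > m + 1:
        cache.popleft()   # oldest entry falls out of the window
    return cache
```

Using `deque(maxlen=m + 1)` would give the same shifting behavior without the explicit loop; the loop is kept here to mirror the element-by-element description above.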
Then, for any second instance object, according to the merging growth rate corresponding to each second coordinate point of the corresponding second point cloud, it can be judged whether the merging growth rate meets the preset point cloud deletion condition, so that whether to perform the screening and clearing operation on the second instance object can be determined according to the judgment result, thereby updating the global point cloud image. In this way, coordinate points can be accurately screened and cleared, solving the ghosting problem of instance objects.
In the embodiment of the application, the preset screening condition means that the frame number of the current frame matches the preset interval frame number. For any second instance object, determining the merging growth rate of any second coordinate point at the current frame according to the cache array corresponding to the second coordinate point of the second point cloud comprises: for any second instance object, determining the first target merging times and the second target merging times in the cache array corresponding to any second coordinate point of the corresponding second point cloud, wherein the first target merging times refers to the first merging times corresponding to the first target frame located the preset interval frame number before the current frame, and the second target merging times refers to the first merging times corresponding to the current frame after the point cloud merging operation is completed; and for any second instance object, determining the first merging growth rate corresponding to any second coordinate point according to the first target merging times and the second target merging times of the second coordinate point.
It is understood that the preset screening condition means that the number of frames of the current frame corresponds to the preset interval number of frames. For example, the preset screening condition is that a screening and clearing operation is performed once every m frames, the interval between the number of frames of the current frame and the number of frames of the last screening and clearing operation is m frames, and the screening and clearing operation can be performed for the global point cloud image updated by the current frame.
For any second instance object, the first target merging times and the second target merging times in the cache array corresponding to any second coordinate point of the corresponding second point cloud are determined, wherein the first target merging times is the first merging times corresponding to the first target frame located the preset interval frame number before the current frame, and the second target merging times is the first merging times corresponding to the current frame after the point cloud merging operation is completed. For example, for the cache array {100,101,102,103,103,104}, the first target merging times is 100 and the second target merging times is 104. Further, for any second instance object, the first merging growth rate corresponding to any second coordinate point is determined according to the first target merging times and the second target merging times of the second coordinate point. It is understood that the first merging growth rate refers to the growth rate of the merge count over the recent window.
In an embodiment of the present application, for any second example object, determining, according to a first target merging number and a second target merging number corresponding to any second coordinate point of a second point cloud corresponding to the second example object, a first merging growth rate corresponding to any second coordinate point includes calculating, according to the following formula (3), the first merging growth rate corresponding to any second coordinate point:
$$v_i^{(1)} = \frac{b_i - a_i}{m} \tag{3}$$
Wherein, $v_i^{(1)}$ refers to the first merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, $a_i$ refers to the first target merging times corresponding to the i-th second coordinate point, $b_i$ refers to the second target merging times corresponding to the i-th second coordinate point, and $m$ is the preset interval frame number.
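Assuming the first merging growth rate is the average per-frame increase of the merge count over the m-frame window, i.e. (second target merging times − first target merging times) / m, a minimal sketch reading both values from the cache array:

```python
# Sketch: first merging growth rate from a cache array of merge counts
# (illustrative assumption: rate = (last - first) / m over the m-frame window).

def first_growth_rate(cache, m):
    a = cache[0]    # first target merging times: m frames before the current frame
    b = cache[-1]   # second target merging times: after the current frame's merge
    return (b - a) / m
```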
In the embodiment of the application, the point cloud merging operation is further used for updating the survival frame number of the target coordinate point in the second point cloud corresponding to any target instance object. For any second instance object, determining the merging growth rate of any second coordinate point at the current frame according to the cache array corresponding to the second coordinate point of the second point cloud comprises: for any second instance object, determining the survival frame number, at the current frame, of any second coordinate point of the corresponding second point cloud; and for any second instance object, determining the second merging growth rate corresponding to any second coordinate point according to the first merging times and the survival frame number of the second coordinate point at the current frame.
It can be understood that, when the point cloud merging operation is performed, if a target coordinate point corresponding to the first coordinate point exists, the survival frame number corresponding to the target coordinate point is also updated. Specifically, the first merging times of the target coordinate point is increased by 1 (the default initial value is 0), the survival frame number is increased by 1 (the default initial value is 0), and the cache array corresponding to the target coordinate point is updated. For any second instance object, the survival frame number, at the current frame, of any second coordinate point of the corresponding second point cloud can be determined, and then the second merging growth rate corresponding to the second coordinate point can be determined according to its first merging times and survival frame number at the current frame. The second merging growth rate refers to the growth rate of the accumulated merge count over the point's lifetime.
In an embodiment of the present application, for any second example object, determining, according to the number of times of first merging and the number of survival frames of the current frame of any second coordinate point of the second point cloud corresponding to any second example object, a second merging growth rate corresponding to any second coordinate point includes calculating, according to the following formula (4), the second merging growth rate corresponding to any second coordinate point:
$$v_i^{(2)} = \frac{c_i}{s_i} \tag{4}$$
Wherein, $v_i^{(2)}$ refers to the second merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, $c_i$ refers to the first merging times of the i-th second coordinate point at the current frame, and $s_i$ refers to the survival frame number of the i-th second coordinate point at the current frame.
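Assuming the second merging growth rate is simply the accumulated merge count divided by the survival frame number, a minimal sketch (the zero-frame guard is an added assumption for points that have never been merged):

```python
# Sketch: second merging growth rate = accumulated merge count / survival frames
# (illustrative assumption based on the definitions above).

def second_growth_rate(merge_count, survival_frames):
    if survival_frames == 0:
        return 0.0   # never merged yet: no measurable growth
    return merge_count / survival_frames
```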
In the embodiment of the application, the merging growth rate comprises a first merging growth rate, and the function expression of the preset point cloud deletion condition is shown in the following formula (5):
$$v_i^{(1)} < \alpha_1, \quad j \in \mathbb{Z}^{+} \tag{5}$$
Wherein, $v_i^{(1)}$ refers to the first merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, $j$ refers to the number of checkpoints viewed forward from the current frame, $\alpha_1$ refers to the first deletion dynamics control parameter, and $j$ is a positive integer. The larger the first deletion dynamics control parameter $\alpha_1$, the stronger the deletion and the stronger the de-duplication effect; however, this can also result in excessive de-duplication, so $\alpha_1$ can be set to $2/j$ (rounded down) to achieve a balanced de-duplication effect while retaining important image information.
In the embodiment of the application, the merging growth rate comprises a second merging growth rate, and the function expression of the preset point cloud deletion condition is shown in the following formula (6):
$$v_i^{(2)} < \alpha_2 \cdot \frac{1}{N}\sum_{k=1}^{N} v_k^{(2)} \tag{6}$$
Wherein, $v_i^{(2)}$ refers to the second merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, $v_k^{(2)}$ refers to the second merging growth rate corresponding to the k-th second coordinate point, $N$ refers to the total number of second coordinate points of the second point cloud corresponding to the second instance object, and $\alpha_2$ refers to the second deletion dynamics control parameter. Generally, the larger the second deletion dynamics control parameter $\alpha_2$, the stronger the deletion and the stronger the de-duplication effect; however, this may also result in excessive de-duplication, so $\alpha_2$ can be set to 1/3 to achieve a balanced de-duplication effect while retaining important image information.
The function expression (5) of the preset point cloud deletion condition is referred to as deletion condition 1, and the function expression (6) as deletion condition 2. For each second coordinate point in the global point cloud, if it meets deletion condition 1 or deletion condition 2, the second coordinate point is deleted, and its corresponding cache array, survival frame number, merging growth rates, and other attributes are deleted as well. If neither deletion condition is met, the second coordinate point is retained in the global point cloud image, so as to update the global point cloud image.
The method further comprises: for any second instance object, determining the second target merging times in the cache array corresponding to any second coordinate point of the corresponding second point cloud; and for any second instance object, in the case that the second target merging times corresponding to any second coordinate point of the corresponding second point cloud is greater than or equal to the retention critical frame number, retaining the second coordinate point in the updated global point cloud image.
It can be understood that, after the observation view angle of an instance object changes, some parts of the instance object may not be observed for a long time, so that their merge counts grow slowly and they would finally be deleted by mistake; to avoid this, a refuse-to-delete condition can be set. That is, for any second instance object, the second target merging times in the cache array corresponding to any second coordinate point of the corresponding second point cloud is determined, and in the case that this second target merging times is greater than or equal to the retention critical frame number, the second coordinate point is retained in the updated global point cloud image. In other words, even if a second coordinate point meets the preset point cloud deletion condition, if it is finally judged to also meet the refuse-to-delete condition, it is not deleted but retained in the global point cloud image, thereby realizing the update of the global point cloud image. In this way, more key information of the global point cloud image can be retained during the update.
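Combining the two deletion conditions with the refuse-to-delete rule, a per-point screening decision might be sketched as follows (the concrete threshold forms, parameter values, and names are illustrative assumptions, not the patent's exact conditions; condition 2 is modeled as the rate falling below a fraction of the instance's mean rate):

```python
# Sketch of the screening decision for one second coordinate point (illustrative):
# delete when deletion condition 1 OR 2 holds, unless the refuse-to-delete rule
# (merge count >= retention critical frame number) protects the point.

def should_delete(v1, v2, mean_v2, merge_count,
                  alpha1=0.4, alpha2=1.0 / 3.0, keep_threshold=200):
    if merge_count >= keep_threshold:   # refuse-to-delete: long-confirmed point
        return False
    cond1 = v1 < alpha1                 # assumed form of deletion condition 1
    cond2 = v2 < alpha2 * mean_v2       # assumed form of deletion condition 2
    return cond1 or cond2
```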
Through the above technical scheme, when point clouds are merged, a merge count attribute serving as a spatial-existence confidence is kept for each coordinate point; the more merges a coordinate point has accumulated, the higher the probability that it actually exists at the corresponding position. A de-duplication algorithm based on this merge count attribute can fully consider the confidence distribution within the point cloud, avoid removing accurate coordinate points while keeping inaccurately positioned ones, improve the positional accuracy of the instance objects' point clouds, and finally improve the accuracy of the 3D map construction. Moreover, by using the merge count attribute of coordinate points, a coordinate-point-level ghost removal algorithm is realized, which has an accurate de-ghosting effect on objects to be detected of different sizes. Meanwhile, the per-frame coordinate point merging achieves a certain downsampling effect, so that the number of points in the global point cloud image can be controlled at every frame, reducing the memory occupation of the global point cloud image and improving the processing efficiency of the subsequent ghost removal algorithm.
FIG. 1 is a flow chart of a three-dimensional mapping method in one embodiment. It should be understood that, although the steps in the flowchart of FIG. 1 are shown in sequence as indicated by the arrows, the steps are not necessarily performed in that sequence. Unless explicitly stated herein, there is no strict limitation on the order of execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages are likewise not necessarily performed sequentially, but may be performed in turn or alternately with at least a portion of other steps or of the sub-steps or stages of other steps.
Fig. 2 schematically shows a block diagram of a three-dimensional mapping apparatus according to an embodiment of the present application. As shown in fig. 2, an embodiment of the present application provides a three-dimensional mapping apparatus, which may include:
a memory configured to store instructions; and
a processor configured to call the instructions from the memory and, when executing the instructions, implement the three-dimensional mapping method described above.
Specifically, in an embodiment of the present application, a processor may be configured to:
acquiring a current frame collected by a robot, and obtaining, based on a preset algorithm, a local point cloud image of the target site corresponding to the current frame, wherein the local point cloud image comprises first point clouds corresponding to a plurality of first instance objects; acquiring a global point cloud image for the target site, wherein the global point cloud image comprises second point clouds corresponding to a plurality of second instance objects; for any first instance object, searching, among the plurality of second instance objects included in the global point cloud image, for a target instance object matching the first instance object, and performing a point cloud merging operation on the first point cloud corresponding to the first instance object and the second point cloud corresponding to the target instance object, so as to update the global point cloud image, wherein the point cloud merging operation is used for merging any first coordinate point of the first point cloud into a target coordinate point adjacent to the first coordinate point in the second point cloud and updating the target coordinate point; and in the case that a preset screening condition is met, performing a screening and clearing operation on any second instance object in the global point cloud image, so as to update the global point cloud image, wherein the screening and clearing operation is used for deleting second coordinate points of the corresponding second point cloud according to their first merging times.
In an embodiment of the application, the processor may be configured to:
determining, for any first instance object, the spatial overlapping degree between each second instance object included in the global point cloud image and the first instance object; determining, for any first instance object, a second instance object whose spatial overlapping degree with the first instance object is greater than a first preset threshold among the plurality of second instance objects as a third instance object; determining, for any first instance object, the visual feature similarity between the third instance object and the first instance object; and determining, for any first instance object, the third instance object as the target instance object matching the first instance object in the case that the visual feature similarity between the third instance object and the first instance object is greater than a second preset threshold.
In an embodiment of the application, the processor may be configured to:
determining, for any first instance object, the neighborhood space corresponding to any first coordinate point of the corresponding first point cloud; determining, for any first instance object, whether any second coordinate point included in the second point cloud corresponding to the target instance object exists in the neighborhood space corresponding to the first coordinate point; and, for any first instance object, in the case that a second coordinate point exists in the neighborhood space corresponding to the first coordinate point, determining that second coordinate point of the second point cloud corresponding to the target instance object as the target coordinate point, keeping the target coordinate point in the global point cloud image, and updating the first merging times of the target coordinate point, so as to perform the point cloud merging operation.
In an embodiment of the application, the processor may be configured to:
and adding any first coordinate point to the global point cloud image to execute point cloud merging operation under the condition that any second coordinate point included in the second point cloud does not exist in a neighborhood space corresponding to any first coordinate point aiming at any first example object.
In an embodiment of the application, each second instance object corresponds to a visual feature, the visual feature comprising a plurality of dimension vectors, the processor may be configured to:
performing, for any first instance object, an instance merging operation on the first instance object and the target instance object and updating the second merging times of the target instance object, in the case that the point cloud merging operation has been completed for all first coordinate points included in the first point cloud corresponding to the first instance object, wherein the instance merging operation is used for merging the visual features of the first instance object and the target instance object; determining, for any first instance object, the merging weight corresponding to the first instance object according to the second merging times of the corresponding target instance object; and updating, for any first instance object, the value of each dimension vector included in the visual feature corresponding to the target instance object according to the merging weight corresponding to the first instance object, so as to update the global point cloud map.
In an embodiment of the application, the processor may be configured to:
updating the value of each dimension vector according to formula (1):
$$f_{t,j}^{\mathrm{new}} = (1 - w)\, f_{t,j}^{\mathrm{old}} + w\, f_{s,j} \tag{1}$$
Wherein, $f_{t,j}^{\mathrm{new}}$ refers to the updated value of the j-th dimension vector included in the visual feature corresponding to the target instance object, $w$ refers to the merging weight corresponding to any first instance object, $f_{t,j}^{\mathrm{old}}$ refers to the value, before the update, of the j-th dimension vector included in the visual feature corresponding to the target instance object, and $f_{s,j}$ refers to the value of the j-th dimension vector included in the visual feature corresponding to any first instance object.
In an embodiment of the application, the processor may be configured to:
the merging weight corresponding to any first instance object is calculated according to the following formula (2):
$$w = \frac{1}{p + 1} \tag{2}$$
Wherein, $w$ refers to the merging weight corresponding to any first instance object, and $p$ refers to the second merging times of the target instance object corresponding to the first instance object.
In an embodiment of the application, the processor may be configured to:
in the case that the preset screening condition is met: determining, for any second instance object, the first merging times of any second coordinate point of the corresponding second point cloud for the current frame and for each target historical frame before the current frame, and generating a cache array corresponding to the second coordinate point according to these first merging times; determining, for any second instance object, the merging growth rate of any second coordinate point of the corresponding second point cloud at the current frame according to the cache array corresponding to the second coordinate point; and performing, for any second instance object, the screening and clearing operation on the second instance object according to the merging growth rate of each second coordinate point of the corresponding second point cloud and the preset point cloud deletion condition, so as to update the global point cloud map.
In an embodiment of the present application, the preset screening condition means that the number of frames of the current frame corresponds to a preset interval number of frames, and the processor may be configured to:
determining, for any second instance object, the first target merging times and the second target merging times in the cache array corresponding to any second coordinate point of the corresponding second point cloud, wherein the first target merging times is the first merging times corresponding to the first target frame located the preset interval frame number before the current frame, and the second target merging times is the first merging times corresponding to the current frame after the point cloud merging operation is completed; and determining, for any second instance object, the first merging growth rate corresponding to any second coordinate point according to the first target merging times and the second target merging times of the second coordinate point.
In an embodiment of the application the processor may be configured to:
The first merging growth rate corresponding to any one of the second coordinate points is calculated according to the following formula (3):
$$v_i^{(1)} = \frac{b_i - a_i}{m} \tag{3}$$
Wherein, $v_i^{(1)}$ refers to the first merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, $a_i$ refers to the first target merging times corresponding to the i-th second coordinate point, $b_i$ refers to the second target merging times corresponding to the i-th second coordinate point, and $m$ is the preset interval frame number.
In an embodiment of the present application, the point cloud merging operation is further used to update the survival frame number of the target coordinate point in the second point cloud corresponding to any target instance object, and the processor may be configured to:
For any second instance object, determine the second merging growth rate corresponding to any second coordinate point according to the first merging times and the survival frame number, in the current frame, of that second coordinate point of the second point cloud corresponding to that second instance object.
In an embodiment of the application, the processor may be configured to:
the second merging growth rate corresponding to any second coordinate point is calculated according to the following formula (4):
v2_i = n_i / s_i (4)
wherein v2_i is the second merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, n_i is the first merging times of the i-th second coordinate point in the current frame, and s_i is the survival frame number of the i-th second coordinate point in the current frame.
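The second merging growth rate normalizes a point's merge count by how long the point has existed. A small sketch under that reading (the names are illustrative, not from the patent):

```python
def second_merge_growth_rate(first_merge_times, survival_frames):
    # Average number of merges per frame since the point was first added
    # to the global point cloud map.
    return first_merge_times / survival_frames
```

A long-lived point that is rarely re-observed gets a low rate, marking it as a candidate for removal during the screening operation.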
In the embodiment of the application, the merging growth rate comprises the first merging growth rate, and the preset point cloud deletion condition is expressed as the following formula (5):
v1_i < α1 at each of the Z checkpoints viewed forward from the current frame (5)
wherein v1_i is the first merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, Z is the number of checkpoints viewed forward from the current frame and is a positive integer, and α1 is the first deletion strength control parameter.
In the embodiment of the application, the merging growth rate comprises the second merging growth rate, and the preset point cloud deletion condition is expressed as the following formula (6):
v2_i < (α2 / N) · Σ_{k=1}^{N} v2_k (6)
wherein v2_i is the second merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, v2_k is the second merging growth rate corresponding to the k-th second coordinate point, N is the total number of second coordinate points of the second point cloud corresponding to any second instance object, and α2 is the second deletion strength control parameter.
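One plausible reading of this per-object deletion condition is that a point is deleted when its second merging growth rate falls below a fraction, set by the deletion strength parameter, of the object's mean rate. A hedged sketch of that interpretation (the function name and exact thresholding are assumptions, not the patent's text):

```python
def points_to_delete(rates, alpha2):
    """Indices of points whose growth rate is below alpha2 times the mean.

    `rates` are the second merging growth rates of one object's points;
    a larger alpha2 deletes more aggressively.
    """
    mean_rate = sum(rates) / len(rates)
    return [i for i, v in enumerate(rates) if v < alpha2 * mean_rate]
```

Comparing each point against the object's own average makes the screening scale-free: dense, frequently re-observed objects and sparse ones are pruned by the same relative criterion.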
In an embodiment of the application, the processor may be configured to:
For any second instance object, if the second merging times corresponding to any second coordinate point of the second point cloud corresponding to that second instance object are greater than or equal to the reserved critical frame number, that second coordinate point is retained in the updated global point cloud map.
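The retention rule above acts as a guard on top of the deletion condition: a point that has been merged often enough is kept regardless. A minimal sketch, where `critical_frames` is an illustrative name for the reserved critical frame number:

```python
def retain_point(second_merge_times, critical_frames):
    # A well-established point (merged at least `critical_frames` times)
    # survives the screening operation unconditionally.
    return second_merge_times >= critical_frames
```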
An embodiment of the application also provides a machine-readable storage medium on which instructions are stored, the instructions being used to cause a machine to execute the three-dimensional mapping method.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 3. The computer device includes a processor a01, a network interface a02, a memory (not shown) and a database (not shown) connected by a system bus. The processor a01 of the computer device is adapted to provide computing and control capabilities. The memory of the computer device includes internal memory a03 and nonvolatile storage medium a04. The nonvolatile storage medium a04 stores an operating system B01, a computer program B02, and a database (not shown in the figure). The internal memory a03 provides an environment for the operation of the operating system B01 and the computer program B02 in the nonvolatile storage medium a04. The database of the computer device is used for storing three-dimensional mapping data. The network interface a02 of the computer device is used for communication with an external terminal through a network connection. The computer program B02, when executed by the processor a01, implements the three-dimensional mapping method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (17)

1. A three-dimensional mapping method, characterized in that the method comprises:
acquiring a current frame collected by a robot, and acquiring, based on a preset algorithm, a local point cloud map of a target site corresponding to the current frame, the local point cloud map comprising first point clouds respectively corresponding to a plurality of first instance objects;
acquiring a global point cloud map of the target site, the global point cloud map comprising second point clouds respectively corresponding to a plurality of second instance objects;
for any first instance object, searching, among the plurality of second instance objects comprised in the global point cloud map, for a target instance object matching that first instance object;
for any first instance object, performing a point cloud merging operation on the first point cloud corresponding to that first instance object and the second point cloud corresponding to the target instance object, so as to update the global point cloud map, wherein the point cloud merging operation is used to merge any first coordinate point in the first point cloud into a target coordinate point in the second point cloud adjacent to that first coordinate point, and to update the first merging times of the target coordinate point; and
when a preset screening condition is met, performing a screening and clearing operation on any second instance object in the global point cloud map, so as to update the global point cloud map, wherein the screening and clearing operation is used to delete the corresponding coordinate point according to the first merging times of a second coordinate point in the second point cloud corresponding to that second instance object and a preset point cloud deletion condition.
2. The three-dimensional mapping method according to claim 1, characterized in that, for any first instance object, searching among the plurality of second instance objects comprised in the global point cloud map for a target instance object matching that first instance object comprises:
for any first instance object, determining a spatial overlap degree between each second instance object comprised in the global point cloud map and that first instance object;
for any first instance object, determining, among the plurality of second instance objects, a second instance object whose spatial overlap degree with that first instance object is greater than a first preset threshold as a third instance object;
for any first instance object, determining a visual feature similarity between the third instance object and that first instance object; and
for any first instance object, when the visual feature similarity between the third instance object and that first instance object is greater than a second preset threshold, determining the third instance object as the target instance object matching that first instance object.
3. The three-dimensional mapping method according to claim 1, characterized in that, for any first instance object, performing the point cloud merging operation on the first point cloud corresponding to that first instance object and the second point cloud corresponding to the target instance object comprises:
for any first instance object, determining a neighborhood space corresponding to any first coordinate point of the first point cloud corresponding to that first instance object;
for any first instance object, determining whether any second coordinate point comprised in the second point cloud corresponding to the target instance object exists in the neighborhood space corresponding to that first coordinate point; and
for any first instance object, when a second coordinate point exists in the neighborhood space corresponding to that first coordinate point, determining the second coordinate point of the second point cloud corresponding to the target instance object that lies in the neighborhood space corresponding to that first coordinate point as the target coordinate point, retaining the target coordinate point in the global point cloud map, and updating the first merging times of the target coordinate point, so as to perform the point cloud merging operation.
4. The three-dimensional mapping method according to claim 3, characterized in that performing the point cloud merging operation further comprises:
for any first instance object, when no second coordinate point comprised in the second point cloud exists in the neighborhood space corresponding to any first coordinate point, adding that first coordinate point to the global point cloud map, so as to perform the point cloud merging operation.
5. The three-dimensional mapping method according to claim 1, characterized in that each second instance object corresponds to a visual feature comprising a plurality of dimension vectors, and the method further comprises:
when all first coordinate points comprised in the first point cloud corresponding to any first instance object have completed the point cloud merging operation, performing, for that first instance object, an instance merging operation on that first instance object and the target instance object, and updating the second merging times of the target instance object, the instance merging operation being used to fuse the visual features of that first instance object and the target instance object;
for any first instance object, determining a merging weight corresponding to that first instance object according to the second merging times of the target instance object corresponding to that first instance object; and
for any first instance object, updating the value of each dimension vector comprised in the visual feature corresponding to the target instance object according to the merging weight corresponding to that first instance object, so as to update the global point cloud map.
6. The three-dimensional mapping method according to claim 5, characterized in that updating the value of each dimension vector comprised in the visual feature corresponding to the target instance object according to the merging weight corresponding to that first instance object comprises updating the value of each dimension vector according to formula (1):
F'_j = (1 − w) · F_j + w · f_j (1)
wherein F'_j is the updated value of the j-th dimension vector comprised in the visual feature corresponding to the target instance object, w is the merging weight corresponding to that first instance object, F_j is the value of the j-th dimension vector of the visual feature corresponding to the target instance object before updating, and f_j is the value of the j-th dimension vector comprised in the visual feature corresponding to that first instance object.
7. The three-dimensional mapping method according to claim 5, characterized in that the merging weight corresponding to that first instance object is calculated according to the following formula (2):
w = 1 / (p + 1) (2)
wherein w is the merging weight corresponding to that first instance object, and p is the second merging times of the target instance object corresponding to that first instance object.
8. The three-dimensional mapping method according to claim 1, characterized in that, when the preset screening condition is met, performing the screening and clearing operation on any second instance object in the global point cloud map to update the global point cloud map comprises:
for any second instance object, determining the first merging times corresponding to any second coordinate point in the second point cloud corresponding to that second instance object in the current frame and in each target historical frame before the current frame, and generating a cache array corresponding to that second coordinate point according to the plurality of first merging times corresponding to that second coordinate point;
when the preset screening condition is met, determining, for any second instance object, the merging growth rate corresponding to any second coordinate point in the current frame according to the cache array corresponding to that second coordinate point of the second point cloud corresponding to that second instance object; and
for any second instance object, performing the screening and clearing operation on that second instance object according to the merging growth rate corresponding to any second coordinate point of the corresponding second point cloud and the preset point cloud deletion condition, so as to update the global point cloud map.
9. The three-dimensional mapping method according to claim 8, characterized in that the preset screening condition is that the frame number of the current frame matches a preset interval frame number, and determining the merging growth rate corresponding to any second coordinate point in the current frame according to the cache array corresponding to that second coordinate point comprises:
for any second instance object, determining the first target merging times and the second target merging times in the cache array corresponding to any second coordinate point of the corresponding second point cloud, wherein the first target merging times are the first merging times corresponding to a first target frame that precedes the current frame by the preset interval frame number, and the second target merging times are the first merging times corresponding to the current frame after the point cloud merging operation is completed; and
for any second instance object, determining the first merging growth rate corresponding to that second coordinate point according to the first target merging times and the second target merging times corresponding to that second coordinate point.
10. The three-dimensional mapping method according to claim 9, characterized in that the first merging growth rate corresponding to that second coordinate point is calculated according to the following formula (3):
v1_i = (c2_i − c1_i) / m (3)
wherein v1_i is the first merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, c1_i is the first target merging times corresponding to the i-th second coordinate point, c2_i is the second target merging times corresponding to the i-th second coordinate point, and m is the preset interval frame number.
11. The three-dimensional mapping method according to claim 8, characterized in that the point cloud merging operation is further used to update the survival frame number of the target coordinate point in the second point cloud corresponding to any target instance object, and determining the merging growth rate corresponding to any second coordinate point in the current frame according to the cache array corresponding to that second coordinate point comprises:
for any second instance object, determining the survival frame number, in the current frame, of any second coordinate point of the second point cloud corresponding to that second instance object; and
for any second instance object, determining the second merging growth rate corresponding to that second coordinate point according to the first merging times and the survival frame number of that second coordinate point in the current frame.
12. The three-dimensional mapping method according to claim 11, characterized in that the second merging growth rate corresponding to that second coordinate point is calculated according to the following formula (4):
v2_i = n_i / s_i (4)
wherein v2_i is the second merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, n_i is the first merging times of the i-th second coordinate point in the current frame, and s_i is the survival frame number of the i-th second coordinate point in the current frame.
13. The three-dimensional mapping method according to claim 8, characterized in that the merging growth rate comprises the first merging growth rate, and the preset point cloud deletion condition is expressed as the following formula (5):
v1_i < α1 at each of the Z checkpoints viewed forward from the current frame (5)
wherein v1_i is the first merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, Z is the number of checkpoints viewed forward from the current frame and is a positive integer, and α1 is the first deletion strength control parameter.
14. The three-dimensional mapping method according to claim 8, characterized in that the merging growth rate comprises the second merging growth rate, and the preset point cloud deletion condition is expressed as the following formula (6):
v2_i < (α2 / N) · Σ_{k=1}^{N} v2_k (6)
wherein v2_i is the second merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, v2_k is the second merging growth rate corresponding to the k-th second coordinate point, N is the total number of second coordinate points of the second point cloud corresponding to any second instance object, and α2 is the second deletion strength control parameter.
15. The three-dimensional mapping method according to claim 8, characterized in that the method further comprises:
for any second instance object, determining the second target merging times in the cache array corresponding to any second coordinate point of the corresponding second point cloud; and
for any second instance object, when the second target merging times corresponding to any second coordinate point of the corresponding second point cloud are greater than or equal to a reserved critical frame number, retaining that second coordinate point in the updated global point cloud map.
16. A three-dimensional mapping device, characterized by comprising:
a memory configured to store instructions; and
a processor configured to call the instructions from the memory and, when executing the instructions, to implement the three-dimensional mapping method according to any one of claims 1 to 15.
17. A machine-readable storage medium, characterized in that instructions are stored on the machine-readable storage medium, the instructions being used to cause a machine to execute the three-dimensional mapping method according to any one of claims 1 to 15.
CN202411760306.7A — 2024-12-03 (priority) — 2024-12-03 (filed) — Three-dimensional image construction method, device and storage medium — Active — granted as CN119229036B (en)

Priority Applications (1)

Application Number — Priority Date — Filing Date — Title
CN202411760306.7A (CN119229036B (en)) — 2024-12-03 — 2024-12-03 — Three-dimensional image construction method, device and storage medium


Publications (2)

Publication Number — Publication Date
CN119229036A — 2024-12-31
CN119229036B (en) — 2025-03-07

Family

ID=94065514

Family Applications (1)

Application Number — Status — Publication
CN202411760306.7A — Active — CN119229036B (en)

Country Status (1)

Country — Link
CN (1) — CN119229036B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number — Priority date — Publication date — Assignee — Title
US9858640B1 (en) * — 2015-07-15 — 2018-01-02 — Hrl Laboratories, Llc — Device and method for merging 3D point clouds from sparsely distributed viewpoints
CN111340942A (en) * — 2020-02-25 — 2020-06-26 — 电子科技大学 — Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof
WO2021164469A1 (en) * — 2020-02-21 — 2021-08-26 — 北京市商汤科技开发有限公司 — Target object detection method and apparatus, device, and storage medium
CN114119920A (en) * — 2021-10-29 — 2022-03-01 — 北京航空航天大学杭州创新研究院 — Three-dimensional point cloud map construction method and system
CN115407357A (en) * — 2022-07-05 — 2022-11-29 — 东南大学 — Low-beam LiDAR-IMU-RTK positioning and mapping algorithm based on large scenes
CN115908514A (en) * — 2022-10-18 — 2023-04-04 — 西安电子科技大学 — A point cloud registration method based on fusion of global features and local features
CN115993121A (en) * — 2023-03-02 — 2023-04-21 — 北京理工大学 — A 3D map construction and maintenance method for an indoor mobile robot
US20230164353A1 (en) * — 2020-04-22 — 2023-05-25 — LG Electronics Inc. — Point cloud data processing device and processing method
CN117455936A (en) * — 2023-12-25 — 2024-01-26 — 法奥意威(苏州)机器人系统有限公司 — Point cloud data processing method and device and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Weiwei; CHEN Chao; XU Jun: "Localization and Mapping Method Fusing Laser and Visual Point Cloud Information" (融合激光与视觉点云信息的定位与建图方法), Computer Applications and Software (计算机应用与软件), no. 07, 12 July 2020 (2020-07-12) *

Also Published As

Publication Number | Publication Date
CN119229036B (en) | 2025-03-07

Similar Documents

Publication | Title
CN111210429B (en) | Point cloud data partitioning method and device and obstacle detection method and device
US20190370989A1 (en) | Method and apparatus for 3-dimensional point cloud reconstruction
CN111582054B (en) | Point cloud data processing method and device and obstacle detection method and device
CN111553946B (en) | Method and device for removing ground point cloud and method and device for detecting obstacle
US8199977B2 (en) | System and method for extraction of features from a 3-D point cloud
US20130215233A1 (en) | 3d scene model from collection of images
CN111340922A (en) | Positioning and mapping method and electronic equipment
CN110046623B (en) | Image feature point extraction method and camera
CN113298871B (en) | Map generation method, positioning method, system thereof, and computer-readable storage medium
CN113362363A (en) | Automatic image annotation method and device based on visual SLAM and storage medium
CN114066999B (en) | Target positioning system and method based on three-dimensional modeling
CN114140581B (en) | Automatic modeling method, device, computer equipment and storage medium
CN119229036B (en) | Three-dimensional image construction method, device and storage medium
CN118521702A (en) | Point cloud rendering method and system based on nerve radiation field
WO2022041119A1 (en) | Three-dimensional point cloud processing method and apparatus
CN112258575A (en) | Method for quickly identifying object in synchronous positioning and map construction
CN111210500B (en) | Three-dimensional point cloud processing method and device
CN114037921A (en) | Sag modeling method and system based on intelligent unmanned aerial vehicle identification
CN119251408B (en) | Image processing method, device and storage medium based on open vocabulary
CN119229037B (en) | Point cloud filtering method, device and storage medium
Tanner et al. | Keep geometry in context: Using contextual priors for very-large-scale 3d dense reconstructions
CN115984461B (en) | Face three-dimensional key point detection method based on RGBD camera
CN114777787B (en) | Method and device for constructing autonomous driving map of wall-climbing robot
CN114782862B (en) | Plane detection method, plane detection device, plane detection medium and plane detection program product
CN119579625B (en) | A loop closure detection method based on geometric structure between planes

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
