Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It should be understood that the detailed description herein merely illustrates and explains the embodiments of the present application and is not intended to limit them. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of the present application.
It should be noted that, if directional indications (such as up, down, left, right, front, and rear) are referred to in the embodiments of the present application, the directional indications are merely used to explain the relative positional relationship, movement conditions, and the like between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indications change accordingly.
In addition, if there is a description of "first", "second", etc. in the embodiments of the present application, the description is for descriptive purposes only and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but such combinations must be realizable by those skilled in the art; when the technical solutions contradict each other or cannot be realized, the combination should be considered absent and outside the scope of protection claimed in the present application.
Fig. 1 schematically shows a flow diagram of a three-dimensional mapping method according to an embodiment of the present application. As shown in Fig. 1, an embodiment of the present application provides a three-dimensional mapping method, which may include the following steps.
S102, acquiring a current frame acquired by a robot, and acquiring a local point cloud image corresponding to the current frame and aiming at a target site based on a preset algorithm, wherein the local point cloud image comprises first point clouds respectively corresponding to a plurality of first instance objects.
It is understood that the robot refers to a robot that is movable and can acquire images. The target site refers to a three-dimensional space designated by a technician for three-dimensional mapping. The current frame refers to the image of a local spatial region acquired when the robot performs image acquisition on the target site at the current moment. The local point cloud image refers to the point cloud image of the local space corresponding to the current frame and comprises the three-dimensional point cloud data of that local spatial region. Specifically, during three-dimensional mapping, the robot may continuously collect multiple frames of images containing depth information, such as RGB-D images. Each RGB-D frame, combined with the camera pose of the robot, can be converted into the three-dimensional point cloud data of the corresponding spatial region (the local point cloud image) through a target detection algorithm or a target segmentation algorithm together with a two-dimensional-to-three-dimensional mapping algorithm, and the global three-dimensional point cloud map is constructed and updated from the three-dimensional point cloud data of consecutive RGB-D frames.
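As a concrete illustration of the two-dimensional-to-three-dimensional conversion described above, the following sketch back-projects a depth image into a world-frame point cloud using a pinhole camera model. It is a minimal example under assumed intrinsics and pose conventions, not the specific algorithm of the application:

```python
import numpy as np

def backproject_rgbd(depth, fx, fy, cx, cy, pose):
    """Convert a depth image plus camera pose into a local point cloud.

    depth : (H, W) array of metric depth values (0 = invalid pixel).
    fx, fy, cx, cy : pinhole intrinsics (illustrative values).
    pose : (4, 4) camera-to-world transform of the robot camera.
    Returns an (N, 3) array of world-frame points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    u, v, z = u.ravel()[valid], v.ravel()[valid], z[valid]
    # Pinhole back-projection into the camera frame.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # homogeneous
    # Transform into the world (map) frame with the camera pose.
    return (pose @ pts_cam.T).T[:, :3]
```

In practice the valid pixels would first be restricted to the masks produced by the target detection or segmentation algorithm, so that each resulting point belongs to one first instance object.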
Specifically, the points in the three-dimensional point cloud data contained in the local point cloud image belong to different groups or categories, where the points in each group belong to the same object or the same region; the point cloud is divided according to features such as space, geometry, and texture, so that the point clouds divided into the same category have similar features. An instance object refers to an object created by instantiating a class, and a first instance object refers to an instantiated object or region in the local point cloud image. Specifically, the local point cloud image includes a plurality of first instance objects, and each first instance object has a corresponding first point cloud. For example, the local point cloud image may include instance objects such as a table, a chair, and a cup.
S104, acquiring a global point cloud image for the target site, wherein the global point cloud image comprises second point clouds respectively corresponding to a plurality of second instance objects.
It is understood that the global point cloud image refers to the point cloud image of the spatial region corresponding to the entire target site. It will be appreciated that objects in the target site may move and change, so the local point cloud image corresponding to the current frame can be used to update the global point cloud image. The global point cloud image comprises a plurality of second instance objects, wherein a second instance object refers to an instantiated object or region in the global point cloud image, and each second instance object has a corresponding second point cloud. It will be appreciated that a first instance object is a new instance object at the current time, while a second instance object is an old instance object from a historical time. Since the first instance objects and the second instance objects each have corresponding attributes and can each be located at corresponding spatial positions, the global point cloud image can be updated by comparing and analyzing the first instance objects and the second instance objects.
S106, for any first instance object, searching a target instance object matched with any first instance object in a plurality of second instance objects included in the global point cloud image.
It will be appreciated that, when an instance object exists both as a first instance object in the local point cloud image and as a second instance object in the global point cloud image, updating the global point cloud image directly with the point cloud data of the local point cloud image would create duplicated instance objects in the global point cloud image. Therefore, the second instance object matching the first instance object of the local point cloud image, namely the target instance object, is first found in the global point cloud image. Matching with the first instance object includes calculating the spatial overlap, visual feature similarity, etc. between the two instance objects.
S108, for any first instance object, performing a point cloud merging operation on the first point cloud corresponding to the first instance object and the second point cloud corresponding to the target instance object, so as to update the global point cloud image, wherein the point cloud merging operation is used for merging any first coordinate point in the first point cloud into a target coordinate point adjacent to that first coordinate point in the second point cloud, and updating the first merging times of the target coordinate point.
It can be understood that the point cloud merging operation is performed on the first point cloud corresponding to the first instance object and the second point cloud corresponding to the target instance object. Specifically, the first point cloud includes a plurality of first coordinate points, and the second point cloud includes a plurality of second coordinate points, where each coordinate point has a unique identification. For the matched first instance object and target instance object, each first coordinate point in the first point cloud corresponding to the first instance object is traversed, that is, the spatial coordinates (x, y, z) of each first coordinate point are traversed. Then, for each first coordinate point, whether a second coordinate point adjacent to the first coordinate point exists can be confirmed from the spatial coordinates of the second coordinate points in the second point cloud. If so, the second coordinate point found in the second point cloud may be determined as the target coordinate point adjacent to the first coordinate point, and the target coordinate point may be considered to contain the same image information as the first coordinate point. The target coordinate point may then be merged with the first coordinate point. Merging may consist of retaining the target coordinate point in the global point cloud image and deleting the first coordinate point. Meanwhile, each time the target coordinate point is merged, its first merging times needs to be updated in real time; the first merging times is the accumulated number of times the coordinate point has undergone the point cloud merging operation. Then, after the point cloud merging operation is completed for the first point cloud corresponding to each first instance object in the local point cloud image corresponding to the current frame, an updated global point cloud image can be obtained.
In this way, during point cloud merging, a merge-count attribute serving as a spatial existence confidence is retained for each coordinate point: the larger the first merging times of a coordinate point, the more likely it is that the point actually exists at the corresponding position. The coordinate point merging of each frame also achieves a certain downsampling effect, so the number of points in the global point cloud image can be controlled at each frame, reducing the memory occupied by the global point cloud image and improving the processing efficiency of the subsequent ghost removal algorithm.
S110, in the case that a preset screening condition is met, performing a screening and clearing operation on any second instance object in the global point cloud image so as to update the global point cloud image, wherein the screening and clearing operation is used for deleting corresponding coordinate points according to the first merging times of the second coordinate points in the second point cloud corresponding to the second instance object and a preset point cloud deletion condition.
It can be appreciated that the global point cloud image updated by the foregoing scheme may still have the problem of instance object ghosting. The preset screening condition refers to a condition, set by a technician according to technical experience, for screening second instance objects in the global point cloud image. After the global point cloud image is updated, the screening and clearing operation is further performed on the second instance objects in the updated global point cloud image according to the preset screening condition, so as to further update the global point cloud image and thereby solve the instance object ghosting problem. The screening and clearing operation is used for deleting corresponding coordinate points according to the first merging times of the second coordinate points in the second point cloud corresponding to any second instance object and the preset point cloud deletion condition. The preset point cloud deletion condition refers to the condition, specified in the preset screening condition, for deleting second coordinate points in the second point cloud. In point cloud merging, a merge-count attribute serving as a spatial existence confidence is retained for each coordinate point; the larger the first merging times, the more likely it is that the coordinate point actually exists at the corresponding position. Therefore, the deduplication algorithm based on the first merging times attribute can fully consider the confidence distribution within the point cloud, avoid removing accurate coordinate points while retaining inaccurate ones, improve the point cloud position accuracy of the instance objects, and ultimately improve the three-dimensional mapping accuracy.
Through the above technical scheme, a coordinate-point-level ghost removal algorithm is realized by utilizing the merge-count attribute of coordinate points, which provides an accurate ghost removal effect for objects of different sizes. Moreover, the coordinate point merging of each frame achieves a certain downsampling effect, so the number of points in the global point cloud image can be controlled at each frame, reducing the memory occupied by the global point cloud image and improving the processing efficiency of the subsequent ghost removal algorithm.
In the embodiment of the present application, searching, among the plurality of second instance objects included in the global point cloud image, for the target instance object matching any first instance object includes: for any first instance object, determining the spatial overlap between each second instance object included in the global point cloud image and the first instance object; determining, among the plurality of second instance objects, a second instance object whose spatial overlap is greater than a first preset threshold as a third instance object; determining the visual feature similarity between the third instance object and the first instance object; and, in the case that the visual feature similarity between the third instance object and the first instance object is greater than a second preset threshold, determining the third instance object as the target instance object matching the first instance object.
It is appreciated that the spatial overlap may be represented by a spatial IoU (Intersection over Union), which is an indicator of the overlap of two bounding boxes. The value of the spatial IoU ranges between 0 and 1, with higher values indicating a higher degree of coincidence of the two bounding boxes. For any first instance object, the overlap between the first instance object and each second instance object can be determined by calculating the spatial IoU from the bounding box corresponding to the first instance object and the bounding box corresponding to each second instance object. The first preset threshold refers to an empirical value set by a technician for the spatial overlap according to technical experience. If the spatial overlap between the first instance object and a second instance object in the global point cloud image is greater than the first preset threshold, that second instance object can be taken as a third instance object. Further, for any first instance object, a third instance object whose visual feature similarity with the first instance object is greater than the second preset threshold is screened out from the third instance objects as the target instance object. It is understood that visual feature similarity measures how close two images or objects are in terms of visual features. This concept is based on low-level features of the image, such as color, texture, and shape. Specifically, the calculation of visual feature similarity generally involves key elements such as global similarity, local similarity, and structural similarity. Global similarity considers the color, texture, shape, and other characteristics of the whole image. Local similarity focuses on features of a particular region in the image, such as details of an object or scene.
Structural similarity can measure structural relationships between images, such as the layout of objects and scenes. Then, in the case where the spatial overlap and the visual feature similarity satisfy the conditions set by the technician, the instance object in the global point cloud image may be considered the target instance object matching the first instance object.
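The two-stage matching described above (a spatial IoU gate followed by a visual feature check) can be sketched as follows. The threshold values and the use of cosine similarity as the visual feature similarity are illustrative assumptions, not the application's fixed choices:

```python
import numpy as np

def box_iou_3d(a, b):
    """Spatial IoU of two axis-aligned 3-D boxes (xmin, ymin, zmin, xmax, ymax, zmax)."""
    inter = 1.0
    for i in range(3):
        lo, hi = max(a[i], b[i]), min(a[i + 3], b[i + 3])
        if hi <= lo:          # no overlap along this axis
            return 0.0
        inter *= hi - lo
    vol = lambda c: (c[3] - c[0]) * (c[4] - c[1]) * (c[5] - c[2])
    return inter / (vol(a) + vol(b) - inter)

def is_target_instance(iou, feat_a, feat_b, iou_thresh=0.25, sim_thresh=0.75):
    """Gate on spatial IoU (first preset threshold), then check visual
    feature similarity (second preset threshold); thresholds are assumed."""
    if iou <= iou_thresh:
        return False
    cos = np.dot(feat_a, feat_b) / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))
    return cos > sim_thresh
```

The IoU gate is cheap and prunes most candidates, so the more expensive feature comparison only runs on the surviving third instance objects.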
In the embodiment of the present application, performing, for any first instance object, the point cloud merging operation on the first point cloud corresponding to the first instance object and the second point cloud corresponding to the target instance object includes: determining the neighborhood space corresponding to any first coordinate point of the first point cloud corresponding to the first instance object; determining whether any second coordinate point included in the second point cloud corresponding to the target instance object exists in the neighborhood space corresponding to the first coordinate point; determining a second coordinate point of the second point cloud existing in the neighborhood space corresponding to the first coordinate point as the target coordinate point; and updating the first merging times of the target coordinate point in the global point cloud image, so as to perform the point cloud merging operation.
It can be understood that, for the matched first instance object and target instance object, each first coordinate point in the first point cloud corresponding to the first instance object is traversed, that is, the spatial coordinates (x, y, z) of each first coordinate point are traversed, and the neighborhood space (x±a, y±a, z±a) of the first coordinate point is searched for any second coordinate point in the second point cloud corresponding to the target instance object. If one is found, the second coordinate point found in the second point cloud may be determined as the target coordinate point adjacent to the first coordinate point, and the target coordinate point may be considered to contain the same image information as the first coordinate point. Specifically, when searching the second point cloud corresponding to the target instance object, there may be a plurality of second coordinate points in the neighborhood space corresponding to the first coordinate point; in order to improve the calculation efficiency, the first second coordinate point found may be used as the target coordinate point, after which the search continues with the next first coordinate point. The target coordinate point may then be merged with the first coordinate point. Merging may consist of retaining the target coordinate point in the global point cloud image and deleting the first coordinate point. Meanwhile, each time the target coordinate point is merged, its first merging times needs to be updated in real time; the first merging times is the accumulated number of times the coordinate point has undergone the point cloud merging operation. Then, after the point cloud merging operation is completed for the first point cloud corresponding to each first instance object in the local point cloud image corresponding to the current frame, an updated global point cloud image can be obtained.
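The traversal above can be sketched as follows. The neighborhood half-width `a`, the brute-force search, and the integer merge counters are illustrative assumptions (a real implementation would likely use a spatial index such as a k-d tree). The else-branch covers the case, described later, where an unmatched first coordinate point is added to the global map:

```python
import numpy as np

def merge_point_clouds(first_pts, second_pts, merge_counts, a=0.05):
    """Merge each first-cloud point into a neighbouring second-cloud point.

    first_pts    : (N, 3) points of the matched first instance object.
    second_pts   : (M, 3) points of the target instance object (global map).
    merge_counts : (M,) first-merging-times counters, one per second point.
    a            : half-width of the cubic neighbourhood (x±a, y±a, z±a); assumed.
    Returns updated (second_pts, merge_counts).
    """
    kept = []
    for p in first_pts:
        # A second point is a neighbour if it lies inside the cube around p.
        inside = np.all(np.abs(second_pts - p) <= a, axis=1)
        idx = np.flatnonzero(inside)
        if idx.size:
            # Keep the first neighbour found as the target coordinate point,
            # drop the first point, and bump the target's merge count.
            merge_counts[idx[0]] += 1
        else:
            # No neighbour: the point carries new information for the map.
            kept.append(p)
    if kept:
        second_pts = np.vstack([second_pts, kept])
        merge_counts = np.concatenate([merge_counts, np.ones(len(kept), int)])
    return second_pts, merge_counts
```

Taking only the first neighbour found matches the efficiency shortcut described above, and merging each frame caps the map's point count, which is the downsampling effect noted below.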
In this way, during point cloud merging, a merge-count attribute serving as a spatial existence confidence is retained for each coordinate point: the larger the first merging times of a coordinate point, the more likely it is that the point actually exists at the corresponding position. The coordinate point merging of each frame also achieves a certain downsampling effect, so the number of points in the global point cloud image can be controlled at each frame, reducing the memory occupied by the global point cloud image and improving the processing efficiency of the subsequent ghost removal algorithm.
In the embodiment of the present application, performing, for any first instance object, the point cloud merging operation on the first point cloud corresponding to the first instance object and the second point cloud corresponding to the target instance object further includes: in the case that no second coordinate point included in the second point cloud exists in the neighborhood space corresponding to any first coordinate point, adding that first coordinate point to the global point cloud image, so as to perform the point cloud merging operation.
Specifically, the spatial coordinates (x, y, z) of each first coordinate point are traversed, and the neighborhood space (x±a, y±a, z±a) of the first coordinate point is searched for any second coordinate point in the second point cloud corresponding to the target instance object. If no such second coordinate point is found, the first coordinate point can be added to the global point cloud image so as to perform the point cloud merging operation. In this way, new image information appearing in the local point cloud image can be supplemented into the global point cloud image, achieving dynamic updating of the global point cloud image.
In the embodiment of the present application, each second instance object corresponds to a visual feature, and the visual feature comprises a plurality of dimension vectors. The method further comprises: for any first instance object, in the case that the point cloud merging operation has been completed for all first coordinate points included in the first point cloud corresponding to the first instance object, performing an instance merging operation on the first instance object and the target instance object and updating the second merging times of the target instance object, wherein the instance merging operation is used for merging the visual features of the first instance object and the target instance object; determining, according to the second merging times of the target instance object corresponding to the first instance object, a merging weight corresponding to the first instance object; and updating, according to the merging weight corresponding to the first instance object, the value of each dimension vector included in the visual feature corresponding to the target instance object, so as to update the global point cloud image.
It may be appreciated that the instance merging operation is a merging operation on instance objects between the local point cloud image and the global point cloud image, and may specifically be a merging of the visual features corresponding to the instance objects. That is, any first instance object is fused with the visual features of the corresponding target instance object, and the fusion of the visual features can be realized by updating the values of the dimension vectors. It can be appreciated that, since the spatial overlap and the visual feature similarity of the first instance object and the corresponding target instance object both reach their preset thresholds, the two may be regarded as substantially the same instance object. Specifically, the merging weight corresponding to the instance merging operation of the first instance object can be calculated from the second merging times of the target instance object, so as to serve as the basis of the instance merging operation. The merging weight can be obtained by normalizing the second merging times, where the second merging times refers to the accumulated number of times the instance object has undergone the instance merging operation. Then, for any first instance object, the value of each dimension vector included in the visual feature corresponding to the target instance object may be updated according to the merging weight corresponding to the first instance object, so as to update the global point cloud image.
In an embodiment of the present application, for any first instance object, updating the value of each dimension vector included in the visual feature corresponding to the target instance object according to the merging weight corresponding to the first instance object, so as to update the global point cloud image, includes updating the value of each dimension vector according to formula (1):
f'_j = (1 - w_i) * f_j + w_i * g_{i,j}    (1)

Wherein, f'_j refers to the updated value of the j-th dimension vector included in the visual feature corresponding to the target instance object, w_i refers to the merging weight corresponding to any first instance object i, f_j refers to the value of the j-th dimension vector of the visual feature corresponding to the target instance object before the update, and g_{i,j} refers to the value of the j-th dimension vector included in the visual feature corresponding to the first instance object.
In an embodiment of the present application, for any first instance object, determining, according to the second merging times of the target instance object corresponding to any first instance object, the merging weight corresponding to any first instance object includes calculating the merging weight corresponding to any first instance object according to the following formula (2):
w_i = 1 / (p + 1)    (2)

Wherein, w_i denotes the merging weight corresponding to any first instance object, and p is the second merging times of the target instance object corresponding to the first instance object. It will be appreciated that the larger the second merging times of the target instance object corresponding to the first instance object, the smaller the corresponding merging weight, so the greater the influence of the dimension vectors of the visual feature corresponding to the target instance object on the updated values, and the smaller the influence of the dimension vectors of the visual feature corresponding to the first instance object. That is, the larger the second merging times of the target instance object, the more image information of the old instance object is retained in the updated global point cloud image, which improves the stability of the image information in the three-dimensional map and thus the precision of the three-dimensional map.
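The formula images for (1) and (2) are missing from this excerpt, so the sketch below assumes a merge weight w = 1/(p+1) applied to the new feature in a convex combination; this form matches the described behavior (more second merging times → smaller weight → old feature dominates) but is a reconstruction, not the confirmed formula:

```python
import numpy as np

def merge_instance_features(target_feat, first_feat, p):
    """Fuse visual feature vectors after an instance merge.

    target_feat : dimension vectors of the target (old) instance object.
    first_feat  : dimension vectors of the matched first (new) instance object.
    p           : second merging times of the target instance object.
    The weight w = 1/(p + 1) is an assumed normalization of p.
    """
    w = 1.0 / (p + 1)                                  # assumed formula (2)
    return (1.0 - w) * target_feat + w * first_feat    # assumed formula (1)
```

With p = 0 the new feature replaces the old one entirely; as p grows, each new observation nudges the stored feature less, which is exactly the stability property described above.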
In the embodiment of the present application, performing, in the case that the preset screening condition is met, the screening and clearing operation on any second instance object in the global point cloud image to update the global point cloud image includes: for any second instance object, determining the first merging times corresponding to the current frame and to each target historical frame before the current frame for any second coordinate point in the second point cloud corresponding to the second instance object; generating a cache array corresponding to the second coordinate point according to these first merging times; determining, according to the cache array corresponding to the second coordinate point, the merging growth rate of the second coordinate point at the current frame; and updating the global point cloud image according to the merging growth rate of each second coordinate point of the second point cloud corresponding to the second instance object and the preset point cloud deletion condition.
It is understood that a target historical frame refers to a historical frame selected from the plurality of historical frames acquired before the current frame, for example, the first 3 frames or the first 5 frames before the current frame. Specifically, when the point cloud merging operation is performed, if a target coordinate point corresponding to a first coordinate point exists, the cache array corresponding to the target coordinate point is also updated. The first merging times corresponding to each second coordinate point of the second point cloud at each historical frame and the current frame, stored in the cache array, can reflect the frame-by-frame growth of point cloud merges during the process of updating the global point cloud image with the local point cloud image, namely the merging growth rate. The preset point cloud deletion condition may be a deletion condition set for the merging growth rate.
Specifically, after the point cloud merging operation and the instance merging operation of m frames are performed, a number of duplicate coordinate points may still exist, so a screening and clearing operation based on the first merging times needs to be performed once on the second point cloud corresponding to each second instance object in the global point cloud image, in order to solve the object ghosting problem. Each second coordinate point within the second point cloud of each second instance object may then be polled. For each second coordinate point, a cache array of size m+1 is maintained in advance, which stores the values of the first merging times of the second coordinate point after the merging of the last m frames. For example, the second coordinate point with id 001 maintains a cache array {100, 101, 102, 103, 103, 104} (assuming m = 5), indicating that it has been merged 104 times in total after the current frame, 103 times in total after the previous frame, and so on. The cache array is updated at every frame: each element is shifted forward by one position, and the last element is filled with the accumulated merge count after the current frame is merged. For each second coordinate point, the first merging times at the current frame (i.e., the value of the last element of the cache array) and the first merging times m frames before (i.e., the value of the first element of the cache array) are checked. Further, the merging growth rate of the second coordinate point may be calculated based on the cache array.
Then, for any second instance object, according to the merging growth rate corresponding to each second coordinate point of the second point cloud corresponding to the second instance object, it can be judged whether the merging growth rate meets the preset point cloud deletion condition, so that whether the screening and clearing operation is performed on the second instance object can be determined according to the judgment result, thereby updating the global point cloud image. In this way, coordinate points can be accurately screened and cleared, solving the ghosting problem of instance objects.
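The per-frame maintenance of the cache array described above can be sketched as a fixed-length sliding window; using a deque is an illustrative implementation choice:

```python
from collections import deque

def update_cache(cache, current_count, m=5):
    """Maintain an (m+1)-element history of a point's first merging times.

    cache         : deque of at most m+1 accumulated merge counts,
                    oldest first (e.g. {100, 101, 102, 103, 103, 104}).
    current_count : accumulated merge count after the current frame is merged.
    """
    if len(cache) == m + 1:
        cache.popleft()          # shift every element one position forward
    cache.append(current_count)  # last slot holds the post-merge count
    return cache
```

After each frame's point cloud merging, `update_cache` is called for every target coordinate point that was merged, so the array always spans the last m frames plus the current one.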
In the embodiment of the present application, the preset screening condition means that the frame number of the current frame matches a preset interval frame number. Determining, for any second instance object, the merging growth rate of any second coordinate point at the current frame according to the cache array corresponding to the second coordinate point in the second point cloud includes: determining a first target merging times and a second target merging times in the cache array corresponding to the second coordinate point of the second point cloud corresponding to the second instance object, wherein the first target merging times refers to the first merging times corresponding to a first target frame that precedes the current frame by the preset interval frame number, and the second target merging times refers to the first merging times corresponding to the current frame after the point cloud merging operation is completed; and determining, according to the first target merging times and the second target merging times of the second coordinate point, a first merging growth rate corresponding to the second coordinate point.
It can be understood that the preset screening condition means that the frame number of the current frame matches the preset interval frame number. For example, if the preset screening condition is that a screening and clearing operation is performed once every m frames, then when the interval between the frame number of the current frame and the frame number of the last screening and clearing operation is m frames, the screening and clearing operation can be performed on the global point cloud image updated by the current frame.
For any second instance object, a first target merging times and a second target merging times are determined in the cache array corresponding to any second coordinate point of the second point cloud corresponding to that second instance object, wherein the first target merging times is the first merging times corresponding to the first target frame that precedes the current frame by the preset interval frame number, and the second target merging times is the first merging times corresponding to the current frame after the point cloud merging operation is completed. For example, for the cache array {100, 101, 102, 103, 103, 104}, the first target merging times is 100 and the second target merging times is 104. Further, for any second instance object, a first merging growth rate corresponding to any second coordinate point is determined according to the first target merging times and the second target merging times corresponding to that second coordinate point of the second point cloud. It can be understood that the first merging growth rate refers to the growth rate of the number of recent merges.
In an embodiment of the present application, for any second instance object, determining, according to the first target merging times and the second target merging times corresponding to any second coordinate point of the second point cloud corresponding to that second instance object, the first merging growth rate corresponding to the second coordinate point includes calculating the first merging growth rate according to the following formula (3):
v1_i = (C_i − C'_i) / m (3)

wherein v1_i refers to the first merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, C'_i refers to the first target merging times corresponding to the i-th second coordinate point, C_i refers to the second target merging times corresponding to the i-th second coordinate point, and m is the preset interval frame number.
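As a minimal sketch (the helper name is illustrative), formula (3) amounts to differencing the two ends of the cache array:

```python
def first_merge_growth_rate(cache, m):
    """Formula (3): (merge count at the current frame minus the merge
    count m frames earlier) divided by m. `cache` is the (m+1)-element
    cache array, oldest value first."""
    assert len(cache) == m + 1
    return (cache[-1] - cache[0]) / m

# For the worked example {100,101,102,103,103,104} with m = 5:
rate = first_merge_growth_rate([100, 101, 102, 103, 103, 104], m=5)
print(rate)   # (104 - 100) / 5 = 0.8
```

A rate near 1.0 means the point was merged on almost every recent frame; a rate near 0 flags a point that has stopped being observed.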
In the embodiment of the application, the point cloud merging operation is further used for updating the survival frame number of the target coordinate point in the second point cloud corresponding to any target instance object. For any second instance object, determining the merging growth rate of any second coordinate point at the current frame according to the cache array corresponding to that second coordinate point includes: determining the survival frame number, at the current frame, of any second coordinate point of the second point cloud corresponding to the second instance object; and determining a second merging growth rate corresponding to the second coordinate point according to its first merging times and survival frame number at the current frame.
It can be understood that, when the point cloud merging operation is performed, if a target coordinate point corresponding to a first coordinate point exists, the survival frame number corresponding to that target coordinate point is also updated. Specifically, the first merging times of the target coordinate point is increased by 1 (default initial value 0), the survival frame number is increased by 1 (default initial value 0), and the cache array corresponding to the target coordinate point is updated. For any second instance object, the survival frame number at the current frame of any second coordinate point of its second point cloud can be determined, and then the second merging growth rate corresponding to that second coordinate point can be determined according to its first merging times and survival frame number at the current frame. The second merging growth rate refers to the growth rate of the accumulated number of merges.
In an embodiment of the present application, for any second instance object, determining, according to the first merging times and the survival frame number at the current frame of any second coordinate point of the second point cloud corresponding to that second instance object, the second merging growth rate corresponding to the second coordinate point includes calculating the second merging growth rate according to the following formula (4):
v2_i = C_i / T_i (4)

wherein v2_i refers to the second merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, C_i refers to the first merging times, at the current frame, of the i-th second coordinate point, and T_i refers to the survival frame number, at the current frame, of the i-th second coordinate point.
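Formula (4), read as the cumulative merge count divided by the survival frame number, can be sketched as:

```python
def second_merge_growth_rate(merge_count, survival_frames):
    """Formula (4): cumulative merge count over survival frame number.
    A value close to 1.0 means the point was merged on almost every
    frame it has existed, i.e. a high spatial-existence confidence."""
    return merge_count / survival_frames

# A point merged on every one of its 104 surviving frames:
print(second_merge_growth_rate(104, 104))   # 1.0
```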
In the embodiment of the application, the merging growth rate comprises a first merging growth rate, and the function expression of the preset point cloud deletion condition is shown in the following formula (5):
v1_(i,z) < α1, z = 1, 2, ..., Z (5)

wherein v1_(i,z) refers to the first merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object at the z-th checkpoint viewed forward from the current frame, Z refers to the number of checkpoints viewed forward from the current frame and is a positive integer, and α1 refers to the first deletion dynamics control parameter. The larger the first deletion dynamics control parameter α1 is, the stronger the deletion strength and the stronger the de-duplication effect; however, this may also result in excessive de-duplication, so α1 can be set to 2/Z, which achieves a balanced de-duplication effect while retaining important image information.
In the embodiment of the application, the merging growth rate comprises a second merging growth rate, and the function expression of the preset point cloud deletion condition is shown in the following formula (6):
v2_i < α2 · (1/N) · Σ_(k=1..N) v2_k (6)

wherein v2_i refers to the second merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, v2_k refers to the second merging growth rate corresponding to the k-th second coordinate point of that second point cloud, N refers to the total number of second coordinate points of that second point cloud, and α2 refers to the second deletion dynamics control parameter. In general, the larger the second deletion dynamics control parameter α2 is, the stronger the deletion strength and the stronger the de-duplication effect; however, this may also result in excessive de-duplication, so α2 can be set to 1/3 to achieve a balanced de-duplication effect while retaining important image information.
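The two deletion conditions can be illustrated as follows. The exact inequality forms are read from the descriptions above (condition 1 holding at each of the Z checkpoints, condition 2 comparing against the instance mean), and the function names are illustrative rather than from the application:

```python
def meets_condition_1(recent_rates, alpha1):
    """Deletion condition 1, a sketch of formula (5): the first merging
    growth rate stays below alpha1 at every one of the Z checkpoints
    viewed forward from the current frame. `recent_rates` holds those
    Z values for one coordinate point."""
    return all(r < alpha1 for r in recent_rates)

def meets_condition_2(v2_i, all_v2, alpha2=1 / 3):
    """Deletion condition 2, a sketch of formula (6): the point's second
    merging growth rate falls below alpha2 times the mean rate over all
    N second coordinate points of the instance's second point cloud."""
    mean_rate = sum(all_v2) / len(all_v2)
    return v2_i < alpha2 * mean_rate

rates = [0.95, 0.90, 1.00, 0.20, 0.15]   # per-point v2 values, mean 0.64
print(meets_condition_2(0.15, rates))     # 0.15 < (1/3)*0.64 -> True
```

A point whose rate sits well below the instance average is likely a ghost left over from an earlier, inaccurate observation, which is exactly what condition 2 targets.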
The functional expression (5) of the preset point cloud deletion condition is deletion condition 1, and the functional expression (6) is deletion condition 2. For each second coordinate point in the global point cloud image, if it meets deletion condition 1 or deletion condition 2, the second coordinate point is deleted, and its corresponding cache array, survival frame number, merging growth rate, and other attributes are deleted along with it. If neither deletion condition is met, the second coordinate point is retained in the global point cloud image, so as to update the global point cloud image.
The method further comprises: for any second instance object, determining the second target merging times in the cache array corresponding to any second coordinate point of the second point cloud corresponding to that second instance object; and, in the case that the second target merging times corresponding to the second coordinate point is greater than or equal to a retention critical frame number, retaining the second coordinate point in the updated global point cloud image.
It can be understood that, after the observation view angle of an instance object changes, some parts of the instance object may not be observed for a long time, causing their merging times to grow slowly and ultimately leading to erroneous deletion. To avoid this, a refuse-to-delete condition can be set. That is, for any second instance object, the second target merging times in the cache array corresponding to any second coordinate point of the second point cloud corresponding to that second instance object is determined, and in the case that this second target merging times is greater than or equal to the retention critical frame number, the second coordinate point is retained in the updated global point cloud image. In other words, even if a second coordinate point meets the preset point cloud deletion condition, if it is finally judged to also meet the refuse-to-delete condition, it is not deleted but is retained in the global point cloud image, thereby realizing the update of the global point cloud image. Thus, in the process of updating the global point cloud image, more key information of the global point cloud image can be retained.
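The overall screening decision, including the refuse-to-delete rule, might be combined as follows; field names and threshold values are illustrative, and condition 1 is checked at the current checkpoint only for brevity:

```python
def screen_point(point, alpha1, alpha2, mean_v2, retain_threshold):
    """One screening decision per second coordinate point. A point whose
    current merge count already reaches the retention critical frame
    number is kept even if a deletion condition would otherwise fire."""
    # refuse-to-delete: well-confirmed points survive view-angle changes
    if point["merge_count"] >= retain_threshold:
        return "keep"
    if point["v1"] < alpha1:                      # deletion condition 1
        return "delete"
    if point["v2"] < alpha2 * mean_v2:            # deletion condition 2
        return "delete"
    return "keep"

pt = {"merge_count": 12, "v1": 0.05, "v2": 0.10}
print(screen_point(pt, alpha1=0.4, alpha2=1/3, mean_v2=0.6,
                   retain_threshold=50))
# low recent growth rate and too few merges to qualify for retention
```

Deleting the point also implies discarding its cache array, survival frame number, and growth rates, as stated above.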
Through the above technical solution, when point clouds are merged, a merging times attribute serving as a spatial existence confidence is retained for each coordinate point: the more times a coordinate point has been merged, the higher the probability that it is actually present at the corresponding position. A de-duplication algorithm based on the merging times attribute can fully take the confidence distribution within the point cloud into account, avoid removing accurate coordinate points while retaining inaccurately positioned ones, improve the positional accuracy of the instance object's point cloud, and ultimately improve the accuracy of 3D map construction. Moreover, by utilizing the merging times attribute of the coordinate points, a coordinate-point-level ghost removal algorithm is realized, which has an accurate ghost removal effect on objects to be detected of different sizes. Meanwhile, the per-frame coordinate point merging achieves a certain downsampling effect, so that the number of points in the global point cloud image can be controlled at every frame, reducing the memory occupation of the global point cloud image and improving the processing efficiency of the subsequent ghost removal algorithm.
FIG. 1 is a flowchart of a three-dimensional mapping method in one embodiment. It should be understood that, although the steps in the flowchart of FIG. 1 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially, but may be performed in turn or alternately with at least a portion of other steps or of the sub-steps or stages of other steps.
Fig. 2 schematically shows a block diagram of a three-dimensional mapping apparatus according to an embodiment of the present application. As shown in Fig. 2, an embodiment of the present application provides a three-dimensional mapping apparatus, which may include:
a memory configured to store instructions; and
a processor configured to call the instructions from the memory and, when executing the instructions, to implement the three-dimensional mapping method.
Specifically, in an embodiment of the present application, a processor may be configured to:
The method comprises: acquiring a current frame collected by a robot, and obtaining, based on a preset algorithm, a local point cloud image corresponding to a target site, wherein the local point cloud image includes first point clouds corresponding to a plurality of first instance objects; acquiring a global point cloud image corresponding to the target site, wherein the global point cloud image includes second point clouds corresponding to a plurality of second instance objects; for any first instance object, searching the plurality of second instance objects contained in the global point cloud image for a target instance object matched with the first instance object, and executing a point cloud merging operation on the first point cloud corresponding to the first instance object and the second point cloud corresponding to the target instance object to update the global point cloud image, wherein the point cloud merging operation is used for merging any first coordinate point in the first point cloud into a target coordinate point adjacent to the first coordinate point in the second point cloud and updating the first merging times of the target coordinate point; and, in the case that a preset screening condition is met, executing a screening and clearing operation on the second coordinate points in the global point cloud image according to the first merging times and a preset point cloud deletion condition, so as to update the global point cloud image.
In an embodiment of the application, the processor may be configured to:
The method comprises: for any first instance object, determining the spatial overlap degree between each second instance object included in the global point cloud image and the first instance object; determining, among the plurality of second instance objects, a second instance object whose spatial overlap degree is greater than a first preset threshold as a third instance object; determining the visual feature similarity between the third instance object and the first instance object; and, in the case that the visual feature similarity between the third instance object and the first instance object is greater than a second preset threshold, determining the third instance object as the target instance object matched with the first instance object.
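The two-stage matching might be sketched as follows. The 1-D interval overlap and cosine similarity used here are illustrative stand-ins, since the application does not fix the exact overlap measure or feature metric:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def interval_overlap(a, b):
    """Illustrative 1-D interval IoU standing in for the spatial
    overlap degree between two instance objects."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    inter = max(0.0, hi - lo)
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def find_target_instance(first_obj, second_objs, overlap_thr=0.5, sim_thr=0.8):
    for cand in second_objs:
        # stage 1: spatial overlap above the first preset threshold
        if interval_overlap(first_obj["box"], cand["box"]) > overlap_thr:
            # stage 2: visual feature similarity above the second threshold
            if cosine_sim(first_obj["feat"], cand["feat"]) > sim_thr:
                return cand          # the matched target instance object
    return None

chair_new = {"box": (0.0, 2.0), "feat": [1.0, 0.0]}
chairs = [
    {"box": (5.0, 7.0), "feat": [1.0, 0.0]},   # far away: fails stage 1
    {"box": (0.2, 2.2), "feat": [0.9, 0.1]},   # overlaps and looks alike
]
match = find_target_instance(chair_new, chairs)
print(match is chairs[1])   # True
```

Requiring spatial overlap first keeps the feature comparison cheap, since only nearby candidates (the "third instance objects") reach stage 2.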
In an embodiment of the application, the processor may be configured to:
The method comprises: for any first instance object, determining a neighborhood space corresponding to any first coordinate point of the first point cloud corresponding to the first instance object; determining whether any second coordinate point included in the second point cloud corresponding to the target instance object exists in the neighborhood space corresponding to the first coordinate point; and, in the case that such a second coordinate point exists, determining the second coordinate point existing in the neighborhood space as the target coordinate point, keeping the target coordinate point in the global point cloud image, and updating the first merging times of the target coordinate point, so as to execute the point cloud merging operation.
In an embodiment of the application, the processor may be configured to:
The method comprises: for any first instance object, in the case that no second coordinate point included in the second point cloud exists in the neighborhood space corresponding to any first coordinate point, adding the first coordinate point to the global point cloud image, so as to execute the point cloud merging operation.
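The merge-or-add behavior of the point cloud merging operation can be sketched as follows; the spherical neighborhood with a fixed radius and the field names are assumptions for illustration:

```python
def merge_point(first_pt, second_cloud, radius=0.05):
    """If a second coordinate point lies in the neighborhood (here a
    simple radius ball) of the incoming first coordinate point, keep it
    and bump its merge count and survival frames; otherwise add the new
    point to the global point cloud image."""
    for pt in second_cloud:
        dx = pt["xyz"][0] - first_pt[0]
        dy = pt["xyz"][1] - first_pt[1]
        dz = pt["xyz"][2] - first_pt[2]
        if dx * dx + dy * dy + dz * dz <= radius * radius:
            pt["merge_count"] += 1        # first merging times += 1
            pt["survival"] += 1           # survival frame number += 1
            return pt
    new_pt = {"xyz": first_pt, "merge_count": 0, "survival": 0}
    second_cloud.append(new_pt)
    return new_pt

cloud = [{"xyz": (0.0, 0.0, 0.0), "merge_count": 3, "survival": 4}]
merge_point((0.01, 0.0, 0.0), cloud)      # falls in the neighborhood
merge_point((1.0, 1.0, 1.0), cloud)       # no neighbor: added as new
print(cloud[0]["merge_count"], len(cloud))   # 4 2
```

Because nearby observations collapse into one stored point, this step also provides the per-frame downsampling effect noted above; a real implementation would use a spatial index rather than a linear scan.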
In an embodiment of the application, each second instance object corresponds to a visual feature, the visual feature comprising a plurality of dimension vectors, the processor may be configured to:
The method comprises: in the case that all first coordinate points included in the first point cloud corresponding to any first instance object have completed the point cloud merging operation, performing an instance merging operation on the first instance object and the target instance object and updating the second merging times of the target instance object, wherein the instance merging operation is used for merging the visual features of the first instance object and the target instance object; determining, according to the second merging times of the target instance object corresponding to the first instance object, a merging weight corresponding to the first instance object; and updating, according to the merging weight corresponding to the first instance object, the value of each dimension vector included in the visual feature corresponding to the target instance object, so as to update the global point cloud image.
In an embodiment of the application, the processor may be configured to:
updating the value of each dimension vector according to formula (1):
F'_j = (1 − w) · F_j + w · G_j (1)

wherein F'_j refers to the updated value of the j-th dimension vector included in the visual feature corresponding to the target instance object, w refers to the merging weight corresponding to any first instance object, F_j refers to the value, before updating, of the j-th dimension vector included in the visual feature corresponding to the target instance object, and G_j refers to the value of the j-th dimension vector included in the visual feature corresponding to the first instance object.
In an embodiment of the application, the processor may be configured to:
the merging weight corresponding to any first instance object is calculated according to the following formula (2):
w = 1 / (p + 1) (2)

wherein w refers to the merging weight corresponding to any first instance object, and p is the second merging times of the target instance object corresponding to the first instance object.
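Formulas (1) and (2), read together as a running mean over the p previous merges, can be sketched as follows (the function name is illustrative):

```python
def merge_features(target_feat, new_feat, p):
    """Instance merging of visual features: merge weight w = 1/(p+1),
    where p is the target instance object's second merging times, then
    each dimension is updated as f' = (1-w)*f_old + w*f_new. With this
    weight the update is exactly a running mean over all observations."""
    w = 1.0 / (p + 1)
    return [(1 - w) * old + w * new for old, new in zip(target_feat, new_feat)]

# After one previous merge (p=1), old and new features are averaged:
print(merge_features([1.0, 0.0], [0.0, 1.0], p=1))   # [0.5, 0.5]
```

As p grows, w shrinks, so a long-observed instance's feature becomes increasingly stable against any single noisy frame.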
In an embodiment of the application, the processor may be configured to:
The method comprises: in the case that the preset screening condition is met, for any second instance object, determining the first merging times, at the current frame and at each target historical frame before the current frame, of any second coordinate point of the second point cloud corresponding to the second instance object, and generating, according to these first merging times, the cache array corresponding to the second coordinate point; determining, according to the cache array corresponding to the second coordinate point, the merging growth rate of the second coordinate point at the current frame; and executing the screening and clearing operation on the second instance object according to the merging growth rate of each second coordinate point of its second point cloud and the preset point cloud deletion condition, so as to update the global point cloud image.
In an embodiment of the present application, the preset screening condition means that the number of frames of the current frame corresponds to a preset interval number of frames, and the processor may be configured to:
determining, for any second instance object, the first target merging times and the second target merging times in the cache array corresponding to any second coordinate point of the second point cloud corresponding to the second instance object, wherein the first target merging times is the first merging times corresponding to the first target frame that precedes the current frame by the preset interval frame number, and the second target merging times is the first merging times corresponding to the current frame after the point cloud merging operation is completed; and determining, according to the first target merging times and the second target merging times corresponding to the second coordinate point, the first merging growth rate corresponding to the second coordinate point.
In an embodiment of the application, the processor may be configured to:
The first merging growth rate corresponding to any one of the second coordinate points is calculated according to the following formula (3):
v1_i = (C_i − C'_i) / m (3)

wherein v1_i refers to the first merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, C'_i refers to the first target merging times corresponding to the i-th second coordinate point, C_i refers to the second target merging times corresponding to the i-th second coordinate point, and m is the preset interval frame number.
In an embodiment of the present application, the point cloud merging operation is further used for updating a survival frame number of the target coordinate point in the second point cloud corresponding to any target instance object, and the processor may be configured to:
determining, for any second instance object, the survival frame number at the current frame of any second coordinate point of the second point cloud corresponding to the second instance object, and determining, according to the first merging times and the survival frame number at the current frame of the second coordinate point, the second merging growth rate corresponding to the second coordinate point.
In an embodiment of the application, the processor may be configured to:
the second merging growth rate corresponding to any one of the second coordinate points is calculated according to the following formula (4):
v2_i = C_i / T_i (4)

wherein v2_i refers to the second merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, C_i refers to the first merging times, at the current frame, of the i-th second coordinate point, and T_i refers to the survival frame number, at the current frame, of the i-th second coordinate point.
In the embodiment of the application, the merging growth rate comprises a first merging growth rate, and the function expression of the preset point cloud deletion condition is shown in the following formula (5):
v1_(i,z) < α1, z = 1, 2, ..., Z (5)

wherein v1_(i,z) refers to the first merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object at the z-th checkpoint viewed forward from the current frame, Z refers to the number of checkpoints viewed forward from the current frame and is a positive integer, and α1 refers to the first deletion dynamics control parameter.
In the embodiment of the application, the merging growth rate comprises a second merging growth rate, and the function expression of the preset point cloud deletion condition is shown in the following formula (6):
v2_i < α2 · (1/N) · Σ_(k=1..N) v2_k (6)

wherein v2_i refers to the second merging growth rate corresponding to the i-th second coordinate point of the second point cloud corresponding to any second instance object, v2_k refers to the second merging growth rate corresponding to the k-th second coordinate point of that second point cloud, N refers to the total number of second coordinate points of that second point cloud, and α2 refers to the second deletion dynamics control parameter.
In an embodiment of the application, the processor may be configured to:
for any second instance object, in the case that the second target merging times corresponding to any second coordinate point of the second point cloud corresponding to the second instance object is greater than or equal to the retention critical frame number, retaining the second coordinate point in the updated global point cloud image.
The embodiment of the application also provides a machine-readable storage medium, wherein the machine-readable storage medium is stored with instructions for causing a machine to execute the three-dimensional mapping method.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 3. The computer device includes a processor a01, a network interface a02, a memory (not shown) and a database (not shown) connected by a system bus. Wherein the processor a01 of the computer device is adapted to provide computing and control capabilities. The memory of the computer device includes internal memory a03 and nonvolatile storage medium a04. The nonvolatile storage medium a04 stores an operating system B01, a computer program B02, and a database (not shown in the figure). The internal memory a03 provides an environment for the operation of the operating system B01 and the computer program B02 in the nonvolatile storage medium a04. The database of the computer device is used for storing three-dimensional mapping data. The network interface a02 of the computer device is used for communication with an external terminal through a network connection. The computer program B02 is executed by the processor a01 to implement a three-dimensional mapping method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or nonvolatile memory, etc., such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.