CN108416808B - Method and device for vehicle relocation - Google Patents

Method and device for vehicle relocation

Info

Publication number
CN108416808B
CN108416808B
Authority
CN
China
Prior art keywords
preset
feature information
feature
environment image
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810157705.2A
Other languages
Chinese (zh)
Other versions
CN108416808A (en)
Inventor
卢彦斌
胡祝青
刘青
Current Assignee
Zebra Network Technology Co Ltd
Original Assignee
Zebred Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zebred Network Technology Co Ltd
Priority to CN201810157705.2A
Publication of CN108416808A
Application granted
Publication of CN108416808B
Status: Active
Anticipated expiration


Abstract


The present invention provides a method and device for relocating a vehicle. The method includes: acquiring an environment image of a vehicle to be positioned; extracting preset feature information from the environment image, where the preset feature information includes geometric feature information and/or semantic feature information; constructing a visual feature corresponding to the environment image according to the preset feature information; and matching the visual feature with preset visual features to determine the position of the vehicle to be positioned, where the preset visual features are visual features in the map data. The method and device for vehicle relocation provided by the present invention reduce the amount of computation in the relocation process and improve the robustness of the computation.

Figure 201810157705

Description

Vehicle repositioning method and device
Technical Field
The invention relates to the technical field of vehicle positioning, in particular to a method and a device for vehicle repositioning.
Background
The Internet of Vehicles is a network and application system that has emerged in recent years with the primary aims of improving traffic efficiency and traffic safety. Vehicle positioning is one of its key technologies, and accurate position acquisition is of great significance for improving the safety of intelligent vehicles and realizing autonomous driving.
At present, maps for high-precision navigation and positioning of automobiles fall mainly into two types: maps based primarily on laser point clouds (lidar maps) and maps based primarily on vector information (high-precision vector maps). When a vehicle traveling on a high-precision map suddenly loses its position for some reason, its position in the high-precision map must be restored quickly and accurately (called repositioning) to ensure the normal operation of the vehicle (particularly its navigation system). In the prior art, the main techniques are repositioning based on laser point cloud matching and repositioning based on image point features. The laser point cloud matching method depends on auxiliary information such as GPS, an IMU, and an odometer to provide a relatively accurate initial search position; when such auxiliary information is lacking (for example, in tunnels or among tall buildings), the computation required for relocation is very large and cannot be completed quickly. Repositioning based on image point features is less robust.
Therefore, how to reduce the amount of computation in the relocation process and improve the robustness of the computation is an urgent problem to be solved by those skilled in the art.
Disclosure of Invention
The invention provides a vehicle repositioning method and device, which are used for reducing the calculation amount in the repositioning process and improving the calculation robustness.
The embodiment of the invention provides a vehicle repositioning method, which comprises the following steps:
acquiring an environment image of a vehicle to be positioned;
extracting preset characteristic information in the environment image; the preset feature information comprises geometric feature information and/or semantic feature information;
constructing a visual feature corresponding to the environment image according to the preset feature information;
matching the visual features with preset visual features to determine the position of the vehicle to be positioned; and the preset visual features are visual features in the map data.
In an embodiment of the present invention, the constructing the visual feature corresponding to the environment image according to the preset feature information includes:
determining a descriptor corresponding to the preset characteristic information;
determining words in a bag-of-words model corresponding to the descriptors; wherein each of the words corresponds to one or more of the descriptors;
and constructing the visual features corresponding to the environment images according to the number of the descriptors matched with each word.
In an embodiment of the present invention, before the constructing the visual feature corresponding to the environment image according to the preset feature information, the method further includes:
dividing the environment image into a plurality of sub-regions;
the constructing of the visual feature corresponding to the environment image according to the preset feature information includes:
determining a feature vector corresponding to preset feature information in each sub-region;
and carrying out vector combination on the feature vectors corresponding to each sub-region according to distribution positions to construct visual features corresponding to the environment image.
In an embodiment of the present invention, before dividing the environment image into a plurality of sub-regions, the method further includes:
determining a blanking point (vanishing point) in the environment image;
the dividing the environment image into a plurality of sub-regions comprises:
dividing the environmental image into the plurality of sub-regions according to the blanking points.
In an embodiment of the present invention, the extracting the preset feature information in the environment image includes:
extracting characteristic information in the environment image;
selecting the preset feature information from the feature information according to a preset rule; the preset rule is one or a combination of a random sampling rule and a rule of uniform sampling over the normal-vector distribution.
The embodiment of the invention also provides a vehicle repositioning device, which comprises:
an acquisition unit, configured to acquire an environment image of the vehicle to be positioned;
the extraction unit is used for extracting preset characteristic information in the environment image; the preset feature information comprises geometric feature information and/or semantic feature information;
the construction unit is used for constructing the visual characteristics corresponding to the environment image according to the preset characteristic information;
the determining unit is used for matching the visual features with preset visual features so as to determine the position of the vehicle to be positioned; and the preset visual features are visual features in the map data.
In an embodiment of the present invention, the constructing unit is specifically configured to determine a descriptor corresponding to the preset feature information; determine words in a bag-of-words model corresponding to the descriptors, wherein each of the words corresponds to one or more of the descriptors; and construct the visual feature corresponding to the environment image according to the number of descriptors matched with each word.
In an embodiment of the present invention, the apparatus for repositioning vehicles further comprises a dividing unit;
the dividing unit is used for dividing the environment image into a plurality of sub-areas;
the construction unit is specifically configured to determine a feature vector corresponding to preset feature information in each sub-region; and carrying out vector combination on the feature vectors corresponding to each sub-region according to distribution positions to construct visual features corresponding to the environment image.
In an embodiment of the present invention, the determining unit is further configured to determine a blanking point in the environment image;
the dividing unit is specifically configured to divide the environmental image into the plurality of sub-regions according to the blanking point.
In an embodiment of the invention, the environment image comprises laser point cloud data;
the extraction unit is specifically configured to extract feature information in the environment image and select the preset feature information from the feature information according to a preset rule; the preset rule is one or a combination of a random sampling rule and a rule of uniform sampling over the normal-vector distribution.
According to the method and device for repositioning a vehicle provided by the embodiment of the invention, the environment image of the vehicle to be positioned is obtained and the preset feature information in the environment image is extracted; then the visual feature corresponding to the environment image is constructed according to the preset feature information; and then the visual feature is matched with preset visual features to determine the position of the vehicle to be positioned. Thus, when determining the position of the vehicle, the method and device match the visual feature corresponding to the pre-constructed environment image against the preset visual features of the map data, which reduces the amount of computation in the repositioning process and improves the robustness of the computation.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic illustration of a method of vehicle repositioning provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of an environment image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an environment image labeled with point features according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an environment image marked with line and circle features according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an environment image labeled with semantic features according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a visual feature corresponding to a constructed environment image according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a visual feature corresponding to an environment image constructed by corresponding words in a bag-of-words model according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a visual feature corresponding to another constructed environment image according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a visual feature corresponding to an environment image constructed by dividing sub-regions according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an embodiment of dividing an environmental image by blanking points;
FIG. 11 is a schematic diagram of another embodiment of the present invention for dividing an environmental image by a blanking point;
FIG. 12 is a schematic diagram of another embodiment of dividing an environmental image by blanking points according to the present invention;
FIG. 13 is a schematic structural diagram of a vehicle repositioning device according to an embodiment of the invention;
fig. 14 is a schematic structural diagram of another vehicle repositioning device according to an embodiment of the present invention.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention and how to solve the above technical problems will be described in detail with specific examples. The following specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a vehicle repositioning method according to an embodiment of the present invention, where the vehicle repositioning method may be performed by a vehicle repositioning device, and the vehicle repositioning device may be disposed independently or in a processor of a vehicle, as shown in fig. 1, and the vehicle repositioning method may include:
s101, obtaining an environment image of the vehicle to be positioned.
Wherein the environment image is used for indicating the surrounding environment condition of the vehicle to be positioned. Optionally, the environment image may further include laser point cloud data and GPS data. The laser point cloud information can reflect real three-dimensional geometric information and material information of the surrounding environment; the GPS information can reflect latitude and longitude information of the surrounding environment.
In the embodiment of the invention, the environmental image of the vehicle to be positioned can be acquired through the sensor, and the environmental image of the vehicle to be positioned can also be acquired through other modes. For example, please refer to fig. 2, fig. 2 is a schematic diagram of an environment image according to an embodiment of the present invention, where the environment image may include lane line information, street lamp information, traffic light information, and the like.
And S102, extracting preset characteristic information in the environment image.
The preset feature information may include geometric feature information and/or semantic feature information.
The geometric feature information here may include point feature information, and may also include line feature information and circle feature information. That is, in the embodiment of the present invention, when the preset feature information in the environment image is extracted, only one of the geometric feature information and the semantic feature information in the environment image may be extracted, or the geometric feature information and the semantic feature information in the environment image may be extracted at the same time. In detail, when preset feature information in the environment image is extracted, only line feature information and circle feature information may be extracted; or only semantic feature information can be extracted; of course, the point feature information, the line feature information, and the circle feature information may be extracted; the point feature information and the semantic feature information may be extracted, the line feature information, the circle feature information, and the semantic feature information may be extracted, or the point feature information, the line feature information, the circle feature information, and the semantic feature information may be extracted at the same time.
For example, in the embodiment of the present invention, the grayscale features such as the point features may be image features having feature descriptors, such as SIFT features, SURF features, and ORB features, or image features of feature point combination descriptors, such as FAST feature points and BRISK descriptors. Because the gray scale features of the images such as the point features can reflect the texture information of the surrounding environment and have certain invariance, the similarity of the images can be measured by comparing the similarity of the feature points in two different images, and further the similarity of the vehicle positions can be measured. For example, please refer to fig. 3, where fig. 3 is a schematic diagram of an environment image labeled with point features according to an embodiment of the present invention.
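As an illustrative sketch only (not part of the patent), the idea that point-feature descriptors from images taken at similar positions match closely can be demonstrated with synthetic ORB-style 256-bit binary descriptors and Hamming-distance nearest-neighbour matching; all data here are invented:

```python
import numpy as np

def hamming_dist(a, b):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_count(desc_a, desc_b, max_dist=64):
    """Count descriptors in desc_a whose nearest neighbour in desc_b is within
    max_dist bits -- a simple proxy for image (and hence position) similarity."""
    count = 0
    for d in desc_a:
        if min(hamming_dist(d, e) for e in desc_b) <= max_dist:
            count += 1
    return count

rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, size=(20, 32), dtype=np.uint8)   # 20 ORB-style descriptors
noise = (rng.random((20, 32)) < 0.02).astype(np.uint8)
img2 = np.bitwise_xor(img1, noise)                           # same scene, slightly perturbed
img3 = rng.integers(0, 256, size=(20, 32), dtype=np.uint8)   # unrelated scene

print(match_count(img1, img2))   # high: nearly every descriptor finds a close match
print(match_count(img1, img3))   # low: random descriptors are ~128 bits apart
```

In a real system the descriptors would come from a detector such as ORB; here the threshold of 64 bits is an arbitrary illustrative choice.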
The geometric features of the image can reflect the geometric projection information of the surrounding environment. Taking geometric features that include line features and circle features as an example, line features can be obtained by the Hough transform or a Line Segment Detector, and a line feature can be described by a Line Band Descriptor (LBD) or the like. The geometric features of the image reflect the geometric information of the surrounding environment: for example, lane lines are oblique straight lines, the lampposts of traffic lights are vertical straight lines, and buildings contain oblique, vertical, and horizontal straight lines. Because geometric line segments have a certain scale (length), they have similar distributions in images taken at similar positions, so descriptors of geometric features can also be used to measure the similarity of images, and hence the similarity of vehicle positions can be measured by comparing the geometric features in two different images. For example, please refer to fig. 4, which is a schematic diagram of an environment image labeled with line features and circle features according to an embodiment of the present invention; as can be seen from fig. 4, the lane line information may be labeled as line features and the traffic light information may be labeled as circle features.
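To illustrate the geometric regularities just described (lane lines oblique, lampposts vertical), here is a minimal sketch that classifies a detected line segment by its angle; the segment endpoints and the 10-degree tolerance are hypothetical, and real segments would come from a Hough transform or Line Segment Detector:

```python
import math

def classify_segment(x1, y1, x2, y2, tol_deg=10.0):
    """Classify a line segment (e.g. Hough/LSD output) by orientation:
    'horizontal', 'vertical', or 'oblique'."""
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    if angle < tol_deg or angle > 180.0 - tol_deg:
        return "horizontal"
    if abs(angle - 90.0) < tol_deg:
        return "vertical"
    return "oblique"

print(classify_segment(0, 0, 100, 5))    # near-horizontal building edge
print(classify_segment(50, 0, 52, 100))  # lamppost
print(classify_segment(0, 100, 60, 40))  # lane line
```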
The semantic features of the image can reflect the real meaning information of the surrounding environment, and the semantic feature information can be common road elements such as lane lines, road signboards, speed limit signs, street lamps, traffic lights, stop lines and the like, and can also be local information related to driving such as parking lot entrances and exits, parking spaces, gas stations and the like. Images of vehicles at similar positions necessarily contain extremely similar semantic information, and therefore, the similarity of the vehicle positions can be measured. Referring to fig. 5, fig. 5 is a schematic diagram of an environment image labeled with semantic features according to an embodiment of the present invention, and it can be seen from fig. 5 that lane line information can be labeled with semantic features, traffic light information can be labeled with semantic features, and street light information can be labeled with semantic features.
Optionally, when the environment image includes the laser point cloud data, extracting the preset feature information in the environment image may be implemented in the following possible manners:
extracting feature information in the environment image, and selecting the preset feature information from the feature information according to a preset rule, where the preset rule is one or a combination of a random sampling rule and a rule of uniform sampling over the normal-vector distribution.
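A minimal sketch of the two sampling rules, under the assumption that each point-cloud feature carries a normal vector; binning by azimuth angle and every parameter below are illustrative choices, not the patent's specification:

```python
import numpy as np

def sample_features(normals, k, rule="normal_uniform", bins=8, seed=0):
    """Select k feature indices from a point cloud.
    'random'         -- plain random subsampling;
    'normal_uniform' -- bin points by normal direction (azimuth) and draw
                        roughly k/bins indices from each bin, so all surface
                        orientations stay represented."""
    rng = np.random.default_rng(seed)
    n = len(normals)
    if rule == "random":
        return rng.choice(n, size=min(k, n), replace=False)
    azimuth = np.arctan2(normals[:, 1], normals[:, 0])                # [-pi, pi)
    bin_ids = ((azimuth + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    picked = []
    per_bin = max(1, k // bins)
    for b in range(bins):
        idx = np.flatnonzero(bin_ids == b)
        if len(idx):
            picked.extend(rng.choice(idx, size=min(per_bin, len(idx)), replace=False))
    return np.array(picked[:k])

normals = np.random.default_rng(1).normal(size=(1000, 3))  # synthetic normals
sel = sample_features(normals, 80)
print(len(sel))
```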
After the preset feature information in the environment image is extracted, S103, which is described below, may be performed to construct a visual feature corresponding to the environment image according to the preset feature information.
S103, constructing a visual characteristic corresponding to the environment image according to the preset characteristic information.
Optionally, in the embodiment of the present invention, the S103 constructing, according to the preset feature information, the visual feature corresponding to the environment image may be implemented in at least two possible manners, where one possible manner is to construct the visual feature corresponding to the environment image by a word in the corresponding bag-of-words model; another possible way is to construct the corresponding visual features of the environment image by dividing the sub-regions. In the following, these two possible implementations will be described in detail.
In a possible implementation manner, a visual feature corresponding to an environment image may be constructed by a word in a corresponding bag-of-words model, please refer to fig. 6, where fig. 6 is a schematic diagram of a visual feature corresponding to an environment image according to an embodiment of the present invention.
S601, determining a descriptor corresponding to the preset characteristic information.
S602, determining the words in the bag-of-words model corresponding to the descriptors.
Wherein each word corresponds to one or more descriptors.
It should be noted that the preset feature information may correspond to a plurality of descriptors, each of which corresponds to a word in the bag-of-words model. Since several descriptors may correspond to the same word, the number of distinct words may be smaller than the number of descriptors.
And S603, constructing visual features corresponding to the environment image according to the number of the descriptors matched with each word.
After each descriptor is corresponding to a word in the bag-of-words model, the number of the descriptors corresponding to each word can be calculated, so that a feature vector is generated according to the number of the descriptors matched with each word, and the feature vector is the visual feature corresponding to the environment image. For example, if the extracted preset features correspond to 500 descriptors, where 200 descriptors correspond to word 1 in the bag-of-words model, 200 descriptors correspond to word 2 in the bag-of-words model, and the remaining 100 descriptors correspond to word 3 in the bag-of-words model, the number of descriptors matched with word 1 is 200, the number of descriptors matched with word 2 is 200, and the number of descriptors matched with word 3 is 100, then a feature vector (200, 200, 100) is generated according to the number of descriptors matched with each word, and the feature vector (200, 200, 100) is the visual feature corresponding to the environment image, so as to implement the construction of the visual feature corresponding to the environment image. For example, please refer to fig. 7, fig. 7 is a schematic diagram illustrating a visual feature corresponding to an environment image constructed by words in a corresponding bag-of-words model according to an embodiment of the present invention.
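The counting step above can be sketched as follows; the toy 2-D "descriptors" and three-word vocabulary are invented so as to reproduce the 500-descriptor example (200/200/100), whereas a real system would use high-dimensional descriptors and a vocabulary learned by clustering:

```python
import numpy as np

def bow_feature(descriptors, vocabulary):
    """Assign each descriptor to its nearest 'word' (cluster centre) and
    return the per-word count histogram as the image's visual feature."""
    # squared Euclidean distance from every descriptor to every word
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    return np.bincount(words, minlength=len(vocabulary))

# toy vocabulary of 3 words in a 2-D descriptor space
vocab = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rng = np.random.default_rng(0)
desc = np.vstack([
    vocab[0] + 0.1 * rng.normal(size=(200, 2)),  # 200 descriptors near word 1
    vocab[1] + 0.1 * rng.normal(size=(200, 2)),  # 200 near word 2
    vocab[2] + 0.1 * rng.normal(size=(100, 2)),  # 100 near word 3
])
print(bow_feature(desc, vocab))  # the visual feature (200, 200, 100)
```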
In another possible implementation manner, a visual feature corresponding to an environment image may be constructed by dividing sub-regions, please refer to fig. 8, where fig. 8 is a schematic diagram of another visual feature corresponding to an environment image according to an embodiment of the present invention.
S801, dividing the environment image into a plurality of sub-areas.
S802, determining a feature vector corresponding to the preset feature information in each sub-area.
And S803, performing vector combination on the feature vectors corresponding to each sub-region according to the distribution positions to construct visual features corresponding to the environment image.
In this way, when the visual features corresponding to the environment image are constructed, the environment image needs to be divided into a plurality of sub-regions, the feature vectors corresponding to the preset feature information in each sub-region are determined, then, the feature vectors corresponding to each sub-region are subjected to vector combination according to the distribution position, and the vectors obtained after the vector combination are the visual features corresponding to the environment image. For example, if the environment image is divided into 4 sub-regions, the feature vector corresponding to the first sub-region is a, the feature vector corresponding to the second sub-region is b, the feature vector corresponding to the third sub-region is c, and the feature vector corresponding to the fourth sub-region is d, if the distribution positions of the four sub-regions are the first sub-region, the second sub-region, the third sub-region, and the fourth sub-region, the feature vectors corresponding to each sub-region are vector-combined according to the distribution positions, and the obtained vector is (a, b, c, d). Referring to fig. 9, fig. 9 is a schematic diagram illustrating a visual feature corresponding to an environment image constructed by dividing sub-regions according to an embodiment of the present invention. As can be seen from fig. 9, when the visual features corresponding to the environment image are constructed, the environment image is divided into 9 sub-regions, so that the visual features corresponding to the environment image are constructed according to the feature vector corresponding to each sub-region in the 9 sub-regions.
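A minimal sketch of the sub-region scheme, using per-cell feature-point counts as a (hypothetical) per-region feature vector and concatenating them in raster order; the 3x3 grid and the point coordinates are illustrative:

```python
import numpy as np

def grid_feature(points, width, height, rows=3, cols=3):
    """Divide the image into rows x cols sub-regions and concatenate the
    per-region feature-point counts in raster order, so the resulting
    vector also encodes where in the image the features lie."""
    feat = np.zeros(rows * cols, dtype=int)
    for x, y in points:
        r = min(int(y / height * rows), rows - 1)
        c = min(int(x / width * cols), cols - 1)
        feat[r * cols + c] += 1
    return feat

# hypothetical feature-point locations in a 300x300 image
pts = [(10, 10), (20, 30), (150, 150), (290, 290)]
print(grid_feature(pts, 300, 300))  # two points top-left, one centre, one bottom-right
```

Because the concatenation follows the distribution positions of the sub-regions, two images with the same features in different parts of the frame produce different visual features.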
Optionally, in the above scheme of constructing the visual feature corresponding to the environment image by dividing the sub-regions, the environment image may be divided into a plurality of sub-regions by the blanking point, and therefore the blanking point in the environment image needs to be determined first. Referring to fig. 10 to 12, fig. 10 is a schematic diagram illustrating an environmental image divided by a blanking point according to an embodiment of the present invention, fig. 11 is a schematic diagram illustrating another environmental image divided by a blanking point according to an embodiment of the present invention, and fig. 12 is a schematic diagram illustrating another environmental image divided by a blanking point according to an embodiment of the present invention, wherein fig. 10, fig. 11, and fig. 12 respectively determine blanking points from different spatial points, so as to divide the environmental image according to the blanking points.
It should be noted that a blanking point (vanishing point) is the intersection point, in the image, of a set of straight lines that are parallel in the real world. Referring to figs. 10-12, the blanking point is the intersection of the extended lines (black dashed lines) of the two lane lines. A blanking point is a point on the horizon at infinity, and all such points make up the horizon; thus, in an image, a blanking point may serve as a reference point for spatial division. In particular, on a road, all the lane lines form a set of parallel lines, which correspond to the same blanking point in the image. The position of the blanking point in the image is related to the focal length of the camera, the pixel parameters, and the direction of the parallel lines in the real world. Because of differences in vehicle model and in camera mounting position and angle, the blanking point in the image is not constant. Figs. 10 and 11 show schematic diagrams of the blanking points in environment images captured at two different positions of one vehicle during driving; figs. 10 and 12 show schematic diagrams of the blanking points in environment images acquired by different vehicles (such as a car and an SUV) during driving. The sub-regions divided based on the blanking point have a certain translation invariance, which increases the accuracy of the visual feature comparison.
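As a sketch of how a blanking (vanishing) point can be computed once two lane-line segments have been detected, the intersection can be taken in homogeneous coordinates; the segment endpoints below are invented:

```python
import numpy as np

def vanishing_point(seg1, seg2):
    """Intersect two image line segments (each ((x1,y1),(x2,y2))) via
    homogeneous coordinates; for two lane lines this intersection is the
    blanking (vanishing) point."""
    def line(seg):
        (x1, y1), (x2, y2) = seg
        # the cross product of two homogeneous points is the line through them
        return np.cross([x1, y1, 1.0], [x2, y2, 1.0])
    p = np.cross(line(seg1), line(seg2))
    return p[0] / p[2], p[1] / p[2]   # assumes the segments are not parallel in the image

# two lane lines converging toward the middle of a 640x480 image
left = ((100, 480), (280, 260))
right = ((540, 480), (360, 260))
vp = vanishing_point(left, right)
print(vp)  # a point near the centre of the frame, above the lane lines
```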
And S104, matching the visual features with preset visual features to determine the position of the vehicle to be positioned.
The preset visual features are visual features in the map data.
After the visual features corresponding to the environment images are constructed through the steps, the constructed visual features can be matched with the preset visual features, and therefore the position of the vehicle to be positioned is determined according to the matching result.
For example, in the embodiment of the present invention, preset visual features of key positions in map data may be obtained in advance, and after the visual features and the preset visual features are obtained respectively, the visual features may be matched with the preset visual features, and the position of the vehicle to be positioned may be determined according to the matching result. For example, the visual features may be matched by point cloud matching, feature matching, and pose optimization matching.
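One simple way to realize the matching step, assumed here for illustration since the patent leaves the matching method open (point cloud matching, feature matching, pose-optimization matching), is cosine similarity between the query's visual feature and the preset visual features of key map positions; the feature vectors below are invented:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def locate(query_feat, map_feats):
    """Match the query visual feature against the preset visual features of
    key map positions; return the best position index and its score."""
    scores = [cosine(query_feat, f) for f in map_feats]
    return int(np.argmax(scores)), max(scores)

# hypothetical bag-of-words visual features for three key map positions
map_feats = [np.array([200.0, 200.0, 100.0]),
             np.array([10.0, 300.0, 5.0]),
             np.array([50.0, 50.0, 400.0])]
query = np.array([190.0, 210.0, 95.0])   # image taken near position 0
pos, score = locate(query, map_feats)
print(pos)  # 0
```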
According to the method for repositioning a vehicle provided by the embodiment of the invention, the environment image of the vehicle to be positioned is obtained and the preset feature information in the environment image is extracted; then the visual feature corresponding to the environment image is constructed according to the preset feature information; and then the visual feature is matched with preset visual features to determine the position of the vehicle to be positioned. Thus, when determining the position of the vehicle, the method matches the visual feature corresponding to the pre-constructed environment image against the preset visual features of the map data, which reduces the amount of computation in the repositioning process and improves the robustness of the computation.
Fig. 13 is a schematic structural diagram of a vehicle repositioning device 130 according to an embodiment of the present invention. Referring to fig. 13, the vehicle repositioning device 130 may include:
an obtaining unit 1301, configured to obtain an environment image of the vehicle to be located;
an extracting unit 1302, configured to extract preset feature information in an environment image; the preset feature information includes geometric feature information and/or semantic feature information;
a constructing unit 1303, configured to construct the visual features corresponding to the environment image according to the preset feature information; and
a determining unit 1304, configured to match the visual features with preset visual features to determine a position of the vehicle to be positioned; the preset visual features are visual features in the map data.
Optionally, the constructing unit 1303 is specifically configured to: determine descriptors corresponding to the preset feature information; determine the words in a bag-of-words model corresponding to the descriptors, wherein each word corresponds to one or more descriptors; and construct the visual features corresponding to the environment image according to the number of descriptors matched with each word.
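The bag-of-words construction performed by the constructing unit can be sketched as follows: assign each descriptor to its nearest vocabulary word and use the per-word match counts as the visual feature. The tiny one-dimensional "descriptors" below are purely illustrative (real descriptors such as ORB would be binary strings compared with Hamming distance):

```python
def bow_feature(descriptors, vocabulary):
    """For each vocabulary word, count how many descriptors match it."""
    counts = [0] * len(vocabulary)
    for d in descriptors:
        # nearest word by absolute distance (a stand-in for a real descriptor metric)
        nearest = min(range(len(vocabulary)), key=lambda i: abs(vocabulary[i] - d))
        counts[nearest] += 1
    return counts

vocab = [0.0, 1.0, 2.0]                 # three "words" in the vocabulary
descs = [0.1, 0.2, 1.1, 1.9, 2.2]      # five descriptors from one image
feature = bow_feature(descs, vocab)
print(feature)  # → [2, 1, 2]
```

The resulting count vector is the visual feature that gets compared against the preset visual features stored with the map.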
Optionally, the vehicle repositioning device 130 may further include a dividing unit 1305. Referring to fig. 14, fig. 14 is a schematic structural diagram of another vehicle repositioning device 130 according to an embodiment of the present invention.
A dividing unit 1305 is configured to divide the environment image into a plurality of sub-regions.
The constructing unit 1303 is specifically configured to determine a feature vector corresponding to the preset feature information in each sub-region, and to combine the feature vectors corresponding to the sub-regions according to their distribution positions to construct the visual features corresponding to the environment image.
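The per-sub-region construction above can be sketched as computing one vector per sub-region and concatenating them in a fixed spatial (reading) order, so the layout of features across the image is preserved. The toy histogram and all names below are illustrative assumptions:

```python
def region_vector(words, n_bins=3):
    """Toy per-region descriptor: a histogram over quantized feature words."""
    hist = [0] * n_bins
    for w in words:
        hist[w % n_bins] += 1
    return hist

def image_feature(regions):
    """Concatenate sub-region vectors in their distribution (spatial) order."""
    feature = []
    for words in regions:
        feature.extend(region_vector(words))
    return feature

# Four sub-regions, each holding a few quantized feature words:
regions = [[0, 0, 1], [2], [1, 1], [0, 2, 2]]
print(image_feature(regions))  # 4 regions x 3 bins = 12-dimensional feature
```

Because each sub-region keeps its own slice of the final vector, two images only score highly against each other when similar features appear in similar parts of the frame.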
Optionally, the determining unit 1304 is further configured to determine a blanking point in the environment image.
The dividing unit 1305 is specifically configured to divide the environment image into a plurality of sub-regions according to the blanking points.
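Dividing the image according to the blanking point rather than the image center can be sketched as assigning each pixel to a quadrant around the blanking point, which is what gives the sub-regions their translation invariance across camera mountings. The quadrant indexing scheme below is an illustrative assumption:

```python
def region_of(pt, vp):
    """Quadrant index 0-3 of image point pt relative to the blanking point vp."""
    x, y = pt
    vx, vy = vp
    return (1 if x >= vx else 0) + (2 if y >= vy else 0)

vp = (320, 216)                          # blanking point found in the image
assert region_of((100, 100), vp) == 0    # upper-left of the blanking point
assert region_of((600, 400), vp) == 3    # lower-right of the blanking point
```

A camera mounted higher (e.g. on an SUV) shifts the blanking point in the image, and the quadrant boundaries shift with it, so the same scene content stays in the same sub-region.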
Optionally, the environmental image comprises laser point cloud data.
The extracting unit 1302 is specifically configured to extract feature information in the environment image, and to select the preset feature information from the feature information according to a preset rule, wherein the preset rule is one of, or a combination of, a random sampling rule and a normal-vector-distribution uniform sampling rule.
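The selection step above can be sketched with the random sampling rule: keep a fixed-size random subset of the extracted point-cloud features. (Under the normal-vector-distribution uniform sampling rule, points would instead be bucketed by normal direction and each bucket sampled evenly.) The function name and the fixed seed are illustrative assumptions:

```python
import random

def select_preset_features(features, k, seed=0):
    """Randomly select k features; a fixed seed makes the sketch deterministic."""
    rng = random.Random(seed)
    if k >= len(features):
        return list(features)
    return rng.sample(features, k)

cloud = [(i, i * 0.5, 0.0) for i in range(100)]   # toy laser point cloud
subset = select_preset_features(cloud, 10)
print(len(subset))  # → 10
```

Downsampling in this way bounds the number of descriptors fed to the bag-of-words step, which is one source of the reduced computation the method claims.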
The device 130 for vehicle relocation shown in the embodiment of the present invention may implement the technical solution of the method for vehicle relocation shown in any of the above embodiments; its implementation principle and beneficial effects are similar and are not described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (6)

1. A method for vehicle relocation, comprising:
obtaining an environment image of a vehicle to be positioned;
extracting preset feature information in the environment image, wherein the preset feature information comprises geometric feature information and/or semantic feature information;
constructing a visual feature corresponding to the environment image according to the preset feature information; and
matching the visual feature with a preset visual feature to determine a position of the vehicle to be positioned, wherein the preset visual feature is a visual feature in map data;
wherein constructing the visual feature corresponding to the environment image according to the preset feature information comprises:
determining descriptors corresponding to the preset feature information;
determining words in a bag-of-words model corresponding to the descriptors, wherein each word corresponds to one or more of the descriptors; and
constructing the visual feature corresponding to the environment image according to the number of descriptors matched with each word;
and wherein, before constructing the visual feature corresponding to the environment image according to the preset feature information, the environment image is divided into a plurality of sub-regions;
a feature vector corresponding to the preset feature information in each sub-region is determined; and
the feature vectors corresponding to the sub-regions are combined according to their distribution positions to construct the visual feature corresponding to the environment image.
2. The method according to claim 1, wherein before dividing the environment image into the plurality of sub-regions, the method further comprises:
determining a blanking point in the environment image;
and wherein dividing the environment image into the plurality of sub-regions comprises:
dividing the environment image into the plurality of sub-regions according to the blanking point.
3. The method according to any one of claims 1-2, wherein the environment image comprises laser point cloud data, and extracting the preset feature information in the environment image comprises:
extracting feature information in the environment image; and
selecting the preset feature information from the feature information according to a preset rule, wherein the preset rule is one of, or a combination of, a random sampling rule and a normal-vector-distribution uniform sampling rule.
4. A device for vehicle relocation, comprising:
an obtaining unit, configured to obtain an environment image of a vehicle to be positioned;
an extracting unit, configured to extract preset feature information in the environment image, wherein the preset feature information comprises geometric feature information and/or semantic feature information;
a constructing unit, configured to construct a visual feature corresponding to the environment image according to the preset feature information;
a determining unit, configured to match the visual feature with a preset visual feature to determine a position of the vehicle to be positioned, wherein the preset visual feature is a visual feature in map data; and
a dividing unit, configured to divide the environment image into a plurality of sub-regions;
wherein the constructing unit is specifically configured to: determine descriptors corresponding to the preset feature information; determine words in a bag-of-words model corresponding to the descriptors, wherein each word corresponds to one or more of the descriptors; construct the visual feature corresponding to the environment image according to the number of descriptors matched with each word; determine a feature vector corresponding to the preset feature information in each sub-region; and combine the feature vectors corresponding to the sub-regions according to their distribution positions to construct the visual feature corresponding to the environment image.
5. The device according to claim 4, wherein
the determining unit is further configured to determine a blanking point in the environment image; and
the dividing unit is specifically configured to divide the environment image into the plurality of sub-regions according to the blanking point.
6. The device according to any one of claims 4-5, wherein the environment image comprises laser point cloud data; and
the extracting unit is specifically configured to extract feature information in the environment image, and to select the preset feature information from the feature information according to a preset rule, wherein the preset rule is one of, or a combination of, a random sampling rule and a normal-vector-distribution uniform sampling rule.
CN201810157705.2A | 2018-02-24 | 2018-02-24 | Method and device for vehicle relocation | Active | CN108416808B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810157705.2A | 2018-02-24 | 2018-02-24 | Method and device for vehicle relocation

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810157705.2A | 2018-02-24 | 2018-02-24 | Method and device for vehicle relocation

Publications (2)

Publication Number | Publication Date
CN108416808A (en) | 2018-08-17
CN108416808B (en) | 2022-03-08

Family

ID=63128916

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810157705.2A | Active | CN108416808B (en) | Method and device for vehicle relocation

Country Status (1)

Country | Link
CN (1) | CN108416808B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109141444B (en) * | 2018-08-28 | 2019-12-06 | 北京三快在线科技有限公司 | Positioning method, positioning device, storage medium and mobile equipment
CN110147705B (en) * | 2018-08-28 | 2021-05-04 | 北京初速度科技有限公司 | Vehicle positioning method based on visual perception and electronic equipment
CN109255817A (en) * | 2018-09-14 | 2019-01-22 | 北京猎户星空科技有限公司 | A kind of the vision method for relocating and device of smart machine
CN111143489B (en) * | 2018-11-06 | 2024-01-09 | 北京嘀嘀无限科技发展有限公司 | Image-based positioning method and device, computer equipment and readable storage medium
CN109461211B (en) * | 2018-11-12 | 2021-01-26 | 南京人工智能高等研究院有限公司 | Semantic vector map construction method and device based on visual point cloud and electronic equipment
CN111322993B (en) * | 2018-12-13 | 2022-03-04 | 杭州海康机器人技术有限公司 | Visual positioning method and device
CN111750881B (en) * | 2019-03-29 | 2022-05-13 | 北京魔门塔科技有限公司 | Vehicle pose correction method and device based on light pole
CN111860084B (en) * | 2019-04-30 | 2024-04-16 | 千寻位置网络有限公司 | Image feature matching and positioning method and device and positioning system
CN110415297B (en) * | 2019-07-12 | 2021-11-05 | 北京三快在线科技有限公司 | Positioning method and device and unmanned equipment
CN110568447B (en) * | 2019-07-29 | 2022-03-08 | 广东星舆科技有限公司 | Visual positioning method, device and computer readable medium
EP3809313B1 (en) * | 2019-10-16 | 2025-01-29 | Ningbo Geely Automobile Research & Development Co., Ltd. | A vehicle parking finder support system, method and computer program product for determining if a vehicle is at a reference parking location
CN110967018B (en) * | 2019-11-25 | 2024-04-12 | 斑马网络技术有限公司 | Parking lot positioning method, device, electronic device and computer readable medium
CN111508258B (en) * | 2020-04-17 | 2021-11-05 | 北京三快在线科技有限公司 | Positioning method and device
DE102020213151A1 (en) * | 2020-10-19 | 2022-04-21 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method and device for mapping an operational environment for at least one mobile unit and for locating at least one mobile unit in an operational environment and localization system for an operational environment
CN114545400B (en) * | 2022-04-27 | 2022-08-05 | 陕西欧卡电子智能科技有限公司 | Global repositioning method of water surface robot based on millimeter wave radar

Citations (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101364347A (en) * | 2008-09-17 | 2009-02-11 | 同济大学 | A video-based detection method for vehicle control delays at intersections
CN101862194A (en) * | 2010-06-17 | 2010-10-20 | 天津大学 | Imaginative Action EEG Identity Recognition Method Based on Fusion Feature
CN102054178A (en) * | 2011-01-20 | 2011-05-11 | 北京联合大学 | Chinese painting image identifying method based on local semantic concept
CN102053249A (en) * | 2009-10-30 | 2011-05-11 | 吴立新 | Underground space high-precision positioning method based on laser scanning and sequence encoded graphics
CN103473739A (en) * | 2013-08-15 | 2013-12-25 | 华中科技大学 | White blood cell image accurate segmentation method and system based on support vector machine
CN103810505A (en) * | 2014-02-19 | 2014-05-21 | 北京大学 | Vehicle identification method and system based on multilayer descriptors
CN103971124A (en) * | 2014-05-04 | 2014-08-06 | 杭州电子科技大学 | Multi-class motor imagery brain electrical signal classification method based on phase synchronization
CN104217444A (en) * | 2013-06-03 | 2014-12-17 | 支付宝(中国)网络技术有限公司 | Method and apparatus for locating card areas
CN104268876A (en) * | 2014-09-26 | 2015-01-07 | 大连理工大学 | Camera calibration method based on partitioning
CN105404887A (en) * | 2015-07-05 | 2016-03-16 | 中国计量学院 | White blood count five-classification method based on random forest
CN106569244A (en) * | 2016-11-04 | 2017-04-19 | 杭州联络互动信息科技股份有限公司 | Vehicle positioning method based on intelligent equipment and apparatus thereof
CN106896353A (en) * | 2017-03-21 | 2017-06-27 | 同济大学 | A kind of unmanned vehicle crossing detection method based on three-dimensional laser radar
CN106908775A (en) * | 2017-03-08 | 2017-06-30 | 同济大学 | A kind of unmanned vehicle real-time location method based on laser reflection intensity
CN106960179A (en) * | 2017-02-24 | 2017-07-18 | 北京交通大学 | Rail line Environmental security intelligent monitoring method and device
CN107533630A (en) * | 2015-01-20 | 2018-01-02 | 索菲斯研究股份有限公司 | For the real time machine vision of remote sense and wagon control and put cloud analysis

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6879341B1 (en) * | 1997-07-15 | 2005-04-12 | Silverbrook Research Pty Ltd | Digital camera system containing a VLIW vector processor
US8620026B2 (en) * | 2011-04-13 | 2013-12-31 | International Business Machines Corporation | Video-based detection of multiple object types under varying poses
US20170024412A1 (en) * | 2015-07-17 | 2017-01-26 | Environmental Systems Research Institute (ESRI) | Geo-event processor
CN107451574B (en) * | 2017-08-09 | 2020-03-17 | 安徽大学 | Motion estimation method based on Haar-like visual feature perception


Also Published As

Publication number | Publication date
CN108416808A (en) | 2018-08-17

Similar Documents

Publication | Publication Date | Title
CN108416808B (en) | Method and device for vehicle relocation
CN113034566B (en) | High-precision map construction method and device, electronic equipment and storage medium
US11094112B2 (en) | Intelligent capturing of a dynamic physical environment
US11501104B2 (en) | Method, apparatus, and system for providing image labeling for cross view alignment
CN105512646B (en) | A kind of data processing method, device and terminal
EP3644013B1 (en) | Method, apparatus, and system for location correction based on feature point correspondence
KR102218881B1 (en) | Method and system for determining position of vehicle
CN109141444B (en) | Positioning method, positioning device, storage medium and mobile equipment
WO2015096717A1 (en) | Positioning method and device
CN105807296B (en) | A kind of vehicle positioning method, device and equipment
CN108428254A (en) | The construction method and device of three-dimensional map
US10152635B2 (en) | Unsupervised online learning of overhanging structure detector for map generation
US10515293B2 (en) | Method, apparatus, and system for providing skip areas for machine learning
CN111754388B (en) | Picture construction method and vehicle-mounted terminal
CN114509065B (en) | Map construction method, system, vehicle terminal, server and storage medium
JP2022542082A (en) | Pose identification method, pose identification device, computer readable storage medium, computer equipment and computer program
US10949707B2 (en) | Method, apparatus, and system for generating feature correspondence from camera geometry
WO2020156923A2 (en) | Map and method for creating a map
JP2020518917A (en) | Method and apparatus for generating a digital map model
CN110827340B (en) | Map updating method, device and storage medium
JP5435294B2 (en) | Image processing apparatus and image processing program
CN115979278B (en) | Method, device, equipment and medium for positioning a car
CN114120701B (en) | Parking positioning method and device
Lee et al. | Semi-automatic framework for traffic landmark annotation
US20240013554A1 (en) | Method, apparatus, and system for providing machine learning-based registration of imagery with different perspectives

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
CP03 | Change of name, title or address

Address after: 200000 Shanghai City Xuhui District Longyao Road No. 18 10th Floor 1001 Room
Patentee after: Zebra Network Technology Co.,Ltd.
Country or region after: China
Address before: Building D1, 2nd Floor, No. 55 Huaihai West Road, Xuhui District, Shanghai
Patentee before: ZEBRED NETWORK TECHNOLOGY Co.,Ltd.
Country or region before: China
