CN114111812B - Method and system for generating and using positioning reference data - Google Patents

Method and system for generating and using positioning reference data

Info

Publication number
CN114111812B
Authority
CN
China
Prior art keywords
data
vehicle
depth
pixel
junction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111620810.3A
Other languages
Chinese (zh)
Other versions
CN114111812A (en)
Inventor
克日什托夫·库德因斯基
克日什托夫·米克萨
拉法尔·扬·格利什琴斯基
布拉泽伊·库比亚克
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TomTom Global Content BV
Original Assignee
TomTom Global Content BV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TomTom Global Content BV
Publication of CN114111812A
Application granted
Publication of CN114111812B
Legal status: Active
Anticipated expiration

Abstract


The present application relates to methods and systems for generating and using positioning reference data. The invention discloses methods and systems for improving positioning accuracy relative to a digital map, which are preferably used for highly and fully automated driving applications and which may use positioning reference data associated with the digital map. The invention further extends to methods and systems for generating positioning reference data associated with a digital map.

Description

Method and system for generating and using positioning reference data
Information about the divisional application
This application is a divisional application. The parent application is the invention patent application filed on 3 August 2016 under application number 201680044930.4 and entitled "Method and system for generating and using positioning reference data".
Technical Field
The present invention relates, in certain aspects and embodiments, to methods and systems for improving positioning accuracy relative to digital maps, which are desirable for highly and fully automated driving applications. Such methods and systems may use positioning reference data associated with a digital map. In a further aspect, the invention relates to the generation of positioning reference data associated with a digital map, including the format of the reference data and the use of the reference data. For example, embodiments of the present invention relate to using the reference data to accurately locate a vehicle on a digital map by comparison with data sensed from the vehicle. Other embodiments relate to using the reference data for other purposes, not necessarily in techniques that also use data sensed from a vehicle. For example, further embodiments relate to using the generated reference data to reconstruct a view from a camera associated with the vehicle.
Background
In recent years, it has become common for vehicles to be equipped with navigation devices, which may be in the form of Portable Navigation Devices (PNDs) removably positioned within the vehicle or in the form of systems integrated into the vehicle. These navigation devices comprise means for determining the current position of the device, typically a Global Navigation Satellite System (GNSS) receiver, such as GPS or GLONASS. However, it should be appreciated that other means may be used, such as using a mobile telecommunications network, surface beacons or the like.
The navigation device may also access a digital map representing the navigable network on which the vehicle is traveling. A digital map (or sometimes called a mathematical graph) is in its simplest form effectively a database containing data representing nodes (most commonly representing road intersections) and lines between those nodes representing the roads between those intersections. In a more detailed digital map, a line may be divided into road segments defined by a start node and an end node. These nodes may be "real" where they represent road intersections at which a minimum of three lines or segments intersect, or they may be "artificial" where they are provided as anchor points for segments that are not bounded at one or both ends by real nodes, in order to provide, inter alia, shape information for a particular segment or a means of identifying where certain characteristics of the road (e.g., speed limits) change along the road. In virtually all modern digital maps, nodes and road segments are further defined by various attributes, which are also represented by data in the database. For example, each node will typically have geographic coordinates defining its real-world location, e.g., latitude and longitude. A node will typically also have associated data indicating whether it is possible to move from one road to another at the junction, while a road segment will also have associated properties such as maximum permitted speed, lane size, number of lanes, whether there is a divider in between, etc. For the purposes of this disclosure, this form of digital map is referred to as a "standard map".
The navigation device is arranged to be able to perform a plurality of tasks, such as guidance in respect of a determined route, using the current location of the device and a standard map, and to provide traffic and travel information relative to the current location or predicted future location based on the determined route.
However, it has been recognized that the data contained within a standard map is insufficient for various next-generation applications, such as highly automated driving, in which the vehicle is able to automatically control, e.g., acceleration, braking and steering, and even fully automated "unmanned" vehicles operating without input from the driver. For such applications, a more accurate digital map is needed. This more detailed digital map typically includes a three-dimensional vector model in which each lane of a road is represented separately along with connectivity data to other lanes. For the purposes of this disclosure, this form of digital map will be referred to as a "planning map" or "High Definition (HD) map".
A representation of a portion of a planning map is shown in fig. 1, where each line represents the centerline of a lane. Fig. 2 shows another exemplary portion of the planning map, but this time overlaid on an image of the road network. The data within these maps is typically accurate to within a meter, or even better, and can be collected using a variety of techniques.
One exemplary technique for collecting data to construct such planning maps is to use a mobile mapping system, an example of which is depicted in FIG. 3. The mobile mapping system 2 includes a survey vehicle 4, a digital camera 40 mounted on top 8 of the vehicle 4, and a laser scanner 6. The survey vehicle 4 further includes a processor 10, a memory 12, and a transceiver 14. In addition, the survey vehicle 4 includes an absolute positioning device 20 (e.g., a GNSS receiver) and a relative positioning device 22 including an Inertial Measurement Unit (IMU) and a Distance Measurement Instrument (DMI). The absolute positioning device 20 provides the geographic coordinates of the vehicle and the relative positioning device 22 is used to improve the accuracy of the coordinates measured by the absolute positioning device 20 (and to replace the absolute positioning device in those cases where signals from navigation satellites cannot be received). The laser scanner 6, camera 40, memory 12, transceiver 14, absolute positioning device 20, and relative positioning device 22 are all configured for communication with the processor 10 (as indicated by line 24). The laser scanner 6 is configured to scan the laser beam in a 3D manner throughout the environment and create a cloud of points representing the environment, each point indicating the location of the surface of the object from which the laser beam is reflected. The laser scanner 6 is also configured as a time-of-flight laser rangefinder for measuring the distance to each incident position of the laser beam on the object surface.
In use, as shown in fig. 4, the survey vehicle 4 travels along a roadway 30, the roadway 30 comprising a surface 32 having road markings 34 painted thereon. The processor 10 determines the position and orientation of the vehicle 4 at any instant in time from the position and orientation data measured using the absolute positioning device 20 and the relative positioning device 22, and stores the data in the memory 12 with an appropriate time stamp. In addition, the camera 40 repeatedly captures images of the road surface 32 to provide a plurality of road surface images, and the processor 10 adds a time stamp to each image and stores the image in the memory 12. The laser scanner 6 also repeatedly scans the surface 32 to provide at least a plurality of measured distance values, and the processor adds a time stamp to each distance value and stores it in the memory 12. Examples of data obtained from the laser scanner 6 are shown in figs. 5 and 6. Fig. 5 shows a 3D view and fig. 6 shows a side view projection, the colors in each picture representing the distance to the road. All data obtained from these mobile mapping vehicles may be analyzed and used to create a planning map of the portion of the navigable (or road) network traveled by the vehicles.
The applicant has realized that in order to use such planning maps for highly and fully automated driving applications, it is necessary to know the position of the vehicle relative to the planning map with high accuracy. Conventional techniques of determining the current location of a device using navigation satellites or terrestrial beacons provide the absolute location of the device with an accuracy of about 5 to 10 meters, which is then matched to the corresponding location on the digital map. While this level of precision is adequate for most conventional applications, it is not accurate enough for next generation applications where position relative to digital maps is required to be at sub-meter precision, even when traveling at high speeds on a road network. Thus, there is a need for improved positioning methods.
Applicants have also recognized a need for improved methods of generating positioning reference data associated with digital maps, for example, for providing "planning maps" that can be used in determining the position of a vehicle relative to a map, among other contexts.
Disclosure of Invention
According to a first aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, and
The generated positioning reference data is associated with the digital map data.
It should be appreciated that the digital map (in this and any other aspects or embodiments of the invention) includes data representing navigable elements of a navigable network, such as roads of a road network.
According to a first aspect of the invention, positioning reference data associated with one or more navigable elements of a navigable network represented by a digital map is generated. This data may be generated for at least part and preferably all of the navigable elements represented by the map. The generated data provides a compressed representation of the environment surrounding the navigable element. This is achieved using at least one depth map indicating an environment surrounding the element projected onto a reference plane defined by a reference line, which in turn is defined relative to the navigable element. Each pixel of the depth map is associated with a location in the reference plane and includes a depth channel representing a distance along a predetermined direction from the location of the pixel in the reference plane to a surface of an object in the environment.
Various features of the at least one depth map of the positioning reference data will now be described. It should be appreciated that such features may alternatively or additionally be applied to at least one depth map of real-time scan data used in certain further aspects or embodiments of the invention, provided that they are not mutually exclusive.
The reference line associated with the navigable element and used to define the reference plane may be set relative to the navigable element in any manner. The reference line is defined by a point or points associated with the navigable element. The reference line may have a predetermined orientation relative to the navigable element. In a preferred embodiment, the reference line is parallel to the navigable element. This may be suitable for providing positioning reference data (and/or real-time scan data) relating to the lateral environment on one or more sides of the navigable element. The reference line may be linear or non-linear, e.g. depending on whether the navigable element is straight. The reference line may include straight and non-linear, e.g. curved, portions, for example while remaining parallel to the navigable element. It should be appreciated that in some further embodiments the reference line may not be parallel to the navigable element. For example, as described below, the reference line may be defined by a radius centered on a point associated with the navigable element (e.g., a point on the navigable element). The reference line may then be circular. This may provide a 360 degree representation of the environment around a junction.
The reference line is preferably a longitudinal reference line and may be, for example, an edge or boundary of a navigable element or its lane, or a centerline of a navigable element. The positioning reference data (and/or real-time scan data) will then provide a representation of the environment on one or more sides of the element. The reference line may be located on the element.
In an embodiment, the reference line may be linear even when the navigable element is curved, since the reference line of the navigable element (e.g., the edge or centerline of the navigable element) and the associated depth information may undergo a mapping onto a linear reference line. This mapping or transformation is described in more detail in WO2009/045096A1, which is incorporated herein by reference in its entirety.
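As a minimal illustration of the idea of "straightening" a curved reference line, the following Python sketch assigns each vertex of a polyline its cumulative arc length, i.e. its longitudinal coordinate on a linear reference line; this is a simplified illustration under stated assumptions, not a reproduction of the full transformation described in WO2009/045096A1.

```python
import numpy as np

def linearise_reference_line(xy):
    """Give each vertex of a curved reference line its longitudinal coordinate.

    xy: (N, 2) polyline of the reference line (e.g. a road centerline).
    Returns the cumulative arc length at each vertex, i.e. the position of
    that vertex on the 'straightened' linear reference line.
    """
    xy = np.asarray(xy, dtype=float)
    deltas = np.diff(xy, axis=0)
    seg_len = np.hypot(deltas[:, 0], deltas[:, 1])
    return np.concatenate(([0.0], np.cumsum(seg_len)))
```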
The reference plane defined by the reference line is preferably oriented perpendicular to the surface of the navigable element. As used herein, a reference plane refers to a 2-dimensional surface, which may be curved or non-curved.
Where the reference line is a longitudinal reference line parallel to the navigable element, the depth channel of each pixel preferably represents a lateral distance to the surface of the object in the environment.
Each depth map may be in the form of a raster image. It should be appreciated that each depth map represents, for a plurality of longitudinal positions and elevations (i.e., the locations corresponding to each pixel associated with the reference plane), the distance along a predetermined direction from the reference plane to the surface of an object in the environment. The depth map includes a plurality of pixels. Each pixel of the depth map is associated with a particular longitudinal position and elevation in the depth map (e.g., raster image).
In some preferred embodiments, the reference plane is defined by a longitudinal reference line parallel to the navigable element, and the reference plane is oriented perpendicular to the surface of the navigable element. Each pixel then includes a depth channel representing a lateral distance to a surface of an object in the environment.
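By way of illustration only, the following Python sketch shows how such a side depth map might be rasterised from road-aligned scan points (longitudinal position along the reference line, lateral distance from the reference plane, height above the road surface). The coordinate convention, the resolutions and the 50 m default depth are assumptions made for the example, not prescribed by the described method.

```python
import numpy as np

def build_side_depth_map(points, s_range, z_range, s_res=0.5, z_res=0.5, max_depth=50.0):
    """Project road-aligned scan points onto the reference plane.

    points: (N, 3) array where column 0 is the longitudinal position along
    the reference line, column 1 the lateral distance from the reference
    plane, and column 2 the height above the road surface.  Returns a raster
    whose depth channel holds, per pixel, the smallest lateral distance seen,
    i.e. the nearest object surface; max_depth is used where nothing was seen.
    """
    points = np.asarray(points, dtype=float)
    n_cols = int(np.ceil((s_range[1] - s_range[0]) / s_res))  # longitudinal axis
    n_rows = int(np.ceil((z_range[1] - z_range[0]) / z_res))  # vertical axis
    depth = np.full((n_rows, n_cols), max_depth, dtype=np.float32)

    cols = ((points[:, 0] - s_range[0]) / s_res).astype(int)
    rows = ((points[:, 2] - z_range[0]) / z_res).astype(int)
    lateral = points[:, 1]

    inside = (cols >= 0) & (cols < n_cols) & (rows >= 0) & (rows < n_rows)
    for r, c, d in zip(rows[inside], cols[inside], lateral[inside]):
        if 0.0 <= d < depth[r, c]:        # keep the nearest surface, not an average
            depth[r, c] = d
    return depth
```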
In a preferred embodiment, at least one depth map may have a fixed longitudinal resolution and a variable vertical and/or depth resolution.
According to a second aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane, the reference plane being defined by a longitudinal reference line oriented parallel to the navigable element and perpendicular to a surface of the navigable element, each pixel of the at least one depth map being associated with a position in the reference plane associated with the navigable element and the pixel including a depth channel representing a lateral distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment, preferably wherein the at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution, and
The generated positioning reference data is associated with the digital map data.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
Regardless of the orientation of the reference line, the reference plane, and the line along which the environment is projected onto the reference plane, it is advantageous in accordance with the present invention in its various aspects and embodiments for at least one depth map to have a fixed longitudinal resolution and a variable vertical and/or depth resolution. The at least one depth map of the positioning reference data (and/or the real-time scan data) preferably has a fixed longitudinal resolution and a variable vertical and/or depth resolution. The variable vertical and/or depth resolution is preferably non-linear. The higher resolution may be used for portions of the depth map (e.g., raster image) that are closer to the ground and closer to the navigable element (and thus to the vehicle), while a lower resolution is used for portions that are higher above the ground and further from the navigable element (and thus further from the vehicle). This maximizes the information density at the heights and depths that matter most to detection by the vehicle sensors.
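As a minimal sketch of what a variable vertical resolution could look like, the following assumes a simple power-law spacing of raster rows; the particular mapping, maximum height and exponent are illustrative assumptions only.

```python
import numpy as np

def row_heights(n_rows, max_height=10.0, exponent=2.0):
    """Return the height represented by each raster row, finest near the ground.

    Row 0 sits at the road surface; spacing between consecutive rows grows
    with height, so more of the fixed number of rows is spent close to the
    ground, where vehicle sensors resolve the most detail.
    """
    t = np.linspace(0.0, 1.0, n_rows)
    return max_height * t ** exponent  # non-linear: dense near 0, sparse near the top

# e.g. 32 rows spanning 0-10 m: the lowest rows are centimetres apart,
# the highest rows more than half a metre apart.
heights = row_heights(32)
```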
Regardless of the orientation of the reference lines and planes and the resolution of the depth map along the various directions, the projection of the environment onto the reference plane is along a predetermined direction, which may be selected as desired. In some embodiments, the projection is an orthogonal projection. In these embodiments, the depth channel of each pixel represents a distance from the associated location of the pixel in the reference plane to the surface of the object in the environment along a direction perpendicular to the reference plane. Thus, in some embodiments in which the distance represented by the depth channel is a lateral distance, the lateral distance is along a direction perpendicular to the reference plane (although the use of non-orthogonal projections is not limited to cases in which the depth channel relates to a lateral distance). The use of an orthogonal projection may be advantageous in some contexts, as it results in any height information being independent of the distance from the reference line (and thus of the distance from the reference plane).
In other embodiments, it has been found to be potentially advantageous to use non-orthogonal projections. Thus, in some embodiments of the invention in any of its aspects, unless mutually exclusive, the depth channel of each pixel (whether or not the predetermined distance is a lateral distance) represents the distance from the associated location of the pixel in the reference plane to the surface of an object in the environment in a direction that is not perpendicular to the reference plane. The use of non-orthogonal projections has the advantage that information about surfaces oriented perpendicular to the navigable elements (i.e. where the reference line is parallel to the elements) can be saved. This may be accomplished without providing additional data channels associated with the pixels. Thus, information about objects in the vicinity of the navigable element may be captured more efficiently and in more detail without increasing storage capacity. The predetermined direction may be along any desired direction relative to the reference plane, for example at 45 degrees.
The use of non-orthogonal projections has also been found to be particularly useful in preserving a greater amount of information about the surface of an object detectable by a camera or cameras of a vehicle in dark conditions, and thus in connection with some aspects and embodiments of the invention in which a reference image or point cloud is compared to an image or point cloud obtained based on real-time data sensed by a camera of a vehicle.
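The geometry of such a non-orthogonal projection can be sketched as follows, assuming the projection direction is tilted within the horizontal plane (i.e. partly along the road) by 45 degrees; both the tilt angle and its direction are assumptions made for illustration.

```python
import numpy as np

def pixel_for_point(s, lateral, z, angle_deg=45.0):
    """Locate the pixel that a road-aligned point falls on under a
    non-orthogonal projection.

    The projection ray leaves the reference plane at angle_deg to the plane
    normal, tilted within the horizontal plane, so surfaces facing along the
    navigable element are also sampled.
    """
    theta = np.radians(angle_deg)
    s_pixel = s - lateral * np.tan(theta)   # pixel column shifted along the road
    depth = lateral / np.cos(theta)         # distance measured along the slanted ray
    return s_pixel, z, depth

# An orthogonal projection is the angle_deg = 0 special case:
# pixel_for_point(s, lateral, z, angle_deg=0.0) gives (s, z, lateral).
```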
According to another aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane, the reference plane being defined by a reference line parallel to the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, wherein the predetermined direction is non-perpendicular to the reference plane, and
The generated positioning reference data is associated with digital map data indicative of the navigable elements.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to any aspect or embodiment of the invention, the positioning reference data (and/or the real-time scan data) is based on scan data obtained by scanning the environment surrounding the navigable element using one or more sensors. The one or more scanners may include one or more of a laser scanner, a radar scanner, and a camera, such as a single camera or a pair of stereo cameras.
Preferably, the distance to the surface of the object represented by the depth channel of each pixel of the positioning reference data (and/or the real-time scan data) is determined based on a set of multiple sensed data points, each indicative of a distance from the location of the pixel to the surface of the object along a predetermined direction. Data points may be obtained when a scan of the environment surrounding the navigable element is performed. The set of sensed data points may be obtained from one or more types of sensors. However, in some preferred embodiments, the sensed data points comprise or include a set of data points sensed by a laser scanner. In other words, the sensed data points comprise or include laser measurements.
It has been found that using an average of multiple sensed data points to determine the distance value for the depth channel of a given pixel may lead to erroneous results. This is because at least some of the sensed data points that indicate the position of an object surface relative to the reference plane along the applicable predetermined direction, and that are considered to map to a particular pixel, may relate to the surfaces of different objects. It should be appreciated that, due to the compressed data format, an extended region of the environment may map to the area of a single pixel in the reference plane. A considerable amount of sensed data, i.e. a number of sensed data points, may thus be applicable to that pixel. Within that region there may be objects positioned at different depths relative to the reference plane, including objects that overlap one another in any dimension by only a short distance, such as trees, lampposts, walls, and moving objects. The depth values to the object surfaces represented by the sensed data points for a particular pixel may thus exhibit considerable variation.
According to any aspect or embodiment of the present invention in which the distance to the surface of the object represented by the depth channel of each pixel of the positioning reference data (and/or the real-time scan data) is determined based on a set of multiple sensed data points, each sensed data point indicating a sensed distance from the position of the pixel to the surface of an object along the predetermined direction, preferably the distance represented by the depth channel of the pixel is not based on an average of the set of multiple sensed data points. In a preferred embodiment, the distance represented by the depth channel of the pixel is the closest sensed distance to an object surface from among the set of sensed data points, or the closest mode value obtained using a distribution of the sensed depth values. It will be appreciated that the closest sensed value or values are most likely to reflect the depth from the pixel to the object surface accurately. For example, consider the case where a tree is positioned between a building and a road. Different sensed depth values for a particular pixel may result from detection of the building or of the tree. If all of these sensed values are taken into account to provide an average depth value, the average will indicate that the depth measured from the pixel to the object surface lies somewhere between the depth to the tree and the depth to the building. This gives a misleading depth value for the pixel, which can cause problems in correlating real-time vehicle sensor data with the reference data, and can potentially be dangerous, as it is very important to know with certainty how close an object is to the road. In contrast, the closest depth value or closest mode value is likely to relate to the tree, not the building, and so reflects the true position of the nearest object.
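A minimal sketch of how a closest-mode depth value might be selected for a pixel is given below; the histogram bin size and the population threshold are illustrative assumptions rather than prescribed values.

```python
import numpy as np

def pixel_depth(sensed_distances, bin_size=0.2):
    """Choose the depth value for one pixel from its many sensed distances.

    Rather than averaging (which, for readings split between a tree and a
    building behind it, would land somewhere in between), build a histogram
    of the sensed distances and return the centre of the closest
    well-populated bin, i.e. the closest mode.
    """
    d = np.asarray(sensed_distances, dtype=float)
    edges = np.arange(d.min(), d.max() + bin_size, bin_size)
    if len(edges) < 2:
        return float(d.min())
    counts, edges = np.histogram(d, bins=edges)
    threshold = max(1, int(0.1 * counts.max()))         # ignore sparse outlier bins
    nearest_bin = int(np.argmax(counts >= threshold))   # first (closest) qualifying bin
    return float(0.5 * (edges[nearest_bin] + edges[nearest_bin + 1]))

# Readings from a tree at ~4 m and a wall at ~12 m:
# pixel_depth([4.0, 4.1, 3.9, 12.0, 12.1, 11.9, 12.2]) -> ~4.0, not the ~8.6 average.
```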
According to another aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane, the reference plane being defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment along a predetermined direction, wherein the distance to the surface of the object represented by the depth channel of each pixel is determined based on a set of a plurality of sensed data points, each sensed data point indicating a sensed distance from the location of the pixel to the surface of the object along the predetermined direction, and wherein the distance to the surface of the object represented by the depth channel of the pixel is a closest sensed distance from among the set of sensed data points, or a closest mode of the sensed distances, and
The generated positioning reference data is associated with digital map data.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to any aspect or embodiment of the invention, each pixel (in the positioning reference data and/or the real-time scan data) includes a depth channel representing a distance to a surface of an object in the environment. In a preferred embodiment, each pixel includes one or more additional channels. This may provide a depth map with one or more additional information layers. Each channel preferably indicates a value of a property obtained based on one or more sensed data points, and preferably based on a set of multiple sensed data points. The sensed data may be obtained from one or more of the sensors described earlier. In a preferred embodiment, the or each pixel includes at least one channel indicative of a value of a given type of sensed reflectance. Each pixel may include one or more of a channel indicating a value of sensed laser reflectivity and a channel indicating a value of sensed radar reflectivity. The sensed reflectance value of the pixel indicated by the channel relates to the sensed reflectance in the applicable portion of the environment represented by the pixel. The sensed reflectance value of the pixel preferably indicates the sensed reflectance at a distance from the reference plane that corresponds to the depth of the pixel from the reference plane indicated by the depth channel of the pixel, i.e. the sensed reflectance around the depth value of the pixel. This may then be taken to indicate the relevant reflectivity properties of the object present at that depth. Preferably, the sensed reflectivity is an average reflectivity. The sensed reflectance data may be based on reflectances associated with a larger set of data points than the data point(s) used to determine the depth value. For example, the reflectivities associated with all the sensed depth values applicable to the pixel (in addition to those closest depth values preferably used to determine the depth channel) may additionally be considered.
In this way, a multi-channel depth map, such as a raster image, is provided. This format may enable more efficient compression of larger amounts of data related to the environment surrounding the navigable elements, facilitating storage and processing, and providing the ability to implement improved correlation with sensing of real-time data by the vehicle under different conditions, and the vehicle need not necessarily have the same type of sensor as used in generating the reference positioning data. As will be described in more detail below, this data may also help reconstruct data sensed by the vehicle, or images of the surrounding of the navigable element that would be obtained using the camera of the vehicle under certain conditions (e.g., at night). For example, radar or laser reflectivity may enable identification of those objects that will be visible under certain conditions (e.g., at night).
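One possible way such a multi-channel raster could be laid out is sketched below; the channel set, names and 8-bit quantisation are assumptions made for the example only, not a prescribed storage format.

```python
import numpy as np

# Each pixel carries the depth channel plus additional sensed-property channels.
PIXEL_DTYPE = np.dtype([
    ("depth",      np.uint8),  # quantised lateral distance to the nearest surface
    ("laser_refl", np.uint8),  # mean laser reflectivity around that depth
    ("radar_refl", np.uint8),  # mean radar reflectivity around that depth
])

def empty_reference_raster(n_rows, n_cols):
    """Allocate an all-zero multi-channel raster for one stretch of road."""
    return np.zeros((n_rows, n_cols), dtype=PIXEL_DTYPE)

raster = empty_reference_raster(64, 200)
raster["depth"][10, 42] = 37        # nearest surface at this pixel (quantised)
raster["laser_refl"][10, 42] = 180  # strongly reflective object, e.g. a road sign
```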
According to another aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, wherein each pixel further includes one or more of a channel indicative of a value of sensed laser reflectivity and a channel indicative of a value of sensed radar reflectivity, and
The generated positioning reference data is associated with digital map data.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
Other channels associated with the pixels may alternatively or additionally be used in accordance with any aspect or embodiment of the present invention. For example, the additional channel(s) may indicate one or more of: a thickness of the object near the distance, from the position of the pixel in the reference plane along the predetermined direction, indicated by the depth channel of the pixel; a density of reflected data points near that distance; a color near that distance; and a texture near that distance. Each channel may include a value indicative of the relevant property. The values are based on available sensor data, which may optionally be obtained from one or more different types of sensors, e.g., cameras for color or texture data. Each value may be based on a plurality of sensed data points and may be an average over the plurality of sensed data points.
It should be appreciated that while the depth channel indicates the distance of the object from a reference plane at the location of the pixel along a predetermined direction, other channels may indicate other properties of the object, such as the reflectivity of the object, or its color, texture, etc. This may be useful in reconstructing scan data that may be expected to have been sensed by the vehicle and/or camera images taken by the vehicle. Data indicative of the thickness of the object may be used to recover information related to the surface of the object perpendicular to the navigable element using orthogonal projection of the environment onto a reference plane. This may provide an alternative to the embodiments described above for determining information related to such surfaces of objects, which use non-orthogonal projections.
In many embodiments, the positioning reference data is used to provide a compressed representation of the environment on one or more sides of the navigable element, i.e., to provide a side depth map. The reference line may then be parallel to the navigable element, with the depth channel of each pixel indicating the lateral distance of the object surface from the reference plane. However, the use of depth maps may also be helpful in other contexts. The Applicant has appreciated that it would be useful to provide a circular depth map in the region of a junction (e.g., intersection). This may provide an improved ability to position the vehicle relative to the junction (e.g., intersection), or, if desired, to reconstruct data indicative of the environment surrounding the junction (e.g., intersection). Preferably a 360 degree representation of the environment around the junction is provided, although it will be appreciated that the depth map need not extend around a complete circle and may therefore extend around less than 360 degrees. In some embodiments, the reference plane is defined by a reference line defined by a radius centered on a reference point associated with the navigable element. In these embodiments, the reference line is curved, and preferably circular. The reference point is preferably located on a navigable element at the junction. For example, the reference point may be located at the center of the junction (e.g., intersection). The radius defining the reference line may be selected as desired, e.g., depending on the size of the junction.
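A circular (360 degree) depth map of this kind might be rasterised as sketched below, assuming scan points in a local Cartesian frame and a chosen reference radius; the radius, bearing and height bin sizes are illustrative assumptions.

```python
import numpy as np

def build_junction_depth_map(points, centre, radius=20.0,
                             n_bearings=360, z_range=(0.0, 10.0), z_res=0.5):
    """360-degree depth map around a junction reference point.

    points: (N, 3) array of (x, y, z) in a local frame; centre: (x, y) of the
    reference point at the junction.  Columns are bearing bins around the
    circular reference line of the given radius, rows are height bins, and
    the depth channel stores the nearest radial distance measured outward
    from that reference line.
    """
    points = np.asarray(points, dtype=float)
    n_rows = int(np.ceil((z_range[1] - z_range[0]) / z_res))
    depth = np.full((n_rows, n_bearings), np.inf, dtype=np.float32)

    dx = points[:, 0] - centre[0]
    dy = points[:, 1] - centre[1]
    bearing = np.degrees(np.arctan2(dy, dx)) % 360.0
    cols = (bearing / 360.0 * n_bearings).astype(int) % n_bearings
    radial = np.hypot(dx, dy) - radius              # distance beyond the reference circle
    rows = ((points[:, 2] - z_range[0]) / z_res).astype(int)

    ok = (radial >= 0) & (rows >= 0) & (rows < n_rows)
    for r, c, d in zip(rows[ok], cols[ok], radial[ok]):
        depth[r, c] = min(depth[r, c], d)           # keep the nearest surface per pixel
    return depth
```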
According to another aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map representing elements of a navigable network, the positioning reference data providing a compressed representation of the environment surrounding at least one junction of the navigable network represented by the digital map, the method comprising, for at least one junction represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the junction projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, and
The generated positioning reference data is associated with digital map data indicative of the junction.
As described with respect to the earlier embodiments, the junction may be an intersection. The reference point may be located at the center of the junction. The reference point may be associated with a node of the digital map representing the junction, or with a navigable element at that node. These additional aspects or embodiments of the invention may be used in conjunction with a side depth map representing the environment at the sides of navigable elements away from the junction.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to any aspect or embodiment of the invention relating to the generation of positioning reference data, the method may comprise associating the generated positioning reference data for a navigable element or junction with digital map data indicative of that element or junction. The method may include storing the generated positioning reference data in association with the digital map data, for example with the navigable elements or junctions to which it relates.
In some embodiments, the positioning reference data may include a reference scan representing, for example, a lateral environment to the left of the navigable element and to the right of the navigable element. The positioning reference data for each side of the navigable element may be stored in the combined dataset. Thus, data from multiple portions of the navigable network may be stored together in an efficient data format. The data stored in the combined dataset may be compressed, allowing more portions of the data of the navigable network to be stored within the same storage capacity. Data compression will also allow for the use of reduced network bandwidth if the reference scan data is transmitted to the vehicle over a wireless network connection. However, it should be appreciated that the positioning reference data need not necessarily relate to the lateral environment on either side of the navigable element. For example, as discussed in certain embodiments above, the reference data may relate to the environment surrounding the junction.
The invention also extends to a data product storing positioning reference data generated in accordance with any aspect or embodiment of the invention.
The data products in any of these further aspects or embodiments of the invention may be in any suitable form. In some embodiments, the data product may be stored on a computer readable medium. The computer readable medium may be, for example, a floppy disk, CD ROM, RAM, flash memory, or a hard disk. The invention extends to a computer readable medium comprising a data product according to any aspect or embodiment of the invention.
Positioning reference data generated in accordance with any aspect or embodiment of the invention relating to the generation of this data may be used in a variety of ways. In further aspects relating to the use of the data, the step of obtaining the reference data may extend to generating the data, or may simply comprise retrieving the data. The reference data is preferably generated by a server. The steps of using the data are preferably performed by a device that may be associated with a vehicle, such as a navigation device or similar device.
In some preferred embodiments, the data is used to determine the position of a vehicle relative to the digital map. The digital map thus includes data representing the navigable elements along which the vehicle travels. The method may include obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network; determining real-time scan data by scanning the environment surrounding the vehicle using at least one sensor, wherein the real-time scan data includes at least one depth map indicative of the environment surrounding the vehicle, each pixel in the at least one depth map being associated with a position in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance, determined using the at least one sensor, from the associated position of the pixel in the reference plane to a surface of an object in the environment; calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps; and adjusting the considered current position using the determined alignment offset to determine the position of the vehicle relative to the digital map. It should be appreciated that the obtained positioning reference data relates to the navigable element along which the vehicle travels. The depth map of the positioning reference data indicative of the environment around the navigable element is thus indicative of the environment around the vehicle.
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representing navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of the environment surrounding the vehicle projected onto a reference plane defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a position in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of an environment around the vehicle, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element along which the vehicle traveled, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment determined using the at least one sensor along the predetermined direction;
Calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.
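The correlation step can be illustrated with a simple normalised cross-correlation over candidate longitudinal shifts, as in the following sketch; it assumes the reference and real-time depth maps are rasters of identical shape and resolution, and the search window of 40 pixels is an illustrative value.

```python
import numpy as np

def longitudinal_offset(reference, live, max_shift=40):
    """Estimate the longitudinal alignment offset between two depth maps.

    reference and live are 2-D rasters (rows = height, columns = longitudinal
    position), one cut from the positioning reference data at the considered
    current position, the other built from real-time sensor data.  The live
    raster is slid along the longitudinal axis and the column shift with the
    highest normalised cross-correlation is returned.
    """
    best_shift, best_score = 0, -np.inf
    n = reference.shape[1]
    for shift in range(-max_shift, max_shift + 1):
        ref = reference[:, max(0, shift):n + min(0, shift)]
        liv = live[:, max(0, -shift):n + min(0, -shift)]
        a = ref - ref.mean()
        b = liv - liv.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        score = (a * b).sum() / denom if denom > 0 else -np.inf
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift  # multiply by the longitudinal pixel size to get metres
```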
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
In further aspects and embodiments of the invention related to using positioning reference data and real-time scan data in determining the position of a vehicle, the current position of the vehicle may be a longitudinal position. The real-time scan data may be related to the lateral environment surrounding the vehicle. The depth map of the positioning reference data and/or real-time sensor data will then be defined by a reference line parallel to the navigable elements and include depth channels representing lateral distances to the surface of objects in the environment. The determined offset may then be a longitudinal offset.
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representative of a junction through which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle in the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction point, each pixel in the at least one depth map being associated with a position in the reference plane associated with the junction point through which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of the environment around the vehicle, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment, determined using the at least one sensor, along the predetermined direction;
Calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representing navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of the environment surrounding the vehicle, each pixel of the at least one depth map being associated with a position in the reference plane associated with the navigable element, the reference plane being defined by a longitudinal reference line oriented parallel to the navigable element and perpendicular to a surface of the navigable element, and each pixel including a depth channel representing a lateral distance to a surface of an object in the environment, optionally wherein the at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor;
Determining real-time scan data using the sensor data, wherein the real-time scan data comprises at least one depth map indicative of an environment surrounding the vehicle, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and each pixel including a depth channel representing a lateral distance to a surface of an object in the environment determined from the sensor data, optionally wherein the at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution;
Calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
In a further aspect of the invention relating to the use of positioning reference data, the data may be generated in accordance with any of the earlier aspects of the invention. The real-time scan data used in determining the position of the vehicle or otherwise should have a form corresponding to the positioning reference data. Thus, the determined depth map will comprise pixels having positions in a reference plane defined relative to a reference line associated with the navigable element in the same manner as the positioning reference data, so that the real time scan data and the positioning reference data are related to each other. The depth channel data of the depth map may be determined in a manner corresponding to the manner of the reference data, e.g., without using an average of the sensed data, and thus may include a closest distance from the plurality of sensed data points to the surface. The real-time scan data may include any additional channels. In case the depth map of the positioning reference data has a fixed longitudinal resolution and a variable vertical and/or depth resolution, the depth map of the real-time scan data may also have this resolution.
Thus, in accordance with these aspects or embodiments of the present invention, a method is provided for continuously determining a position of a vehicle relative to a digital map comprising data representing navigable elements (e.g., roads) of a navigable network (e.g., road network) along which the vehicle is traveling. The method includes receiving real-time scan data obtained by scanning an environment surrounding the vehicle, retrieving positioning reference data associated with the digital map for a considered current position of the vehicle relative to the digital map (e.g., wherein the positioning reference data includes a reference scan of the environment surrounding the considered current position), optionally wherein the reference scan has been obtained throughout the digital map from at least one device that has previously traveled along a route, comparing the real-time scan data to the positioning reference data to determine an offset between the real-time scan data and the positioning reference data, and adjusting the considered current position based on the offset. The position of the vehicle relative to the digital map is thus always known with high accuracy. Examples in the prior art have attempted to determine the position of a vehicle by comparing collected data with known reference data for predetermined landmarks along a route. However, landmarks may be sparsely distributed across many lines, resulting in significant estimation errors of vehicle position as the vehicle travels between landmarks. This is a problem in the case of, for example, highly automated driving systems, where such errors can lead to catastrophic consequences, such as vehicle collision accidents leading to serious injury or loss of life. The present invention solves this problem in at least some aspects by having reference scan data throughout the digital map and by scanning the environment surrounding the vehicle in real time. In this way, the present invention may allow for comparison of real-time scan data with reference data so that the position of the vehicle relative to the digital map is always known with high accuracy.
According to another aspect of the present invention there is provided a method of determining a longitudinal position of a vehicle relative to a digital map comprising data representative of navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network, wherein the positioning reference data comprises contours of objects in the environment surrounding the vehicle projected onto a reference plane defined by a longitudinal reference line oriented parallel to the navigable element and perpendicular to a surface of the navigable element;
Obtaining sensor data by scanning the environment around the vehicle using at least one sensor;
determining real-time scan data using the sensor data, wherein the real-time scan data includes contours of objects in an environment surrounding the vehicle projected onto a reference plane as determined from the sensor data;
calculating a correlation between the positioning reference data and the real-time scan data to determine a longitudinal alignment offset, and
The determined alignment offset is used to adjust the considered current position to determine the longitudinal position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
The positioning reference data may be stored in association with the digital map, for example in association with the related navigable elements, such that the contours of objects in the environment surrounding the vehicle projected onto the reference plane have already been determined. However, in other embodiments, the positioning reference data may be stored in a different format, and the stored data processed in order to determine the contours. For example, in an embodiment, as in the earlier described aspects of the disclosure, the positioning reference data includes one or more depth maps, such as raster images, each depth map representing lateral distances to surfaces in the environment for multiple longitudinal positions and elevations. The depth map may be according to any of the previous aspects and embodiments. In other words, the positioning reference data comprises at least one depth map, such as a raster image, indicative of the environment surrounding the vehicle, wherein each pixel of the at least one depth map is associated with a position in the reference plane, and each pixel includes a channel representing a lateral distance (e.g., perpendicular to the reference plane) to a surface of an object in the environment. In such embodiments, the relevant depth map, e.g., a raster image, is processed using an edge detection algorithm to generate the contours of the objects in the environment. The edge detection algorithm may use the Canny operator, the Prewitt operator, and the like. However, in a preferred embodiment, edge detection is performed using the Sobel operator. The edge detection operator may be applied in both the height (or elevation) and longitudinal domains, or in only one of the domains. For example, in a preferred embodiment, the edge detection operator is applied only in the longitudinal domain.
Similarly, the contours of objects in the environment surrounding the vehicle projected onto the reference plane can be determined directly from the sensor data obtained by the at least one sensor. Alternatively, in other embodiments, the sensor data may be used to determine one or more depth maps, such as raster images, each depth map representing lateral distances to surfaces in the environment for multiple longitudinal positions and elevations. In other words, the real-time scan data comprises at least one depth map, such as a raster image, indicative of the environment surrounding the vehicle, wherein each pixel of the at least one depth map is associated with a location in a reference plane, and each pixel includes a channel representing a lateral distance (e.g., perpendicular to the reference plane) to a surface of an object in the environment determined using the at least one sensor. The relevant depth map, e.g., a raster image, may then be processed using an edge detection algorithm, preferably the same edge detection algorithm applied to the positioning reference data, to determine the contours of the real-time scan data. The edge detection operator may be applied in both the elevation (or height) and longitudinal domains, or in only one of the domains. For example, in a preferred embodiment, the edge detection operator is applied only in the longitudinal domain.
In an embodiment, the blurring operator is applied to the contour of at least one of the positioning reference data and the real-time scan data before correlating the two sets of data. The blurring operator may be applied in both the elevation (or altitude) and longitudinal domains, or in only one of the domains. For example, in a preferred embodiment, the blurring operator is applied only in the height domain. In obtaining real-time scan data and/or positioning reference data, the blurring operator may take into account any tilt of the vehicle such that, for example, the contour is slightly shifted up or down in the elevation domain.
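Purely as an illustrative sketch, a blurring operator applied only in the height domain could take the following form; the choice of a Gaussian blur and the sigma value are assumptions of the example.

```python
import numpy as np
from scipy import ndimage

def blur_in_height_domain(contour_image: np.ndarray, sigma_px: float = 2.0) -> np.ndarray:
    """Blur an edge/contour image only along the height (row) axis, so that a
    small vertical shift of the contour caused by vehicle tilt still allows
    the reference and real-time contours to correlate."""
    return ndimage.gaussian_filter1d(contour_image, sigma=sigma_px, axis=0)
```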
According to any aspect or embodiment of the invention, the considered current (e.g., longitudinal) position of the vehicle may be obtained at least initially from an absolute positioning system, such as a satellite navigation device (e.g., GPS or GLONASS), the European Galileo positioning system, the COMPASS positioning system or IRNSS (Indian Regional Navigation Satellite System). However, it should be appreciated that other location determination means may be used, such as mobile telecommunications, surface beacons, or the like.
The digital map may include a three-dimensional vector model representing navigable elements of a navigable network (e.g., roads of a road network), with each lane of the navigable elements (e.g., roads) being represented separately. Thus, the lateral position of the vehicle on the road may be known by determining the lane in which the vehicle is traveling, for example, by image processing of a camera mounted to the vehicle. In such embodiments, the longitudinal reference line may be, for example, an edge or boundary of a lane of a navigable element or a centerline of a lane of a navigable element.
The real-time scan data may be obtained on the left side of the vehicle and on the right side of the vehicle. This helps to reduce the impact of transient features on position estimation. Such transient features may be, for example, parked vehicles, vehicles that overtake, or vehicles that travel in opposite directions on the same route. Thus, real-time scan data can record features present on both sides of the vehicle. In some embodiments, the real-time scan data may be obtained from the left side of the vehicle or the right side of the vehicle.
In embodiments in which the positioning reference data and the real-time scan data are each about the left and right sides of the vehicle, the comparison of the real-time scan data from the left side of the vehicle to the positioning reference data from the left side of the navigable element and the comparison of the real-time scan data from the right side of the vehicle to the positioning reference data from the right side of the navigable element may be a single comparison. Thus, when the scan data includes data from the left side of the navigable element and data from the right side of the navigable element, the scan data may be compared as a single data set, significantly reducing processing requirements compared to a case where the comparison for the left side of the navigable element and the comparison for the right side of the navigable element are performed separately.
Comparing the real-time scan data with the positioning reference data may comprise calculating a cross-correlation, preferably a normalized cross-correlation, between the real-time scan data and the positioning reference data, whether or not it relates to the left and right sides of the vehicle. The method may include determining a location at which the data set is most aligned. Preferably, the determined alignment offset between the depth maps is at least a longitudinal alignment offset and the position at which the data set is most aligned is a longitudinal position. The step of determining the longitudinal position at which the data set is most aligned may comprise longitudinally shifting a depth map (e.g. a raster image provided by the depth map based on real-time scan data) and a depth map (e.g. a raster image provided by the depth map based on positioning reference data) relative to each other until the depth maps are aligned. This may be performed in the image domain.
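As a simple illustration of the normalized cross-correlation, the longitudinal alignment offset could be found by shifting one raster image relative to the other and scoring each candidate shift, as in the following sketch; the maximum search range and the wrap-around behaviour of the shift are simplifications of the example.

```python
import numpy as np

def longitudinal_offset(reference: np.ndarray, realtime: np.ndarray,
                        max_shift_px: int = 50) -> int:
    """Return the longitudinal pixel shift of `realtime` relative to
    `reference` that maximises the normalised cross-correlation.

    Both inputs are (height x longitudinal) images, e.g. blurred contour
    images derived from the depth maps.
    """
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift_px, max_shift_px + 1):
        shifted = np.roll(realtime, shift, axis=1)   # wrap-around ignored in this sketch
        a = reference - reference.mean()
        b = shifted - shifted.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        score = (a * b).sum() / denom if denom > 0 else 0.0
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift
```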
The determined longitudinal alignment offset is used to adjust the current position considered to adjust the longitudinal position of the vehicle relative to the digital map.
Alternatively or preferably in addition to determining the longitudinal alignment offset between the depth maps, it is desirable to determine the lateral alignment offset between the depth maps. The determined lateral alignment offset may then be used to adjust the considered current lateral position of the vehicle and thus determine the position of the vehicle relative to the digital map. Preferably, a longitudinal alignment offset is determined, which may be implemented in any of the ways described above, and a lateral alignment offset is additionally determined. The determined lateral and longitudinal alignment offsets are then used together to adjust both the longitudinal and lateral positions of the vehicle relative to the digital map.
The method may include determining a longitudinal alignment offset between the depth maps, such as by calculating a correlation between positioning reference data and real-time scan data, and may further include determining a lateral offset between the depth maps, and adjusting the considered current position using the determined lateral and longitudinal alignment offsets to determine the position of the vehicle relative to the digital map.
The longitudinal alignment offset is preferably determined before the lateral alignment offset. According to certain embodiments described below, the lateral alignment offset may be determined based on first determining a longitudinal offset between the depth maps and longitudinally aligning the depth maps relative to one another based on the offset.
The lateral offset is preferably determined based on the most common lateral offset, i.e. the mode lateral offset, between corresponding pixels of the depth map.
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representative of navigable elements of a navigable network along which the vehicle travels, the method comprising:
obtaining positioning reference data associated with the digital map for a considered current position of a navigable element of the navigable network by the vehicle, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line associated with the navigable element, each pixel of the at least one depth map being associated with a position in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning an environment surrounding the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of the environment surrounding the vehicle, each pixel of the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment determined using the at least one sensor;
determining a longitudinal alignment offset between the positioning reference data and the depth map of the real-time scan data by calculating a correlation between the positioning reference data and the real-time scan data;
determining a lateral alignment offset between the depth maps, wherein the lateral offset is based on the most common lateral offset between corresponding pixels of the depth maps, and
The determined longitudinal and lateral alignment offsets are used to adjust the considered current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to these aspects and embodiments of the present invention in which a lateral alignment offset is determined, the most common lateral alignment offset may be determined by considering the depth channel data of corresponding pixels of the depth maps. The most common lateral alignment offset is determined based on the lateral alignment offsets determined between respective pairs of correspondingly positioned pixels of the depth maps, preferably based on the lateral alignment offset of each pair of corresponding pixels. In order to determine the lateral alignment offset between corresponding pixels of the depth maps, corresponding pairs of pixels in the depth maps must be identified. The method may include identifying corresponding pairs of pixels in the depth maps. Preferably, the longitudinal alignment offset is determined before the lateral alignment offset. The depth maps are desirably shifted relative to each other until they are longitudinally aligned to enable identification of corresponding pixels in each depth map.
Accordingly, the method may further include longitudinally aligning the depth maps relative to each other based on the determined longitudinal alignment offset. The step of longitudinally aligning the depth maps with each other may include longitudinally shifting one or both of the depth maps. Longitudinal shifting of depth maps relative to each other may be implemented in the image domain. The step of aligning the depth maps may thus comprise longitudinally shifting the raster images corresponding to each depth map relative to each other. The method may further include cropping a size of the image provided by the positioning reference data depth map to correspond to a size of the image provided by the real-time scan data depth map. This may facilitate a comparison between depth maps.
Once the corresponding pixels in the two depth maps have been identified, a lateral offset between each pair of corresponding pixels may be determined. This may be accomplished by comparing the distances from the locations of the pixels in the reference plane to the surface of the object in the environment along a predetermined direction indicated by the depth channel data associated with each pixel. As described earlier, the depth map preferably has a variable depth resolution. The lateral alignment offset between each pair of corresponding pixels may be based on the difference in distance indicated by the depth channel data of the pixels. The method may include identifying a most common lateral alignment offset between corresponding pixels of a depth map using a histogram. The histogram may indicate the frequency of occurrence of different lateral alignment offsets between corresponding pixel pairs. The histogram may indicate a probability density function of lateral alignment offset, where the pattern reflects the most likely shift.
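The use of a histogram to identify the most common (modal) lateral alignment offset could, purely by way of example, be sketched as follows; the bin width and the use of NaN to mark invalid pixels are assumptions of the sketch.

```python
import numpy as np

def modal_lateral_offset(ref_depth: np.ndarray, live_depth: np.ndarray,
                         bin_width_m: float = 0.1) -> float:
    """Estimate the lateral alignment offset as the most common (modal)
    difference between corresponding pixel depths of the longitudinally
    aligned reference and real-time depth maps.

    Depth values are assumed to be expressed in metres; pixels with no
    valid return are assumed to be encoded as NaN.
    """
    diff = (live_depth - ref_depth).ravel()
    diff = diff[np.isfinite(diff)]
    if diff.size == 0:
        return 0.0
    # Histogram of per-pixel lateral offsets; the modal bin approximates the
    # peak of the probability density function of the lateral shift.
    bins = np.arange(diff.min(), diff.max() + 2 * bin_width_m, bin_width_m)
    counts, edges = np.histogram(diff, bins=bins)
    mode_bin = int(np.argmax(counts))
    return 0.5 * (edges[mode_bin] + edges[mode_bin + 1])
```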
In some embodiments, each pixel has a color that indicates a value of a depth channel of the pixel. Thus, the comparison of the depth values of the corresponding pixels may include comparing the colors of the corresponding pixels of the depth map. The difference in color between corresponding pixels may indicate a lateral alignment offset between pixels, such as when the depth map has a fixed depth resolution.
In these embodiments, where a lateral alignment offset is determined, the current longitudinal and lateral positions of the vehicle relative to the digital map may be adjusted.
According to any aspect or embodiment of the invention in which the current position of the vehicle (whether longitudinal and/or lateral) is adjusted, the adjusted current position may be an estimate of the current position obtained in any suitable manner, such as from an absolute position determination system or other position determination system, as described above. For example, GPS or dead reckoning may be used. As should be appreciated, the absolute position is preferably matched to the digital map to determine an initial position relative to the digital map, and then longitudinal and/or lateral corrections are applied to the initial position to improve position relative to the digital map.
The inventors have recognized that while the techniques described above may be useful in adjusting the position of a vehicle relative to a digital map, they will not correct the heading of the vehicle. In a preferred embodiment, the method further comprises adjusting the perceived heading of the vehicle using the positioning reference data and the real-time scan data depth map. This further step is preferably implemented in addition to determining the longitudinal and lateral alignment offsets of the depth map according to any of the above described embodiments. In these embodiments, the perceived heading of the vehicle may be determined in any suitable manner, for example using GPS data or the like, as described with respect to determining the perceived location of the vehicle.
It has been found that when the perceived forward direction of the vehicle is incorrect, the lateral alignment offset between corresponding pixels of the depth map will vary along the depth map (i.e., along the depth map image) in the longitudinal direction. It has been found that the forward direction offset may be determined based on a function indicative of a change in lateral alignment offset between corresponding pixels of the depth map relative to a longitudinal position along the depth map. The step of determining the forward direction offset may incorporate any of the features described earlier with respect to determining the lateral alignment offset of the corresponding pixel. Thus, the method preferably first comprises shifting the depth maps relative to each other to longitudinally align the depth maps.
Accordingly, the method may further include determining a longitudinal alignment offset between the depth maps, determining a function indicative of a change in lateral alignment offset between corresponding pixels of the depth maps relative to a longitudinal position of the pixels along the depth maps, and adjusting a considered current heading of the vehicle using the determined function to determine a heading of the vehicle relative to the digital map.
The determined lateral alignment offset between corresponding pixels is, as described above, preferably based on a difference in values indicated by the depth channel data of the pixels, e.g. by referencing the color of the pixels.
In these aspects or embodiments, the determined function is indicative of a heading offset of the vehicle.
The step of determining a function indicative of a change in lateral alignment offset relative to longitudinal position may include determining an average (i.e., mean) lateral alignment offset across corresponding pixels of the depth map in each of a plurality of vertical sections of the depth map along a longitudinal direction of the depth map. The function may then be obtained based on the change in the average lateral alignment offset determined for each vertical section along the longitudinal direction of the depth map. It should be appreciated that at least some, and optionally each, of the corresponding pairs of pixels in the depth map are considered in determining the function.
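By way of illustration, the function indicative of the change in lateral alignment offset with longitudinal position could be approximated by a straight-line fit to the mean per-slice offsets, as in the following sketch; the slice width and the linear model are assumptions of the example, and the fitted slope indicates the heading (skew) offset.

```python
import numpy as np

def heading_offset_function(ref_depth: np.ndarray, live_depth: np.ndarray,
                            slice_width_px: int = 32):
    """Fit a straight line to the mean lateral offset of each vertical slice
    as a function of its longitudinal position along the depth map."""
    n_cols = ref_depth.shape[1]
    xs, ys = [], []
    for start in range(0, n_cols, slice_width_px):
        ref_slice = ref_depth[:, start:start + slice_width_px]
        live_slice = live_depth[:, start:start + slice_width_px]
        diff = (live_slice - ref_slice).ravel()
        diff = diff[np.isfinite(diff)]
        if diff.size == 0:                      # skip slices with no valid pixels
            continue
        xs.append(start + slice_width_px / 2.0)  # slice centre (longitudinal position)
        ys.append(diff.mean())                   # mean lateral offset of the slice
    slope, intercept = np.polyfit(xs, ys, 1)     # slope ~ heading (skew) offset
    return slope, intercept
```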
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representative of navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of a navigable element of the navigable network by the vehicle, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line associated with the navigable element, each pixel of the at least one depth map being associated with a position in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning an environment surrounding the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of the environment surrounding the vehicle, each pixel of the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment determined using the at least one sensor along a predetermined direction;
Determining a function indicative of a change in lateral alignment offset between corresponding pixels of the depth map of the positioning reference data and the real-time sensor data relative to a longitudinal position of the pixels along the depth map, and
The determined function is used to adjust the considered current heading of the vehicle to determine the heading of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
In these aspects and embodiments of the present invention, additional steps may be taken to improve the determined heading offset, such as by filtering out noise pixels, or weighting the average pixel depth differences within a longitudinal section of the depth map or image by referring to the number of significant pixels considered in that section.
As mentioned above, the depth map of the positioning reference data, and thus the depth map of the real-time data, may be transformed so as to always be associated with a linear reference line. Due to this linearization of the depth map, when the navigable element is curved, it has been found that the determined longitudinal, lateral and/or heading corrections cannot be applied directly. The Applicant has identified that a computationally efficient method of adjusting or correcting the current position of a vehicle relative to a digital map involves applying each of the corrections in a series of incremental, independent linear update steps.
Thus, in a preferred embodiment, the determined longitudinal offset is applied to the current position of the vehicle relative to the digital map, and at least one depth map of the real-time scan data is recalculated based on the adjusted position. Next, the lateral offset determined using the recalculated real-time scan data is applied to the adjusted position of the vehicle relative to the digital map, and at least one depth map of the real-time scan data is recalculated based on the other adjusted position. The skew, i.e., heading offset, determined using the recalculated real-time scan data is then applied to another adjusted position of the vehicle relative to the digital map and at least one depth map of the real-time scan data is recalculated based on the again adjusted position. These steps are preferably repeated any number of times as desired until there is zero or substantially zero longitudinal offset, lateral offset, and skew.
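The series of incremental, independent linear update steps could be structured as in the following sketch. All of the helper callables (for building the real-time depth map from the sensor data and for estimating and applying the individual corrections) are hypothetical placeholders introduced only to illustrate the order of operations and the recomputation of the real-time depth map after each step.

```python
def refine_position(position, heading, sensor_data, reference_depth_map,
                    build_live_depth_map,    # (sensor_data, position, heading) -> depth map
                    estimate_longitudinal,   # (reference, live) -> offset
                    estimate_lateral,        # (reference, live) -> offset
                    estimate_heading,        # (reference, live) -> skew
                    apply_longitudinal,      # (position, offset) -> position
                    apply_lateral,           # (position, offset) -> position
                    tolerance=0.01, max_iterations=10):
    """Apply longitudinal, lateral and heading (skew) corrections as a series
    of incremental, independent update steps, recomputing the real-time depth
    map after each step, until all three offsets are substantially zero."""
    for _ in range(max_iterations):
        live = build_live_depth_map(sensor_data, position, heading)
        d_long = estimate_longitudinal(reference_depth_map, live)
        position = apply_longitudinal(position, d_long)

        live = build_live_depth_map(sensor_data, position, heading)
        d_lat = estimate_lateral(reference_depth_map, live)
        position = apply_lateral(position, d_lat)

        live = build_live_depth_map(sensor_data, position, heading)
        d_head = estimate_heading(reference_depth_map, live)
        heading = heading + d_head

        if max(abs(d_long), abs(d_lat), abs(d_head)) < tolerance:
            break                              # offsets are ~zero, stop iterating
    return position, heading
```

Because each step is applied on a linearized depth map, the recomputation between steps keeps the individual corrections consistent with the curvature of the navigable element.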
It should be appreciated that the generated positioning reference data obtained in accordance with any aspect or embodiment of the present invention may be otherwise used with real-time scan data to determine a more accurate position of a vehicle, or indeed, for other purposes. In particular, the applicant has realized that it may not always be possible, or at least not always convenient, to use real-time scan data to determine a corresponding depth map for comparison with a depth map of positioning reference scan data. In other words, it may not be appropriate to perform a comparison of the data sets in the image domain. In particular, this may be the case where the type of sensor available on the vehicle is different from the type of sensor used to obtain the positioning reference data.
According to some further aspects and embodiments of the present invention, the method includes determining a reference point cloud indicative of an environment surrounding a navigable element using positioning reference data, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment.
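Purely as an illustrative sketch, a side depth map could be expanded into a reference 3D point cloud as follows; the assumption of a straight reference line along the x axis, the pixel resolutions and the NaN encoding of empty pixels are specific to the example.

```python
import numpy as np

def depth_map_to_point_cloud(depth: np.ndarray,
                             longitudinal_res_m: float = 0.5,
                             vertical_res_m: float = 0.2) -> np.ndarray:
    """Convert a side depth map into an (N, 3) reference point cloud.

    Rows are assumed to index elevation, columns longitudinal position along
    a straight reference line running along the x axis, and the pixel value
    is the lateral (y) distance to the nearest surface."""
    rows, cols = depth.shape
    xs = np.arange(cols) * longitudinal_res_m          # position along the reference line
    zs = np.arange(rows) * vertical_res_m              # elevation above the road surface
    xx, zz = np.meshgrid(xs, zs)                       # one (x, z) pair per pixel
    yy = depth                                         # lateral distance = depth channel
    pts = np.stack([xx.ravel(), yy.ravel(), zz.ravel()], axis=1)
    return pts[np.isfinite(pts[:, 1])]                 # drop pixels with no return (NaN)
```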
In accordance with another aspect of the present invention, there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane, the reference plane being defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment;
associating the generated positioning reference data with the digital map data, and
A reference point cloud indicative of the environment surrounding the navigable element is determined using the positioning reference data, the reference point cloud comprising a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to another aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map representing elements of a navigable network, the positioning reference data providing a compressed representation of the environment surrounding at least one junction of the navigable network represented by the digital map, the method comprising, for at least one junction represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the junction projected onto a reference plane, the reference plane being defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction, and the pixel including a depth channel representing a distance along a predetermined direction from an associated location of the pixel in the reference plane to a surface of an object in the environment;
associating the generated positioning reference data with digital map data indicative of the point of engagement, and
A reference point cloud indicative of the environment surrounding the junction is determined using the positioning reference data, the reference point cloud comprising a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
A reference point cloud that includes a set of first data points in a three-dimensional coordinate system (where each first data point represents a surface of an object in the environment) may be referred to herein as a "3D point cloud". The 3D point cloud obtained according to these further aspects of the invention may be used in determining the position of a vehicle.
In some embodiments, the method may include using the generated positioning reference data in any aspect or embodiment of the invention in determining the position of a vehicle relative to a digital map that includes data representing navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with a digital map for a considered current location of the vehicle along a navigable element or junction of a navigable network, determining a reference point cloud indicative of an environment surrounding the vehicle using the positioning reference data, the reference point cloud comprising a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;
Determining real-time scan data by scanning an environment surrounding the vehicle using at least one sensor, the real-time scan data comprising a point cloud indicative of the environment surrounding the vehicle, the point cloud comprising a set of second data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment determined using the at least one sensor;
Calculating a correlation between the point clouds of the real-time scan data and the point clouds of the obtained positioning reference data to determine an alignment offset between the point clouds, and
The determined alignment offset is used to adjust the perceived current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representing navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of a navigable element of the navigable network for the vehicle, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line associated with a navigable element, each pixel in the at least one depth map being associated with a position in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining, using the positioning reference data, a reference point cloud indicative of the environment around the vehicle, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;
Determining real-time scan data by scanning an environment surrounding the vehicle using at least one sensor, the real-time scan data comprising a point cloud indicative of the environment surrounding the vehicle, the point cloud comprising a set of second data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment determined using the at least one sensor;
Calculating a correlation between the point clouds of the real-time scan data and the point clouds of the obtained positioning reference data to determine an alignment offset between the point clouds, and
The determined alignment offset is used to adjust the perceived current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to yet another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representative of the engagement points of a navigable network through which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle at a junction of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction through which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment;
Determining, using the positioning reference data, a reference point cloud indicative of the environment around the vehicle, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor, the real-time scan data comprising a point cloud indicative of the environment around the vehicle, the point cloud comprising a set of second data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment determined using the at least one sensor;
Calculating a correlation between the point clouds of the real-time scan data and the point clouds of the obtained positioning reference data to determine an alignment offset between the point clouds, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.
A point cloud that includes a set of second data points in a three-dimensional coordinate system (where each second data point represents a surface of an object in the environment) may in these further aspects also be referred to herein as a "3D point cloud".
In these further aspects or embodiments of the invention, the positioning reference data is used to obtain a 3D reference point cloud. This indicates the navigable element to which the data relates or the environment surrounding the junction, and thus the environment surrounding the vehicle as it travels along or through the junction. The point cloud of real-time sensor data relates to the environment surrounding the vehicle and thus may also be referred to as the environment surrounding the navigable element or junction at which the vehicle is positioned. In some preferred embodiments, the 3D point cloud obtained based on the positioning reference data is compared to a 3D point cloud indicative of the environment surrounding the vehicle (i.e., when traveling over the relevant element or through the junction) obtained based on the real-time scan data. The position of the vehicle may then be adjusted based on this comparison, rather than a comparison of the depth map (e.g., raster image).
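By way of illustration, the correlation between the reference and real-time point clouds could be evaluated by a brute-force search over candidate shifts, scoring each shift by the number of nearest-neighbour matches; the search range, match radius and the use of a k-d tree are assumptions of this sketch, and established registration techniques (e.g., iterative closest point) could equally be used.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_alignment_offset(reference_pts: np.ndarray, live_pts: np.ndarray,
                          shifts_m=np.arange(-2.0, 2.05, 0.1),
                          match_radius_m: float = 0.3):
    """Grid-search the longitudinal (x) and lateral (y) shift of the real-time
    point cloud that maximises the number of points falling within
    `match_radius_m` of a reference point."""
    tree = cKDTree(reference_pts)
    best_dx, best_dy, best_score = 0.0, 0.0, -1
    for dx in shifts_m:
        for dy in shifts_m:
            shifted = live_pts + np.array([dx, dy, 0.0])
            dists, _ = tree.query(shifted, distance_upper_bound=match_radius_m)
            score = int(np.sum(np.isfinite(dists)))   # unmatched points return inf
            if score > best_score:
                best_dx, best_dy, best_score = dx, dy, score
    return best_dx, best_dy
```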
A real-time scanned data point cloud is obtained using one or more sensors associated with the vehicle. A single sensor or a plurality of such sensors may be used, and in the latter case any combination of sensor types may be used. The sensor may include any one or some of a set of one or more laser scanners, a set of one or more radar scanners, and a set of one or more cameras, such as a single camera or a pair of stereo cameras. A single laser scanner, radar scanner, and/or camera may be used. In the case where the vehicle is associated with a camera or cameras, images obtained from the camera or cameras may be used to construct a three-dimensional scene indicative of the environment surrounding the vehicle, and a 3-dimensional point cloud may be obtained using the three-dimensional scene. For example, where the vehicle uses a single camera, the point cloud may be determined therefrom by obtaining a two-dimensional image sequence from the camera as the vehicle travels along the navigable element or through the junction, constructing a three-dimensional scene using the two-dimensional image sequence, and obtaining a three-dimensional point cloud using the three-dimensional scene. In the case of vehicles associated with stereoscopic cameras, the images obtained from the cameras may be used to obtain a three-dimensional scene, which is then used to obtain a three-dimensional point cloud.
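Where the vehicle is associated with a pair of stereo cameras, a real-time point cloud could, for example, be derived from a rectified stereo pair as in the following sketch; the OpenCV block-matcher parameters and the availability of the disparity-to-depth reprojection matrix Q from a prior stereo calibration are assumptions of the example.

```python
import numpy as np
import cv2

def stereo_point_cloud(left_gray: np.ndarray, right_gray: np.ndarray,
                       Q: np.ndarray) -> np.ndarray:
    """Build a 3D point cloud from a rectified stereo pair.

    `Q` is the 4x4 disparity-to-depth reprojection matrix produced by the
    stereo calibration (e.g. by cv2.stereoRectify)."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    # compute() returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disparity, Q)      # H x W x 3 array of XYZ
    valid = disparity > 0                               # keep pixels with a stereo match
    return points[valid]
```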
By transforming the depth map of the positioning reference data into a 3D point cloud, it can be compared with a 3D point cloud obtained by real-time scanning with vehicle sensors, irrespective of what the vehicle sensors may be. For example, positioning reference data may be based on reference scanning using a variety of sensor types, including laser scanners, cameras, and radar scanners. The vehicle may or may not have a corresponding set of sensors. For example, a typical vehicle may include only one or more cameras.
The positioning reference data may be used to determine a reference point cloud indicative of an environment surrounding the vehicle that corresponds to a point cloud expected to be generated by at least one sensor of the vehicle. Where the reference point cloud was obtained using sensors of the same type as those of the vehicle, this may be straightforward and all positioning reference data may be used in constructing the 3D point cloud. Similarly, under certain conditions, data sensed by one type of sensor may be similar to data sensed by another sensor. For example, an object that is sensed by a laser sensor when providing reference positioning data is expected to also be sensed by a camera of the vehicle during the day. However, the method may comprise including in the 3D point cloud only those points that are expected to be detectable by a sensor or sensors of the type associated with the vehicle and/or that are expected to be detected under the current conditions. The positioning reference data may include data that enables generation of an appropriate reference point cloud.
In some embodiments, as described above, each pixel of the positioning reference data further includes at least one channel indicative of a value of the sensed reflectivity. Each pixel may include one or more of a channel indicating a value of sensed laser reflectivity and a channel indicating a value of sensed radar reflectivity. Preferably, a channel is provided that indicates both radar and laser reflectivity. Next, the step of generating a 3D point cloud based on the positioning reference data is preferably performed using the sensed reflectivity data. The generation of the 3D point cloud may also be based on the type of sensor or sensors of the vehicle. The method may include selecting a 3D point included in the reference 3D point cloud using the reflectivity data and data indicative of a type of sensor or sensors of the vehicle. The data of the reflectivity channels is used to select data from the depth channels for generating a 3D point cloud. The reflectivity channel gives an indication of whether a particular object will be sensed by the relevant sensor type (under the current conditions where appropriate).
For example, where the reference data is based on data obtained from a laser scanner and a radar scanner and the vehicle has only a radar scanner, radar reflectivity values may be used to select those points included in the 3D points expected to be sensed by the radar scanner of the vehicle. In some embodiments, each pixel includes a channel that indicates radar reflectivity, and the method includes the step of using radar reflectivity data to generate a 3D reference point cloud containing only those points to be sensed by the radar sensor. In case the method further comprises comparing the 3D reference point cloud with a 3D point cloud obtained based on the real-time scan data, the 3D point cloud of the real-time scan data is thus based on data obtained from the radar scanner. The vehicle may include only a radar scanner.
While the vehicle may include radar and/or laser scanners, in many cases the vehicle may include only one camera or multiple cameras. The laser reflectivity data may provide a way to obtain a 3D reference point cloud related to a 3D point cloud expected to be sensed in dark conditions by a vehicle having only one camera or multiple cameras as sensors. The laser reflectivity data provides an indication of those objects that may be expected to be detected by the camera at night. In some embodiments, each pixel includes a channel that indicates laser reflectivity, and the method includes the step of using the laser reflectivity data to generate a 3D reference point cloud containing only those points that are to be sensed by the vehicle's camera during dark conditions. In case the method further comprises comparing the 3D reference point cloud with a 3D point cloud obtained based on real-time scan data, the 3D point cloud of real-time scan data may thus be based on data obtained from the camera in dark conditions.
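The selection of reference points expected to be visible to the vehicle's sensors under the current conditions could, purely by way of example, be sketched as follows; the threshold value and the per-point reflectivity arrays (assumed to be carried over from the reflectivity channels of the pixels that generated the points) are assumptions of the sketch.

```python
import numpy as np

def filter_reference_points(points: np.ndarray,
                            laser_reflectivity: np.ndarray,
                            radar_reflectivity: np.ndarray,
                            sensor_type: str,
                            dark: bool = False,
                            threshold: float = 0.2) -> np.ndarray:
    """Select the subset of reference 3D points expected to be sensed by the
    vehicle's sensor type under the current conditions."""
    if sensor_type == "radar":
        mask = radar_reflectivity >= threshold          # radar-visible surfaces only
    elif sensor_type == "camera" and dark:
        # At night a camera is assumed to see mainly strongly reflective
        # surfaces such as signs and lane markings (high laser reflectivity).
        mask = laser_reflectivity >= threshold
    else:
        mask = np.ones(len(points), dtype=bool)         # daytime camera / laser: keep all
    return points[mask]
```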
It is believed to be advantageous in itself to obtain reference positioning data in the form of a three-dimensional point cloud, and to use this data to reconstruct a reference view, such as an image that is expected to be obtainable from one or more cameras of the vehicle under the applicable conditions, which can then be compared to the image actually obtained by the cameras.
In some embodiments, the method may include using generated positioning reference data in any aspect or embodiment of the invention in reconstructing a view expected to be obtained under applicable conditions from one or more cameras associated with a vehicle traveling along a navigable element of a navigable network or through a junction represented by a digital map, the method comprising obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along the navigable element of the navigable network or at the junction, determining a reference point cloud indicative of an environment surrounding the vehicle using the positioning reference data, the reference point cloud comprising a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment, and reconstructing, using the reference point cloud, a reference view expected to be obtainable under applicable conditions by the one or more cameras associated with the vehicle when traversing the navigable element or junction. The method may further include determining, using the one or more cameras, a real-time view of the environment surrounding the vehicle, and comparing the reference view to the real-time view obtained by the one or more cameras.
According to another aspect of the present invention there is provided a method of reconstructing views expected to be obtainable under applicable conditions from one or more cameras associated with a vehicle travelling along a navigable element of a navigable network represented by a digital map, the method comprising:
Obtaining positioning reference data associated with a digital map for a considered current position of a navigable element of the vehicle along a navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line associated with a navigable element, each pixel in the at least one depth map being associated with a position in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of a pixel in the reference plane to a surface of an object in the environment;
Determining, using the positioning reference data, a reference point cloud indicative of the environment around the vehicle, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;
Reconstructing, using the reference point cloud, a reference view expected to be available under applicable conditions by one or more cameras associated with the vehicle when traversing the navigable element;
determining a real-time view of the environment surrounding the vehicle using the one or more cameras, and
The reference view is compared to the real-time view obtained by the one or more cameras.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to another aspect of the present invention there is provided a method of reconstructing views expected to be obtainable under applicable conditions from one or more cameras associated with a vehicle travelling through a junction of a navigable network represented by a digital map, the method comprising:
Obtaining positioning reference data associated with a digital map for a considered current position of the vehicle along a navigable element of a navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction point, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction point through which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment;
Determining, using the positioning reference data, a reference point cloud indicative of the environment around the vehicle, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;
Reconstructing, using the reference point cloud, a reference view expected to be available under applicable conditions by one or more cameras associated with the vehicle when traversing the navigable element;
determining a real-time view of the environment surrounding the vehicle using the one or more cameras, and
The reference view is compared to the real-time view obtained by the one or more cameras.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
These aspects of the invention are particularly advantageous in allowing the construction of reference views that are comparable to real-time views obtained by cameras of vehicles, but based on positioning reference data that can be obtained from different types of sensors. It has been recognized that in practice, many vehicles will be equipped with only one camera or multiple cameras, rather than more specific or complex sensors, such as may be used to obtain reference data.
In these further aspects and embodiments of the invention, the comparison of the reference view to the real-time view may be used as desired. For example, the comparison results may be used to determine the location of the vehicle as in the earlier described aspects and embodiments. The method may include calculating a correlation between the real-time view and the reference view to determine an alignment offset between the views, and adjusting a considered current position of the vehicle using the determined alignment offset to determine a position of the vehicle relative to the digital map.
The applicable conditions are those conditions prevailing at the current time, and may be lighting conditions. In some embodiments, the applicable conditions are dark conditions.
According to any of the embodiments described above, the reference view is reconstructed using a 3D reference point cloud that is obtainable from positioning reference data. The step of reconstructing a reference view expected to be obtainable by one or more cameras preferably comprises using data of a reflectivity data channel associated with pixels of a depth map in which the reference data is located. Preferably, therefore, each pixel of the positioning reference data further comprises at least one channel indicative of a value of the sensed laser reflectivity, and the step of generating the 3D point cloud based on the positioning reference data is performed using the sensed laser reflectivity data. The laser reflectivity data may be used to select data from the depth channel for use in generating a reference 3D point cloud to result in a reconstructed reference view corresponding to a view expected to be available from one or more cameras of the vehicle, e.g., including those objects that are desired to be visible under applicable conditions (e.g., darkness). The one or more cameras of the vehicle may be a single camera, or a pair of stereoscopic cameras, as described above.
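By way of illustration, a reference view could be reconstructed by projecting the selected reference points through an assumed pinhole camera model, as in the following sketch; the intrinsic matrix K, the camera-frame coordinates and the binary rendering of projected points are assumptions of the example.

```python
import numpy as np

def render_reference_view(points_cam: np.ndarray, K: np.ndarray,
                          width: int, height: int) -> np.ndarray:
    """Project reference 3D points (already expressed in the camera frame)
    through a pinhole model with intrinsic matrix `K`, producing a binary
    reference view that can be compared with the live camera image."""
    in_front = points_cam[:, 2] > 0.1                  # keep points ahead of the camera
    pts = points_cam[in_front]
    uvw = (K @ pts.T).T                                # homogeneous image coordinates
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    view = np.zeros((height, width), dtype=np.uint8)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    view[v[inside], u[inside]] = 255                   # mark projected surface points
    return view
```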
Comparison of real-time scan data with positioning reference data, whether by comparison of depth maps, by comparison of point clouds, or by comparison of reconstructed images with real-time images, as may be performed in accordance with various aspects and embodiments of the present invention, may be carried out over a window of data. The data window is a window of data in the direction of travel, i.e. of longitudinal data. Thus, the windowed data allows the comparison to take into account a subset of the available data. The comparison may be performed periodically for overlapping windows. At least some overlap in the windows of data used for comparison is desirable. This may ensure, for example, that differences between adjacently calculated longitudinal offset values are smoothed. The window may have a length sufficient for the accuracy of the offset calculation not to be affected by transient features, preferably a length of at least 100m. Such transient features may be, for example, parked vehicles, vehicles that overtake, or vehicles that travel in opposite directions on the same route. In some embodiments, the length is at least 50m. In some embodiments, the length is 200m. In this way, sensed environmental data is determined for a segment (i.e. a longitudinal 'window', e.g. 200 m) of road, and then the resulting data is compared to positioning reference data for that segment. By performing the comparison on a road segment of this size (i.e., a road segment that is substantially greater than the length of the vehicle), non-stationary or temporary objects (e.g., other vehicles on the road, vehicles stopped beside the road, etc.) typically do not affect the comparison result.
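Purely as an illustrative sketch, overlapping longitudinal windows of, e.g., 200 m with 50% overlap could be produced as follows; the window length, step and pixel resolution are assumptions of the example.

```python
import numpy as np

def overlapping_windows(depth_map: np.ndarray, window_m: float = 200.0,
                        step_m: float = 100.0, longitudinal_res_m: float = 0.5):
    """Yield overlapping longitudinal windows of a depth map so that each
    offset calculation is based on a road segment substantially longer than
    the vehicle and consecutive windows overlap by half their length."""
    window_px = int(window_m / longitudinal_res_m)
    step_px = int(step_m / longitudinal_res_m)
    for start in range(0, depth_map.shape[1] - window_px + 1, step_px):
        yield start, depth_map[:, start:start + window_px]
```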
At least a portion of the positioning reference data used in accordance with any aspect or embodiment of the invention may be stored remotely. Preferably, in the case of a vehicle, at least part of the positioning reference data is stored locally on the vehicle. Thus, even if the positioning reference data is available throughout the route, it need not be continuously transmitted to the vehicle and the comparison can be performed on the vehicle.
The positioning reference data may be stored in a compressed format. The positioning reference data may have a size corresponding to 30KB/km or less.
The positioning reference data may be stored for at least part (and preferably all) of the navigable elements of the navigable network represented in the digital map. Thus, the position of the vehicle may be continuously determined anywhere along the route traveled by the vehicle.
In an embodiment, reference positioning data may have been obtained from a reference scan using at least one device positioned on a mobile mapping vehicle that has previously traveled along navigable elements that are subsequently traveled by the vehicle. Thus, the reference scan may have been acquired using a different vehicle than the current vehicle whose position was continuously determined. In some embodiments, the mobile mapping vehicle has a similar design as the vehicle whose location is continuously determined.
Real-time scan data and/or reference scan data may be obtained using at least one rangefinder sensor. The rangefinder sensor may be configured to operate along a single axis. The rangefinder sensor may be arranged to perform scanning on a vertical axis. When a scan is performed on the vertical axis, distance information for planes at multiple heights is collected, and thus the resulting scan is significantly more detailed. Alternatively or additionally, the rangefinder sensor may be arranged to perform scanning on a horizontal axis.
The rangefinder sensor may be a laser scanner. The laser scanner may include a laser beam that is scanned across the lateral environment using a mirror. Additionally or alternatively, the rangefinder sensor may be a radar scanner and/or a pair of stereo cameras.
The invention extends to a device, such as a navigation device, vehicle, or the like, having means, such as one or more processors, arranged (e.g., programmed) to perform any of the methods described herein.
The step of generating positioning reference data described herein is preferably performed by a server or another similar computing device.
Means for implementing any steps of the method may include a set of one or more processors configured (e.g., programmed) to do so. The given step may be implemented using the same or a different set of processors as any other step. Any given step may be implemented using a combination of processor sets. The system may further comprise data storage means, such as computer memory, for storing, for example, digital maps, positioning reference data, and/or real-time scan data.
In a preferred embodiment, the method of the present invention is implemented by a server or similar computing device. In other words, the proposed method of the invention is preferably a computer implemented method. Thus, in embodiments, the system of the present invention comprises a server or similar computing device comprising means for implementing the various steps described, and the method steps described herein are implemented by the server.
The invention further extends to a computer program product comprising computer readable instructions executable to perform or cause a device to perform any of the methods described herein. The computer program product is preferably stored in a non-transitory physical storage medium.
As will be appreciated by those skilled in the art, aspects and embodiments of the invention may, and preferably do, include any or all of the preferred and optional features of the invention described herein with respect to any other aspect of the invention, as appropriate.
Drawings
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
FIG. 1 is a representation of a portion of a planning map;
FIG. 2 shows a portion of a planning map overlaid on an image of a road network;
FIGS. 3 and 4 show exemplary mobile mapping systems that may be used to collect data for building maps;
FIG. 5 shows a 3D view of data obtained from a laser scanner, while FIG. 6 shows a side view projection of data obtained from a laser scanner;
FIG. 7 shows a vehicle sensing its surroundings while traveling along a road, according to an embodiment;
FIG. 8 shows a comparison of positioning reference data compared to sensed environmental data (e.g., collected by the vehicle of FIG. 7);
FIG. 9 shows an exemplary format of how positioning reference data may be stored;
FIG. 10A shows an example point cloud acquired by a ranging sensor mounted to a vehicle traveling along a road, while FIG. 10B shows that this point cloud data has been converted into two depth maps;
FIG. 11 shows an offset determined from normalized cross-correlation calculations in an embodiment;
FIG. 12 shows another example of correlation performed between a "reference" data set and a "local measurement" data set;
FIG. 13 shows a system within a vehicle according to an embodiment;
FIG. 14A shows an exemplary raster image as part of a piece of positioning reference data;
FIG. 14B shows a bird's eye perspective view of the data of FIG. 14A as two separate planes on the left and right sides of the roadway;
FIG. 15A shows a fixed longitudinal resolution and a variable (e.g., non-linear) vertical and/or depth resolution of positioning reference data and real-time scan data;
FIG. 15B shows a function mapping the height on the reference line to a pixel Y-coordinate value;
FIG. 15C shows a function of mapping distance from a reference line to pixel depth values;
FIG. 15D shows a fixed vertical pixel resolution, a variable vertical pixel resolution, and a variable depth value resolution in a three-dimensional map;
FIG. 16A shows an orthogonal projection onto a reference plane defined by a reference line associated with a road element;
FIG. 16B shows a side depth map obtained using orthogonal projection;
FIG. 16C shows a non-orthogonal projection onto a reference plane defined by a reference line associated with a road element;
FIG. 16D shows a side depth map obtained using non-orthogonal projection;
FIG. 17 shows a multi-channel data format of a depth map;
FIG. 18 shows circular and linear reference lines that may be used to construct a depth map at an intersection;
FIG. 19A shows the manner in which objects may be projected onto a circular depth map at different angular positions;
FIG. 19B shows an orthogonal projection of an object used to provide a depth map onto a reference plane;
FIG. 20A shows a reference depth map and a corresponding real-time depth map;
FIG. 20B shows a longitudinal correction derived from longitudinal correlations of reference and real-time depth maps;
FIG. 20C shows lateral corrections derived from histogram differences between pixel depth values of corresponding pixels in reference and real-time depth maps;
FIG. 20D shows how the longitudinal position of the vehicle on the road and then the lateral position can be corrected;
FIG. 21A shows a set of vertical slices through a corresponding portion of a reference depth map;
FIG. 21B shows the average pixel depth difference for a vertical slice plotted against the longitudinal distance along the vertical slice of the depth map;
FIG. 22 shows an image of a curved road of a road and a corresponding linear reference image;
FIGS. 23A and 23B show methods for establishing the position of a vehicle, for example, in a non-linear environment;
FIG. 24 shows an exemplary system in which data from vehicle sensors is correlated with reference data to position a vehicle relative to a digital map;
FIGS. 25A, 25B and 25C show a first example use case in which a 3D point cloud is constructed using a reference depth map, and the 3D point cloud is then compared to a 3D point cloud obtained from a vehicle laser sensor;
FIGS. 26A, 26B, 26C and 26D show a second example use case in which a reference depth map is used to construct a 3D point cloud or view, which is then compared to 3D scenes or views obtained from multiple vehicle cameras or a single vehicle camera;
FIGS. 27A, 27B and 27C show a third example use case in which a 3D point cloud or view is constructed using reflectivity data of a depth map, then compared to a 3D scene or view obtained from a vehicle camera;
FIGS. 28A and 28B show a fourth example use case in which a 3D point cloud is constructed using radar data of a depth map, and the 3D point cloud is then compared to a 3D scene obtained using vehicle radar;
FIG. 29 shows different coordinate systems used in embodiments of the invention;
FIG. 30 depicts steps performed when correlating vehicle sensor data with reference data in order to determine the location of a vehicle;
FIG. 31 illustrates steps performed to determine a laser point cloud in the method of FIG. 30;
FIG. 32A illustrates a first exemplary method for performing the correlation step in the method of FIG. 30; and
FIG. 32B illustrates a second exemplary method for performing the correlation step in the method of FIG. 30.
Detailed description of the preferred embodiments
It has been recognized that there is a need for an improved method for determining the location of a device (e.g., a vehicle) relative to a digital map (representing a navigable network, such as a road network). In particular, it is desirable to be able to accurately determine (e.g., with sub-meter accuracy) the longitudinal position of a device relative to a digital map. The term "longitudinal" in this disclosure refers to a direction along a portion of a navigable network upon which a device (e.g., a vehicle) moves, in other words along the length of the road on which the vehicle travels. The term "lateral" in the present application has its normal meaning of being perpendicular to the longitudinal direction, and thus refers to a direction along the width of the road.
As will be appreciated, when the digital map comprises a planning map as described above (e.g., a three-dimensional vector model in which each lane of a road is represented separately, as opposed to only a road centerline as in a standard map), determining the lateral position of the device (e.g., a vehicle) simply involves determining the lane in which the device is currently traveling. Various techniques are known for making such a determination. For example, a great deal of research has been conducted in recent years in which image data from one or more cameras mounted within a vehicle is analyzed, for example using various image processing techniques, to detect and track the lane in which the vehicle travels. One exemplary technique is set forth in the paper "Multi-lane detection in urban driving environments using conditional random fields" by Junhwa Hur, Seung-Nam Kang and Seung-Woo Seo, published in the proceedings of the IEEE Intelligent Vehicles Symposium, pages 1297 to 1302 (2013). Here, the device may have data feeds from cameras, radar and/or lidar sensors, and processes the received data in real time using an appropriate algorithm to determine the current lane of the device or of the vehicle in which the device is traveling. Alternatively, another device or apparatus, such as a Mobileye system commercially available from Mobileye N.V. of the Netherlands, may provide a determination of the current lane of the vehicle based on these data feeds, and then supply the determination of the current lane to the device, for example through a wired connection or a Bluetooth connection.
In an embodiment, the longitudinal position of the vehicle may be determined by comparing a real-time scan of the environment surrounding the vehicle (and preferably on one or both sides of the vehicle) with a reference scan of the environment associated with the digital map. From this comparison, a longitudinal offset (if present) can be determined, and the determined offset can be used to match the location of the vehicle with the digital map. Thus, the position of the vehicle relative to the digital map can always be known with high accuracy.
Real-time scanning of the environment surrounding the vehicle may be obtained using at least one rangefinder sensor positioned on the vehicle. The at least one rangefinder sensor may take any suitable form, but in a preferred embodiment comprises a laser scanner, i.e. a LIDAR device. The laser scanner may be configured to scan the laser beam throughout the environment and create a point cloud representation of the environment, each point indicating a location of a surface of the object reflecting the laser light. As should be appreciated, the laser scanner is configured to record the time it takes for the laser beam to return to the scanner after being reflected from the surface of the object, and the recorded time can then be used to determine the distance to each point. In a preferred embodiment, the rangefinder sensor is configured to operate along a single axis in order to obtain data within a certain acquisition angle (e.g., between 50 and 90 °, such as 70 °), such as when the sensor comprises a laser scanner, a mirror within the device is used to scan the laser beam.
An embodiment in which the vehicle 100 travels along a roadway is shown in fig. 7. The vehicle is equipped with rangefinder sensors 101, 102 on each side of the vehicle. While sensors are shown on each side of the vehicle, in other embodiments, only a single sensor may be used on one side of the vehicle. Preferably, the sensors are properly aligned so that the data from each sensor can be combined, as discussed in more detail below.
WO 2011/146523 A2 provides an example of a scanner that may be used on-board a vehicle to capture reference data in the form of a 3-dimensional point cloud, or that may also be used on an autonomous vehicle to obtain real-time data relating to the surrounding environment.
As discussed above, the rangefinder sensor may be arranged to operate along a single axis. In one embodiment, the sensor may be arranged to perform scanning in a horizontal direction (i.e. in a plane parallel to the road surface). This is shown, for example, in fig. 7. By continually scanning the environment as the vehicle travels along the road, sensed environmental data as shown in fig. 8 can be collected. The data 200 is data collected from the left sensor 102 and shows the object 104. Data 202 is data collected from right sensor 101 and shows objects 106 and 108. In other embodiments, the sensor may be arranged to perform scanning in a vertical direction (i.e. in a plane perpendicular to the road surface). By continuously scanning the environment as the vehicle travels along the road, it is possible to collect environmental data in the manner of fig. 6. It will be appreciated that by performing the scan in the vertical direction, distance information is collected for planes at multiple heights, and thus the resulting scan is significantly more detailed. It will of course be appreciated that scanning may be performed along any axis as desired.
A reference scan of the environment is obtained from one or more vehicles that have previously traveled along the road, and is then properly aligned with and associated with the digital map. The reference scans are stored in a database associated with the digital map and are referred to herein as positioning reference data. The combination of the positioning reference data, when matched to the digital map, may be referred to as a positioning map. As will be appreciated, the positioning map will be created remotely from the vehicle, typically by a digital mapping company (e.g., TomTom International B.V. or HERE, a Nokia company).
The reference scan may be obtained from a dedicated vehicle, such as a mobile mapping vehicle (e.g., as shown in fig. 3). However, in a preferred embodiment, the reference scan may be determined from sensed environmental data collected by the vehicle as it travels along the navigable network. This sensed environmental data may be stored and periodically sent to a digital mapping company to create, maintain, and update a location map.
While the positioning reference data is preferably stored locally at the vehicle, it should be appreciated that the data may be stored remotely. In an embodiment, and in particular when locally storing the positioning reference data, the data is stored in a compressed format.
In an embodiment, positioning reference data is collected for each side of a road in a road network. In such embodiments, the reference data for each side of the road may be stored separately, or alternatively it may be stored together in a combined dataset.
In an embodiment, the positioning reference data may be stored as image data. The image data may be a color (e.g., RGB) image or a grayscale image.
Fig. 9 shows an exemplary format of how positioning reference data may be stored. In this embodiment, the reference data for the left side of the road is provided on the left side of the image and the reference data for the right side of the road is provided on the right side of the image, the data sets being aligned such that the left side reference data set for a particular longitudinal position is shown as opposed to the right side reference data set for the same longitudinal position.
In the image of fig. 9, and for illustrative purposes only, the longitudinal pixel size is 0.5m, with 40 pixels on each side of the centerline. It has also been determined that images may be stored as grayscale images, rather than color (RGB) images. By storing the image in this format, the positioning reference data has a size corresponding to 30 KB/km.
Another example can be seen in fig. 10A and 10B. FIG. 10A shows an example point cloud acquired by ranging sensors mounted to a vehicle traveling along a road. In fig. 10B, this point cloud data has been converted into two depth maps, one for the left side of the vehicle and the other for the right side of the vehicle, which have been placed close to each other to form a composite image.
As discussed above, sensed environmental data determined by the vehicle is compared to positioning reference data to determine if an offset exists. Any determined offset can then be used to adjust the position of the vehicle so that it exactly matches the correct position on the digital map. This determined offset is referred to herein as the correlation index.
In an embodiment, sensed environmental data is determined for a longitudinal road segment (e.g., 200 m), and then the resulting data (e.g., image data) is compared to positioning reference data for the road segment. By performing the comparison on a road segment of this size (i.e., a road segment that is substantially greater than the length of the vehicle), non-stationary or temporary objects (e.g., other vehicles on the road, vehicles stopped beside the road, etc.) will generally not affect the comparison result.
Preferably, the comparison is performed by calculating a cross-correlation between the sensed environmental data and the positioning reference data in order to determine the longitudinal position at which the data set is aligned to the highest degree. The difference between the longitudinal positions of the two data sets of maximum alignment allows for determination of the longitudinal offset. This can be seen, for example, by the offset indicated between the sensed environmental data and the positioning reference data of fig. 8.
In an embodiment, when the data set is provided as an image, the cross-correlation includes a normalized cross-correlation operation such that differences in brightness, lighting conditions, etc. between the positioning reference data and the sensed environmental data may be mitigated. Preferably, the comparison is performed periodically on overlapping windows (e.g., 200m long) such that any offset is continuously determined as the vehicle travels along the road. Fig. 11 shows, in an exemplary embodiment, the determined offset as a function of normalized cross-correlation calculation between the depicted positioning reference data and the depicted sensed environmental data.
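By way of illustration only, the following Python sketch shows how a longitudinal offset could be found by normalized cross-correlation of two depth-map images; the function name, the search range and the use of NumPy arrays are assumptions made for the example rather than details taken from the embodiments above.

```python
import numpy as np

def longitudinal_offset(reference, live, max_shift=80):
    """Return the longitudinal pixel shift that best aligns two depth maps.

    reference, live: 2-D arrays (rows = height bins, columns = longitudinal bins)
    of pixel depth values covering the same stretch of road.
    max_shift: search range in pixels (e.g. 80 pixels = 40 m at 0.5 m per pixel).
    """
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        # Overlap the two images at the candidate shift.
        if shift >= 0:
            a, b = reference[:, shift:], live[:, :live.shape[1] - shift]
        else:
            a, b = reference[:, :shift], live[:, -shift:]
        n = min(a.shape[1], b.shape[1])
        if n == 0:
            continue
        a, b = a[:, :n].astype(float), b[:, :n].astype(float)
        # Normalised cross-correlation: remove the mean, divide by the standard deviation.
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        score = float(np.mean(a * b))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift, best_score
```

The peak of the correlation score plays the role of the maximum peak shown in FIG. 12; multiplying the winning shift by the longitudinal pixel size gives the offset in metres.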
Fig. 12 illustrates another example of a correlation performed between a "reference" data set and a "local measurement" data set (acquired by a vehicle as it travels along a road). The result of the correlation between the two images can be seen in the plot of "shift" versus "longitudinal correlation index", where the location of the maximum peak is used to determine the illustrated best fit shift, which can then be used to adjust the longitudinal position of the vehicle relative to the digital map.
As can be seen from fig. 9, 10B, 11 and 12, the positioning reference data and the sensed environmental data are preferably in the form of a depth map, wherein each element (e.g., pixel when the depth map is stored as an image) comprises a first value indicative of a longitudinal position (along the road), a second value indicative of a height (i.e., a height above the ground), and a third value indicative of a lateral position (across the road). Each element (e.g., pixel) of the depth map thus effectively corresponds to a portion of the surface of the environment surrounding the vehicle. As will be appreciated, the size of the surface represented by each element (e.g., pixel) will vary with the amount of compression such that the element (e.g., pixel) will represent a larger surface area with a higher level of compression of the depth map (or image).
In embodiments, where the positioning reference data is stored in a data storage means (e.g., memory) of the device, the comparing step may be performed on one or more processors within the vehicle. In other embodiments, where the positioning reference data is stored remotely from the vehicle, the sensed environmental data may be sent to a server over a wireless connection, for example, via a mobile telecommunications network. The server capable of accessing the positioning reference data will then return any determined offset to the vehicle (e.g., also using the mobile telecommunications network).
An exemplary system within a vehicle according to an embodiment of the invention is depicted in fig. 13. In this system, a processing device, referred to as a correlation index provider unit, receives data feeds from ranging sensors positioned to detect the environment on the left side of the vehicle and ranging sensors positioned to detect the environment on the right side of the vehicle. The processing device also accesses a database of digital maps, preferably in the form of planning maps, and positioning reference data that appropriately matches the digital maps. The processing means is arranged to perform the above method and thus to compare the data feed from the ranging sensor with the positioning reference data to determine the longitudinal offset and hence the exact position of the vehicle relative to the digital map, optionally after converting the data feed into a suitable form (e.g. combining image data of the data from the two sensors). The system also includes a horizon provider unit, and the horizon provider unit uses the determined position of the vehicle and data within the digital map to provide information (referred to as "horizon data") about an upcoming portion of the navigable network that the vehicle is about to traverse. This horizon data may then be used to control one or more systems within the vehicle to perform various assistance or autopilot operations, such as adaptive cruise control, automated lane changing, emergency braking assistance, and the like.
In summary, the present invention relates, at least in preferred embodiments, to a positioning method based on longitudinal correlation. The 3D space around the vehicle is represented in the form of two depth maps that cover the left and right sides of the road and which can be combined into a single image. The reference image stored in the digital map is cross-correlated with a depth map derived from a laser or other ranging sensor of the vehicle to accurately locate the vehicle along (i.e., longitudinally) a representation of the road in the digital map. In an embodiment, the depth information may then be used to position the vehicle across (i.e., laterally across) the road.
In a preferred implementation, the 3D space around the vehicle is projected onto two grids parallel to the road trajectory, and the projected values are averaged within each cell of the grids. The pixels of the longitudinal correlator depth map have a dimension along the direction of travel of about 50cm and a height of about 20cm. The depth encoded by the pixel values is quantized to about 10cm. Although the depth map image resolution along the direction of travel is 50cm, the resolution of the positioning is much higher. The cross-correlated images represent grids over which the laser points are distributed and averaged, and appropriate upsampling enables a shift vector with sub-pixel coefficients to be found. Similarly, a depth quantization of about 10cm does not mean a positioning accuracy of 10cm across the road, since the quantization error is averaged over all relevant pixels. In practice, therefore, the accuracy of the positioning is limited mainly by the laser accuracy and calibration, while the quantization error of the longitudinal correlator index contributes only very little.
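Purely as an illustration of this kind of projection, the sketch below rasterises road-aligned 3D points into a depth-map image with roughly 50 cm by 20 cm pixels and roughly 10 cm depth quantisation; the frame of the input points, the grid sizes and the use of 255 as a "no data" code are assumptions made for the example.

```python
import numpy as np

def rasterise_depth_map(points, length_m, height_m=10.0,
                        dx=0.5, dy=0.2, depth_quantum=0.1):
    """Average the lateral distances of 3-D points into a depth-map raster.

    points: (N, 3) array of (s, h, d) = longitudinal position along the
    reference line, height above it, and lateral distance from the
    reference plane, all in metres.
    """
    cols, rows = int(round(length_m / dx)), int(round(height_m / dy))
    total = np.zeros((rows, cols))
    count = np.zeros((rows, cols))
    for s, h, d in np.asarray(points, dtype=float):
        col, row = int(s // dx), int(h // dy)
        if 0 <= row < rows and 0 <= col < cols:
            total[row, col] += d        # accumulate lateral distances per cell
            count[row, col] += 1
    mean_depth = np.divide(total, count, out=np.zeros_like(total), where=count > 0)
    # Quantise the averaged depth to ~10 cm steps and store as 8-bit codes;
    # 255 marks cells with no laser returns (assumed convention).
    codes = np.clip(np.round(mean_depth / depth_quantum), 0, 254)
    return np.where(count > 0, codes, 255).astype(np.uint8)
```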
It should therefore be appreciated that positioning information of this kind, such as depth maps (or images), is always available (even if there are no distinctive objects in the surrounding environment), is compact (storing it for the road network of the entire world is feasible), and offers accuracy comparable to, or even better than, that of other methods, because it is available everywhere and therefore allows errors to be averaged over more data.
FIG. 14A shows an exemplary raster image as part of a piece of positioning reference data. The raster image is formed by orthogonally projecting the collected 3D laser point data onto a hyperplane defined by the reference line and oriented perpendicular to the road surface. Due to the orthogonality of the projection, any height information is independent of the distance from the reference line. The reference line itself extends generally parallel to the lane/road boundary. The actual representation of the hyperplane is a raster format with a fixed horizontal resolution and a nonlinear vertical resolution. This approach aims to maximize the information density at those heights that are important for detection by vehicle sensors. Experiments have shown that a raster plane height of 5 to 10 meters is sufficient to capture enough of the relevant information necessary for later use in vehicle positioning. Each individual pixel in the raster reflects a set of laser measurements. Like the vertical resolution, the resolution of the depth information is also represented in a nonlinear manner, but is typically stored in 8-bit values (i.e., as values from 0 to 255). FIG. 14A shows data for both sides of a road. FIG. 14B shows a bird's eye perspective view of the data of FIG. 14A as two separate planes on the left and right sides of the road.
As discussed above, vehicles equipped with front- or side-mounted, horizontally oriented laser scanner sensors are capable of generating, in real time, 2D planes similar to those of the positioning reference data. The positioning of the vehicle relative to the digital map is achieved by correlating, in image space, the a priori map data with the real-time sensed and processed data. Longitudinal vehicle positioning is obtained by applying an averaged non-negative normalized cross-correlation (NCC) operation, calculated in overlapping moving windows, to images to which a 1-pixel blur in the height domain and a Sobel operator in the longitudinal domain have been applied.
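The following fragment illustrates, assuming SciPy's standard image filters are acceptable stand-ins, how the 1-pixel height blur and the longitudinal Sobel operator could be applied before the windowed correlation; the filtered reference and live images would then be passed to a windowed correlation such as the longitudinal_offset sketch above, with any negative per-window scores clamped to zero before averaging.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, sobel

def preprocess_for_correlation(depth_image):
    """Blur across the height axis and edge-filter along the longitudinal axis."""
    img = depth_image.astype(float)
    blurred = gaussian_filter1d(img, sigma=1.0, axis=0)  # ~1-pixel blur in the height domain
    return sobel(blurred, axis=1)                        # Sobel operator in the longitudinal domain
```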
Fig. 15A shows fixed longitudinal resolution and variable (e.g., non-linear) vertical and/or depth resolution of positioning reference data and real-time scan data. Thus, although the longitudinal distances represented by the values a, b and c are the same, the height ranges represented by the values D, E and F are different. In particular, the height range represented by D is less than the height range represented by E, and the height range represented by E is less than the height range represented by F. Similarly, the depth range represented by the value 0 (i.e., the surface closest to the vehicle) is less than the depth range represented by the value 100, and the depth range represented by the value 100 is less than the depth range represented by the value 255, i.e., the surface furthest from the vehicle. For example, a value of 0 may represent a depth of 1cm, while a value of 255 may represent a depth of 10 cm.
Fig. 15B illustrates how the vertical resolution may vary. In this example, the vertical resolution varies based on a nonlinear function that maps the height above the reference line to the pixel Y-coordinate value. As shown in fig. 15B, pixels closer to the reference line (equal to 40 at Y in this example) represent lower heights. As also shown in fig. 15B, the vertical resolution is closer to the reference line, i.e., the change in height relative to the pixel location is smaller for pixels closer to the reference line and larger for pixels farther from the reference line.
Fig. 15C illustrates how depth resolution may vary. In this example, the depth resolution varies based on a nonlinear function that maps distance from a reference line to pixel depth (color) values. As shown in fig. 15C, lower pixel depth values represent a shorter distance from the reference line. As also shown in fig. 15C, the depth resolution is greater at lower pixel depth values, i.e., the distance change relative to the pixel depth values is smaller for lower pixel depth values and greater for higher pixel depth values.
Fig. 15D illustrates how a subset of pixels may map to distances along a reference line. As shown in fig. 15D, each pixel along the reference line is the same width, such that the vertical pixel resolution is fixed. Fig. 15D also illustrates how a subset of pixels may map to a height above a reference line. As shown in fig. 15D, the pixels gradually widen at greater distances from the reference line, such that the vertical pixel resolution is lower at greater heights above the reference line. Fig. 15D also illustrates how a subset of pixel depth values may map to a distance from a reference line. As shown in fig. 15D, the distance covered by the pixel depth values gradually widens at greater distances from the reference line, such that the depth resolution is lower at greater depth distances from the reference line.
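The specific mapping functions are not prescribed above; the sketch below uses square-root curves purely as placeholder functions to show how pixel rows and 8-bit depth codes can be spent preferentially near the reference line, with coarser resolution pushed to greater heights and depths. The maximum height and depth values are likewise assumptions for the example.

```python
import numpy as np

MAX_HEIGHT_M = 10.0   # assumed raster height above the reference line
MAX_DEPTH_M = 40.0    # assumed furthest encoded lateral distance

def height_to_row(height_m, rows=80):
    """Non-linear height -> pixel row: more rows per metre close to the reference line."""
    frac = np.sqrt(np.clip(height_m, 0.0, MAX_HEIGHT_M) / MAX_HEIGHT_M)
    return np.round(frac * (rows - 1)).astype(int)

def distance_to_code(distance_m):
    """Non-linear lateral distance -> 8-bit depth code: finer steps for nearby surfaces."""
    frac = np.sqrt(np.clip(distance_m, 0.0, MAX_DEPTH_M) / MAX_DEPTH_M)
    return np.round(frac * 255).astype(int)

def code_to_distance(code):
    """Inverse mapping, used when geometry is reconstructed from the raster."""
    return (np.asarray(code, dtype=float) / 255.0) ** 2 * MAX_DEPTH_M
```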
Some further embodiments and features of the present invention will now be described.
As described with respect to fig. 14A, a depth map (e.g., a raster image) of positioning reference data may be provided by orthogonal projection onto a reference plane defined by a reference line associated with a road element. Fig. 16A illustrates the result of using this projection. The reference plane is perpendicular to the road reference line shown. Here, although the height information is independent of the distance from the reference line, which may provide some advantages, one limitation of orthogonal projection is that information about surfaces perpendicular to the road element may be lost. This is illustrated by the side depth map of fig. 16B obtained using orthogonal projection.
If non-orthogonal projection is used, for example at 45 degrees, this information about the surface perpendicular to the road element can be saved. This is shown by fig. 16C and 16D. Fig. 16C illustrates a 45 degree projection onto a reference plane, again defined perpendicular to the road reference line. As shown in fig. 16D, the side depth map obtained using this projection includes more information about those surfaces of the object that are perpendicular to the road elements. By using non-orthogonal projections, information about such vertical surfaces may be captured by depth map data, but need not include additional data channels, or otherwise increase storage capacity. It will be appreciated that in the case where this non-orthogonal projection is used for depth map data of the positioning reference data, then the corresponding projection should be used for real-time sensing data to be compared with.
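The exact projection geometry is not spelled out above beyond the 45-degree example; as one possible reading, the sketch below assumes that the projection ray is tilted 45 degrees within the horizontal plane, so that surfaces facing along the road direction still contribute to the depth map. The road-aligned point format matches the earlier rasterisation sketch.

```python
import numpy as np

def project_45deg(points):
    """Project road-aligned (s, h, d) points onto the reference plane along a
    ray tilted 45 degrees towards the direction of travel (assumed geometry)."""
    pts = np.asarray(points, dtype=float)
    s, h, d = pts[:, 0], pts[:, 1], pts[:, 2]
    s_proj = s - d                 # 45 degrees: one metre sideways shifts one metre along the road
    depth = d * np.sqrt(2.0)       # stored depth is the length of the tilted ray, not the lateral gap
    return np.stack([s_proj, h, depth], axis=1)
```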
Each pixel of the depth map data of the positioning reference data is based on a set of sensed measurements, e.g., laser measurements. These measurements indicate the distance of an object from the reference plane along the relevant predetermined direction at the location of the pixel. Owing to the way the data is compressed, a set of sensor measurements will map to a particular pixel. Rather than determining a depth value corresponding to the average of the different distances given by the set of sensor measurements associated with a pixel, it has been found that using the nearest of the distances given by the various sensor measurements as the pixel depth value may achieve greater accuracy. It is important that the depth value of a pixel accurately reflects the distance from the reference plane to the nearest surface of an object. This is of most concern when the position of the vehicle must be determined accurately in a manner that minimizes the risk of collision. If the average of a set of sensor measurements is used to provide the depth value of a pixel, there is a possibility that the depth value will indicate a greater distance to the object surface than is in fact the case at the pixel location. This is because one object may be located between the reference plane and another, more distant object; for example, a tree may be located in front of a building. In this case, because the sensor measurements map to an area that extends to one or more sides of the tree, some of the sensor measurements used to provide the pixel depth value will relate to the building and others to the tree. The applicant has appreciated that taking the closest of the various sensor measurements as the depth value associated with the pixel is the safest and most reliable approach, ensuring that the distance to the surface of the nearest object, in this case the tree, is reliably captured. Alternatively, a distribution of the sensor measurements for the pixel may be derived, and a closest mode may be employed to provide the pixel depth. This will provide a similarly reliable indication of the pixel depth to using the nearest distance.
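A minimal sketch of this aggregation step is given below, assuming the distances mapped to a pixel are already collected into an array; the bin width used for the mode variant is an assumption made for the example.

```python
import numpy as np

def pixel_depth(sensed_distances, use_mode=False, bin_m=0.1):
    """Collapse the sensed distances mapped to one pixel into a single depth value."""
    d = np.asarray(sensed_distances, dtype=float)
    if d.size == 0:
        return np.nan
    if not use_mode:
        return float(d.min())      # nearest surface, e.g. the tree rather than the building behind it
    # Alternative: histogram the distances and take the nearest of the most populated bins.
    hist, edges = np.histogram(d, bins=np.arange(d.min(), d.max() + 2 * bin_m, bin_m))
    k = int(np.flatnonzero(hist == hist.max())[0])
    return float(0.5 * (edges[k] + edges[k + 1]))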
As described above, the pixels of the depth map data of the positioning reference data include depth channels including data indicating depths from the locations of the pixels in the reference plane to the surface of the object. One or more additional pixel channels may be included in the positioning reference data. This will result in a multi-channel or layer depth map and thus a raster image. In some preferred embodiments, the second channel includes data indicative of a laser reflectivity of the object at the location of the pixel, and the third channel includes data indicative of a radar reflectivity of the object at the location of the pixel.
Each pixel has a position corresponding to a particular distance along the road reference line (x-direction) and a height above the road reference line (y-direction). The depth value associated with a pixel in the first channel c1 indicates the distance of the pixel in the reference plane to the surface of the nearest object (preferably corresponding to the nearest distance of a set of sensing measurements used to obtain the pixel depth value) along a predetermined direction (which may be orthogonal or non-orthogonal to the reference plane depending on the projection used). Each pixel may have a laser reflectivity value in the second channel c2 indicating the average local reflectivity of the laser spot near the distance c1 from the reference plane. In the third channel c3, the pixel may have a radar reflectivity value indicating an average local reflectivity of the radar point at a distance of about c1 from the reference plane. This is shown, for example, in fig. 17. The multi-channel format allows for a large amount of data to be contained in the depth map. Further possible channels that may be used are object thickness (which may be used to recover information about surfaces perpendicular to the road trajectory using orthogonal projections), reflection point density and color and/or texture (obtained, for example, from a camera used to provide reference scan data).
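One way to hold such a multi-channel raster in memory is sketched below; the structured dtype, the 8-bit ranges and the use of 255 as a "no data" code are assumptions for the example rather than a prescribed storage format.

```python
import numpy as np

# Illustrative per-pixel record for a three-channel depth map.
PIXEL_DTYPE = np.dtype([
    ("depth",      np.uint8),   # c1: quantised distance to the nearest object surface
    ("laser_refl", np.uint8),   # c2: mean laser reflectivity of points near that distance
    ("radar_refl", np.uint8),   # c3: mean radar reflectivity of points near that distance
])

def empty_multichannel_map(rows, cols):
    """Allocate a multi-channel raster, with 255 marking 'no data / beyond range'."""
    raster = np.zeros((rows, cols), dtype=PIXEL_DTYPE)
    raster["depth"] = 255
    return raster
```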
Although the invention has been described with respect to embodiments in which the depth map of the positioning reference data relates to the environment on the lateral side of the road, it has been recognized that the use of differently configured depth maps may be useful for assisting in positioning vehicles at intersections. These additional embodiments may be used in conjunction with side depth maps of areas remote from the intersection.
In some further embodiments, the reference line is defined as circular. In other words, the reference line is non-linear. The circle is defined by a given radius centered at the center of the digital map intersection. The radius of the circle may be selected depending on one side of the intersection. The reference plane may be defined as a 2-dimensional surface perpendicular to this reference line. A (circular) depth map may then be defined, wherein each pixel includes a channel indicating a distance along a predetermined direction from the position of the pixel in the reference plane to the surface of the object (i.e., the depth value) in the same manner as when using a linear reference line. The projections onto the reference plane may similarly be orthogonal or non-orthogonal, and each pixel may have multiple channels. The depth value of a given pixel is preferably based on the nearest sensing distance to the object.
Fig. 18 indicates circular and linear reference lines that may be used to construct depth maps at and away from the intersection, respectively. FIG. 19A illustrates the manner in which objects may be projected onto a circular depth map at different angular positions. Fig. 19B indicates that each of the objects is projected onto a reference plane using orthogonal projection to provide a depth map.
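By way of illustration, the following sketch builds such a circular depth map by binning points by their bearing around the junction centre and keeping, per pixel, the nearest surface at or beyond the circular reference line; the bin counts and the input point format are assumptions made for the example.

```python
import numpy as np

def circular_depth_map(points_xyh, centre_xy, radius_m,
                       angle_bins=720, height_bins=50, dy=0.2):
    """points_xyh: (N, 3) array of (x, y, height) points around the junction."""
    pts = np.asarray(points_xyh, dtype=float)
    dx = pts[:, 0] - centre_xy[0]
    dyy = pts[:, 1] - centre_xy[1]
    bearing = np.arctan2(dyy, dx) % (2.0 * np.pi)
    radial_depth = np.hypot(dx, dyy) - radius_m          # distance beyond the circular reference line
    cols = (bearing / (2.0 * np.pi) * angle_bins).astype(int) % angle_bins
    rows = np.clip((pts[:, 2] / dy).astype(int), 0, height_bins - 1)
    depth_map = np.full((height_bins, angle_bins), np.inf)
    for r, c, d in zip(rows, cols, radial_depth):
        if 0.0 <= d < depth_map[r, c]:                   # keep the nearest surface per pixel
            depth_map[r, c] = d
    return depth_map
```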
The manner in which a depth map (whether circular or otherwise) of positioning reference data can be compared to real-time sensor data obtained from a vehicle in order to determine a longitudinal alignment offset between the reference and real-time sensed data has been described. In some further embodiments, a lateral alignment offset is also obtained. This involves a series of steps that can be performed in the image domain.
Referring to an example using a side depth map, in a first step of the process, a longitudinal alignment offset between a reference-based side depth map and a real-time sensor data-based side depth map is determined in the manner previously described. The depth maps are shifted relative to each other until they are longitudinally aligned. Next, the reference depth map, i.e., the raster image, is cropped to correspond in size to the depth map based on the real-time sensor data. Next, the depth values, i.e. the values of the depth channels, of pixels at corresponding positions in the aligned reference-based side depth map and the real-time sensor-based side depth map are compared. The difference in depth values for each corresponding pixel indicates the lateral offset of the pixel. This can be evaluated by considering the color difference of the pixels, where the depth value of each pixel is represented by a color. The most common lateral offset (the mode of the differences) so determined between corresponding pixel pairs is considered to correspond to the lateral alignment offset of the two depth maps. The most common lateral offset may be obtained using a histogram of depth differences between pixels. Once the lateral offset is determined, it can be used to correct the perceived lateral position of the vehicle on the road.
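A compact sketch of this step is given below, assuming both depth maps have already been longitudinally aligned, cropped to the same size and converted from 8-bit codes to metres; the histogram bin width is an assumption made for the example.

```python
import numpy as np

def lateral_offset(reference_m, live_m, bin_m=0.1):
    """Most common per-pixel depth difference between two aligned depth maps, in metres."""
    diff = (np.asarray(reference_m, dtype=float) - np.asarray(live_m, dtype=float)).ravel()
    diff = diff[np.isfinite(diff)]
    # Histogram of depth differences; the peak (mode) is taken as the lateral alignment offset.
    bins = np.arange(diff.min(), diff.max() + 2 * bin_m, bin_m)
    hist, edges = np.histogram(diff, bins=bins)
    k = int(np.argmax(hist))
    return float(0.5 * (edges[k] + edges[k + 1]))
```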
Fig. 20A illustrates a reference depth map (i.e., image) that may be compared to determine a lateral offset alignment of the depth map and a corresponding depth map or image based on real-time sensor data from the vehicle. As illustrated in fig. 20B, the images are first shifted relative to each other to longitudinally align them. Next, after clipping the reference image, a lateral alignment offset between the depth maps is determined using a histogram of differences in pixel depth values of corresponding pixels in the two depth maps-fig. 20C. Fig. 20D illustrates how this can achieve a longitudinal position, and then how the lateral position of the vehicle on the road is corrected.
Once the lateral alignment offset between the reference-based depth map and the real-time data-based depth map has been obtained, the heading of the vehicle may also be corrected. It has been found that in the event that there is an offset between the actual direction of travel of the vehicle and the perceived direction of travel, this will result in a non-constant lateral alignment offset being determined between corresponding pixels in the reference-based depth map and the real-time sensing data-based depth map that varies as a function of longitudinal distance along the depth map.
Fig. 21A illustrates a set of vertical slices through corresponding portions of a reference depth map image (up) and a real-time sensor-based depth map image (down). The average difference (i.e., lateral alignment offset) of the pixel depth values of the corresponding pixels in each slice is plotted against the longitudinal distance (x-axis) along the map/image (y-axis). This figure is shown in fig. 21B. A function describing the relationship between the average pixel depth distance and the longitudinal distance along the depth map may then be derived by suitable regression analysis. The gradient of this function indicates the forward direction offset of the vehicle.
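This regression can be sketched as below, again assuming longitudinally aligned, metre-valued depth maps; the slice width is an assumption, and the returned gradient is in metres of lateral offset per pixel of longitudinal distance, so it must be rescaled by the longitudinal pixel size before being interpreted as a heading angle.

```python
import numpy as np

def heading_gradient(reference_m, live_m, slice_width=10):
    """Fit a line through the mean depth difference of successive longitudinal slices."""
    n_slices = reference_m.shape[1] // slice_width
    xs, ys = [], []
    for i in range(n_slices):
        cols = slice(i * slice_width, (i + 1) * slice_width)
        diff = np.asarray(reference_m, float)[:, cols] - np.asarray(live_m, float)[:, cols]
        xs.append((i + 0.5) * slice_width)   # slice centre along the road, in pixels
        ys.append(np.nanmean(diff))          # mean lateral offset of this slice
    slope, _intercept = np.polyfit(xs, ys, 1)
    return slope                             # gradient of lateral offset versus longitudinal distance
```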
The depth map used in embodiments of the present invention may be transformed so as to be always relative to a straight line reference line, i.e. so as to be a linear reference image, for example as described in WO 2009/045096 A1. This has the advantage as shown in fig. 22. At the left side of fig. 22 is an image of a curved road. In order to mark the center line of a curved road, several marks 1102 must be placed. At the right hand side of fig. 22, a corresponding linear reference image is shown corresponding to a curved road in the left side of the figure. To obtain a linear reference image, the centerline of the curved road is mapped to a straight line reference line of the linear reference image. In view of this transformation, the reference line can now be defined simply by the two endpoints 1104 and 1106.
When on a perfectly straight road, the calculated shift from the comparison of the reference depth map with the real-time depth map can be directly applied, which is not possible on a curved road due to the non-linear nature of the linearization process used to generate the linear reference image. 23A and 23B show computationally efficient methods for establishing the position of a vehicle in a nonlinear environment through a series of incremental independent linear update steps. As shown in fig. 23A, the method involves applying a longitudinal correction, then a lateral correction, and then a heading correction in a series of incrementally independent linear update steps. In particular, in step (1), a longitudinal offset is determined using the vehicle sensor data and a reference depth map based on a current considered position of the vehicle relative to the digital map (e.g., obtained using GPS). The longitudinal offset is then applied to adjust the perceived position of the vehicle relative to the digital map, and the reference depth map is recalculated based on the adjusted position. Next, in step (2), the lateral offset is determined using the vehicle sensor data and the recalculated reference depth map. The lateral offset is then applied to further adjust the perceived position of the vehicle relative to the digital map, and the reference depth map is recalculated again based on the adjusted position. Finally, at step (3), the heading offset or skew is determined using the vehicle sensor data and the recalculated reference depth map. The heading offset is then applied to further adjust the perceived position of the vehicle relative to the digital map, and the reference depth map is recalculated again based on the adjusted position. These steps are repeated as many times as necessary for there to be a substantially zero longitudinal offset, lateral offset, and forward direction offset between the real-time depth map and the reference depth map. FIG. 23B shows the continuous and repeated application of longitudinal, lateral, and forward direction offsets to a point cloud generated from vehicle sensor data until that point cloud is substantially aligned with a point cloud generated from a reference depth map.
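The update loop itself can be outlined as follows. The helper functions render_reference_depth_map and build_live_depth_map, and the pose-adjustment methods, are hypothetical placeholders standing in for whatever map-rendering and sensor-processing machinery is available; the offset routines refer to the earlier sketches, and the iteration count and tolerance are arbitrary.

```python
def localise(pose, sensor_data, digital_map, max_iter=5, tol=0.05):
    """Refine a pose by repeated, independent longitudinal/lateral/heading updates (sketch)."""
    for _ in range(max_iter):
        live = build_live_depth_map(sensor_data, pose)            # hypothetical helper
        ref = render_reference_depth_map(digital_map, pose)       # hypothetical helper
        d_long, _ = longitudinal_offset(ref, live)                # step (1): along the road
        pose = pose.shifted_along_road(d_long)                    # hypothetical pose update
        ref = render_reference_depth_map(digital_map, pose)
        d_lat = lateral_offset(ref, live)                         # step (2): across the road
        pose = pose.shifted_across_road(d_lat)
        ref = render_reference_depth_map(digital_map, pose)
        d_head = heading_gradient(ref, live)                      # step (3): heading skew
        pose = pose.rotated_by(d_head)
        if max(abs(d_long), abs(d_lat), abs(d_head)) < tol:       # offsets essentially zero
            break
    return pose
```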
A series of exemplary use cases of positioning reference data are also depicted.
For example, in some embodiments, rather than using a depth map of positioning reference data for comparison purposes with a depth map based on real-time sensor data, a depth map of positioning reference data is used to generate a reference point cloud, including a set of data points in a three-dimensional coordinate system, each point representing a surface of an object in the environment. This reference point cloud may be compared to a corresponding three-dimensional point cloud based on real-time sensor data obtained by the vehicle sensors. The comparison may be used to determine an alignment offset between the depth maps and thus adjust the determined position of the vehicle.
A reference depth map may be used to obtain a reference 3D point cloud, which may be compared to a corresponding point cloud based on real-time sensor data of the vehicle (regardless of which type of sensor that vehicle has). While the reference data may be based on sensor data obtained from various types of sensors, including laser scanners, radar scanners, and cameras, the vehicle may not have a corresponding set of sensors. The 3D reference point cloud may be constructed from a reference depth map that may be compared to a 3D point cloud obtained based on real-time sensor data of a particular type available for the vehicle.
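A simple reconstruction of a reference point cloud from a side depth map is sketched below, re-using the illustrative resolutions and inverse depth mapping from the earlier examples and assuming, for brevity, a fixed vertical resolution; pixels carrying the assumed "no data" code are skipped.

```python
import numpy as np

def depth_map_to_point_cloud(depth_codes, dx=0.5, dy=0.2, no_data=255):
    """Expand a side depth map back into road-aligned (s, h, d) points."""
    rows, cols = depth_codes.shape
    points = []
    for r in range(rows):
        for c in range(cols):
            code = depth_codes[r, c]
            if code == no_data:
                continue                         # nothing was sensed for this pixel
            s = (c + 0.5) * dx                   # longitudinal position of the pixel centre
            h = (r + 0.5) * dy                   # height above the reference line
            d = code_to_distance(code)           # inverse of the illustrative depth encoding
            points.append((s, h, d))
    return np.asarray(points)
```

The resulting (s, h, d) points can then be transformed into the same frame as the point cloud derived from the vehicle's real-time sensor data before the two clouds are compared.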
For example, where the depth map of the reference positioning data includes channels indicative of radar reflectivity, this may be considered when generating a reference point cloud, which may be compared to a 3D point cloud obtained using real-time sensor data of a vehicle having only radar sensors. The radar reflectivity data associated with the pixels helps to identify those data points that should be included in the 3D reference point cloud, i.e., the data points represent the surface of the object that will be desired to be detected by the vehicle radar sensor.
In another example, the vehicle may have only one or more cameras for providing real-time sensor data. In this case, the data from the laser reflectivity channels of the reference depth map may be used to construct a 3D reference point cloud that includes data points that relate only to surfaces that may be expected to be detected by the vehicle's camera in the current state. For example, on a dark day, only relatively reflective objects should be included.
A 3D point cloud based on real-time sensing data of the vehicle may be obtained as needed. In the case where the vehicle contains only a single camera as a sensor, a "structure from motion" technique may be used, in which a series of images from the camera is used to reconstruct a 3D scene from which a 3D point cloud may be obtained. In the case where the vehicle includes a stereo camera, the 3D scene may be directly generated and used to provide a 3-dimensional point cloud. This may be achieved using a disparity-based 3D model.
In yet other embodiments, rather than comparing the reference point cloud to the real-time sensor data point cloud, the reference point cloud is used to reconstruct an image that is expected to be seen by one or more cameras of the vehicle. The images may then be compared and used to determine an alignment offset between the images, which in turn may be used to correct the perceived location of the vehicle.
In these embodiments, additional channels of the reference depth map may be used, as described above, to reconstruct an image based only on those points in the 3-dimensional reference point cloud that are expected to be detected by the vehicle's camera. For example, in darkness, a laser reflectivity channel may be used to select those points for inclusion in the 3-dimensional point cloud that correspond to object surfaces that can be detected in darkness by a camera. It has been found that the use of a non-orthogonal projection onto the reference plane when determining the reference depth map is particularly useful in this context, since it preserves more information about object surfaces that remain detectable in the dark.
FIG. 24 depicts an exemplary system in which data collected by one or more vehicle sensors (lasers, cameras, and radar) is used to generate an "actual footprint" of the environment as seen by the vehicle, according to an embodiment of the invention. The "actual footprint" is compared (i.e., correlated) with a corresponding "reference footprint" determined from reference data associated with the digital map, wherein the reference data includes at least one distance channel, and may include laser reflectivity channels and/or radar reflectivity channels, as discussed above. By this correlation, the position of the vehicle can be accurately determined with respect to the digital map.
In a first example use case, as depicted in fig. 25A, an actual footprint is determined from a laser-based distance sensor (e.g., a LIDAR sensor) in the vehicle and correlated to a reference footprint determined from data in a distance channel of reference data in order to achieve a sustained positioning of the vehicle. Fig. 25B shows a first method in which a laser point cloud as determined by a laser-based distance sensor is converted into a depth map of the same format as reference data, and the two depth map images are compared. A second alternative method is shown in fig. 25C, in which a laser point cloud is reconstructed from reference data, and this reconstructed point cloud is compared to the laser point cloud as seen by the vehicle.
In a second example use case, as depicted in fig. 26A, the actual footprint is determined from the camera in the vehicle and correlated with the reference footprint determined from the data in the range channel of the reference data in order to achieve a continuous positioning of the vehicle, albeit only during the day. In other words, in this example use case, a reference depth map is used to construct a 3D point cloud or view, which is then compared to 3D scenes or views obtained from multiple vehicle cameras or a single vehicle camera. A first method is shown in fig. 26B, in which a disparity-based 3D model is built using a stereo vehicle camera, which is then used to build a 3D point cloud for correlation with the 3D point cloud built from the reference depth map. A second method is shown in fig. 26C, in which a 3D scene is constructed using a sequence of vehicle camera images, then a 3D point cloud is constructed using the 3D scene to correlate with the 3D point cloud constructed from the reference depth map. Finally, a third method is shown in fig. 26D, in which the vehicle camera image is compared to a view created from a 3D point cloud constructed from a reference depth map.
In a third example use case, as shown in fig. 27A, is a modification to the second example use case, in which laser reflectivity data of reference data located in channels of a depth map may be used to construct a 3D point cloud or view, which may be compared to a 3D point cloud or view based on images captured by one or more cameras. A first method is shown in fig. 27B, where a 3D scene is constructed using a sequence of vehicle camera images, then a 3D point cloud is constructed using the 3D scene to correlate with the 3D point cloud constructed from the reference depth map (using both distance and laser reflectivity channels). A second method is shown in fig. 27C, in which the vehicle camera image is compared to a view created from a 3D point cloud (reusing both distance and laser reflectivity channels) constructed from a reference depth map.
In a fourth example use case, as depicted in fig. 28A, an actual footprint is determined from radar-based distance sensors in the vehicle and correlated with a reference footprint determined from the distance of the reference data and the data in the radar reflectivity channels in order to achieve sparse positioning of the vehicle. A first method is shown in fig. 28B, where reference data is used to reconstruct a 3D scene and data in the radar reflectivity channels is used to leave only radar reflection points. This 3D scene is then correlated with a radar point cloud as seen by the vehicle.
Of course, it should be understood that the various use cases may be used together, i.e., fused, to allow for more accurate positioning of the vehicle relative to the digital map.
A method of correlating vehicle sensor data with reference data in order to determine the position of a vehicle, for example as discussed above, will now be described with reference to FIGS. 29 to 32B. FIG. 29 depicts the various coordinate systems used in the method: a local coordinate system (Local CS), a car frame coordinate system (CF CS), and a linear reference coordinate system (LRCS) along the vehicle trajectory. Another coordinate system, although not depicted, is the World Geodetic System (WGS), in which a location is given as a latitude, longitude coordinate pair, as is known in the art. FIG. 30 shows the general method, and FIG. 31 shows the detailed steps performed to determine the laser point cloud. FIG. 32A shows a first exemplary method of performing the correlation step of FIG. 30, in which the position of the vehicle is corrected by image correlation between, for example, a depth map raster image of the reference data and a corresponding depth map raster image created from vehicle sensor data. FIG. 32B shows a second exemplary method of performing the correlation step of FIG. 30, in which the position of the vehicle is corrected by a 3D correlation between a 3D scene constructed from the reference data and a 3D scene captured by the vehicle sensors.
Any method according to the invention may be implemented at least in part using software (e.g., a computer program). Thus, the invention also extends to a computer program comprising computer readable instructions executable to perform or cause a navigation device to perform a method according to any aspect or embodiment of the invention. Accordingly, the disclosure contemplates a computer program product that, when executed by one or more processors, causes the one or more processors to generate suitable images (or other graphical information) for display on a display screen. The invention correspondingly extends to a computer software carrier comprising such software which, when used to operate a system or apparatus comprising data processing means, together with said data processing means, causes said apparatus or system to perform the steps of the method of the invention. Such a computer software carrier may be a non-transitory physical storage medium such as a ROM chip, CD ROM or diskette, or may be a signal such as an electronic signal via wires, an optical signal or a radio signal (e.g., to a satellite) or the like. The present invention provides a machine-readable medium containing instructions which, when read by a machine, cause the machine to operate in accordance with the methods of any aspect or embodiment of the present invention.
Where not explicitly stated, it is to be understood that the invention may include in any aspect thereof any or all of the features described in relation to other aspects or embodiments of the invention, provided that they are not mutually exclusive. In particular, while various embodiments of operations have been described which may be performed in the method and by the apparatus, it should be understood that any or more or all of these operations may be performed in the method and by the apparatus in any combination, as desired.
The following are some examples of the present disclosure.
According to one example of the present disclosure, there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising:
For at least one navigable element represented by the digital map, obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object of the environment surrounding the at least one navigable element of the navigable network;
generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane being defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment along a predetermined direction, wherein the distance to the surface of the object represented by the depth channel of each pixel is determined based on a set of multiple sensed data points, each sensed data point indicating a sensed distance from the location of the pixel to the surface of the object along the predetermined direction, and wherein the distance to the surface of the object represented by the depth channel of the pixel is a closest distance from among the set of multiple sensed data points, or a closest mode of a distribution of the set of multiple sensed data points; and
The generated positioning reference data is associated with the digital map data.
At least some of the sensed data points of the set of multiple sensed data points of a particular pixel may be related to surfaces of different objects.
The different objects may be positioned at different depths relative to the reference plane.
The distance to the surface of the object represented by the depth channel of a particular pixel may not be based on an average of the set of multiple sensed data points of the particular pixel.
According to another example of the present disclosure, there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising:
For at least one navigable element represented by the digital map, obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object of the environment surrounding the at least one navigable element of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane defined by a longitudinal reference line oriented parallel to the navigable element and perpendicular to a surface of the navigable element, each pixel in the at least one depth map being associated with a position in the reference plane associated with the navigable element and the pixel including a depth channel representing a lateral distance in a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment, wherein the at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution, and
The generated positioning reference data is associated with the digital map data.
The variable vertical and/or depth resolution may be non-linear.
Portions of the depth map closer to the ground may be shown at a higher resolution than portions of the depth map further from the ground.
Portions of the depth map that are closer to the navigable element may be shown at a higher resolution than portions of the depth map that are farther from the navigable element.
According to another example of the present disclosure, there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising:
For at least one navigable element represented by the digital map, obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object of the environment surrounding the at least one navigable element of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane defined by a reference line parallel to the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, wherein the predetermined direction is not perpendicular to the reference plane, and
The generated positioning reference data is associated with the digital map data indicative of the navigable element.
The projection of the environment onto the reference plane may be a non-orthogonal projection.
The predetermined direction may be along a direction that is substantially 45 degrees with respect to the reference plane.
The navigable elements may include roads and the navigable network includes a road network.
The positioning reference data may be generated for a plurality of navigable elements of the navigable network represented by the digital map.
The reference line associated with the navigable element may be defined by a point or points associated with the navigable element.
The reference line associated with the navigable element may be an edge, boundary, lane, or centerline of the navigable element.
The positioning reference data may provide a representation of the environment on one or more sides of the navigable element.
The depth map may take the form of a raster image.
Each pixel of the depth map may be associated with a particular longitudinal position and elevation in the depth map.
Associating the generated positioning reference data with the digital map data may include storing the positioning reference data in association with the navigable element to which it relates.
The positioning reference data may include representations of the environment on a left side of the navigable element and a right side of the navigable element.
The positioning reference data for each side of the navigable element may be stored in a combined dataset.
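One possible way to hold the left-side and right-side depth maps in a combined dataset, with each pixel addressed by elevation row and longitudinal column, is sketched below. The layout, the 8-bit quantisation step (0.2 m per level, so roughly 51 m maximum range) and all names are assumptions introduced purely for illustration.

```python
import numpy as np

n_rows, n_cols = 64, 512                                   # elevation x longitudinal position
combined = np.zeros((n_rows, n_cols, 2), dtype=np.uint8)   # channel 0: left side, 1: right side

def set_depth(raster, row, col, side, distance_m, step_m=0.2):
    """Quantise a lateral distance into an 8-bit depth value (255 = furthest bin)."""
    channel = 0 if side == "left" else 1
    raster[row, col, channel] = min(int(distance_m / step_m), 255)

set_depth(combined, row=10, col=200, side="left", distance_m=6.4)
print(combined[10, 200])   # -> [32  0]
```

Storing both sides against the same longitudinal axis keeps the data for one stretch of road together, which simplifies retrieving it for a given position on the element.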
According to another example of the present disclosure, there is provided a method of generating positioning reference data associated with a digital map representing elements of a navigable network, the positioning reference data providing a compressed representation of the environment surrounding at least one junction of the navigable network represented by the digital map, the method comprising:
Obtaining, for at least one junction represented by the digital map, a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment surrounding the at least one junction of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the junction projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, and
The generated positioning reference data is associated with digital map data indicative of the junction.
The depth map may extend about 360 degrees to provide a 360 degree representation of the environment around the junction.
The depth map may extend less than about 360 degrees.
The reference point may be located at the centre of the junction.
The reference point may be associated with a node of the digital map representing the junction or a navigable element at the junction.
The junction may be an intersection.
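A hedged sketch of how such a 360-degree junction depth map could be rasterised is given below: columns index the bearing around the reference point, rows index height, and the depth channel stores how far beyond a reference circle of radius R the nearest surface lies. The radius, the resolutions and the choice of centre point are assumptions for illustration only.

```python
import numpy as np

def junction_depth_map(points_xyz, centre, radius=10.0, ang_res_deg=1.0,
                       height_edges=np.arange(0.0, 8.5, 0.5), max_depth=50.0):
    """Build a circular depth map around a junction reference point."""
    n_cols = int(360 / ang_res_deg)          # one column per bearing bin
    n_rows = len(height_edges) - 1           # one row per height bin
    depth = np.full((n_rows, n_cols), max_depth, dtype=np.float32)
    cx, cy, cz = centre
    for x, y, z in points_xyz:
        dx, dy = x - cx, y - cy
        bearing = (np.degrees(np.arctan2(dy, dx)) + 360.0) % 360.0
        radial = np.hypot(dx, dy) - radius   # distance beyond the reference circle
        col = int(bearing // ang_res_deg)
        row = np.searchsorted(height_edges, z - cz, side="right") - 1
        if radial >= 0.0 and 0 <= row < n_rows:
            depth[row, col] = min(depth[row, col], radial)
    return depth
```

Restricting the map to a sector rather than the full circle simply means keeping only the columns whose bearings fall inside that sector.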
The set of data points may be obtained using at least one rangefinder sensor on a mobile mapping vehicle that has previously traveled along the at least one navigable element.
The at least one rangefinder sensor may include one or more of a laser scanner, a radar scanner, and a pair of stereo cameras.
According to another example of the present disclosure, there is provided a method of determining a position of a vehicle relative to a digital map, the digital map including data representative of a junction through which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle in the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction point, each pixel in the at least one depth map being associated with a position in the reference plane associated with the junction point through which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of the environment around the vehicle, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment, determined using the at least one sensor, along the predetermined direction;
Calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.
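The correlation and adjustment steps can be pictured with the following sketch: the vehicle's live depth map is slid along the longitudinal axis of the reference depth map, each shift is scored by correlation, and the best-scoring shift is converted into a longitudinal correction of the assumed position. This is an illustrative sketch, not the production matcher; the pixel resolution, the search window and the sign convention are assumptions.

```python
import numpy as np

LON_RES = 0.5  # metres per longitudinal pixel (assumed)

def best_longitudinal_offset(reference, live, max_shift_px=40):
    """Return the pixel shift of `live` against `reference` with the highest correlation."""
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift_px, max_shift_px + 1):
        ref_slice = reference[:, max(0, shift): reference.shape[1] + min(0, shift)]
        live_slice = live[:, max(0, -shift): live.shape[1] + min(0, -shift)]
        n = min(ref_slice.shape[1], live_slice.shape[1])
        a, b = ref_slice[:, :n].ravel(), live_slice[:, :n].ravel()
        if n == 0 or a.std() == 0 or b.std() == 0:
            continue
        score = np.corrcoef(a, b)[0, 1]      # normalised cross-correlation at this shift
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift, best_score

def corrected_position(assumed_position_m, reference, live):
    """Apply the alignment offset to the assumed longitudinal position."""
    shift_px, _ = best_longitudinal_offset(reference, live)
    return assumed_position_m + shift_px * LON_RES   # sign depends on how the maps are indexed
```

A lateral correction could be obtained in a similar way by comparing the depth values themselves once the longitudinal alignment is known.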
According to another example of the present disclosure, there is provided a computer program product comprising computer readable instructions executable to cause a system to perform a method as described above, optionally stored on a non-transitory computer readable medium.
According to another example of the present disclosure, there is provided a system for generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the system comprising processing circuitry configured to, for the at least one navigable element represented by the digital map:
Obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment surrounding the at least one navigable element of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane being defined by a reference line associated with the navigable element, each pixel of the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment in a predetermined direction, wherein the distance to the surface of the object represented by the depth channel of each pixel is determined based on a set of multiple sensed data points, each sensed data point indicating a sensed distance from the location of the pixel to the surface of the object in the predetermined direction, and wherein the distance to the surface of the object represented by the depth channel of the pixel is a closest distance, or a most frequently occurring (mode) distance, from among the sensed distances of the set of sensed data points, and
The generated positioning reference data is associated with the digital map data.
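The per-pixel depth selection just described, where the stored value is the closest or the most commonly occurring (mode) of several sensed distances rather than a simple average, can be illustrated as follows; the bin size used to compute the mode and the function name are assumptions.

```python
import numpy as np

def pixel_depth(sensed_distances, method="mode", bin_size=0.25):
    """Collapse several sensed ranges for one pixel into a single depth value."""
    d = np.asarray(sensed_distances, dtype=float)
    if method == "closest":
        return float(d.min())
    # Mode of quantised distances: pick the most heavily populated distance bin.
    bins = np.round(d / bin_size).astype(int)
    values, counts = np.unique(bins, return_counts=True)
    return float(values[np.argmax(counts)] * bin_size)

print(pixel_depth([5.1, 5.2, 5.0, 12.3], method="closest"))  # -> 5.0
print(pixel_depth([5.1, 5.2, 5.0, 12.3], method="mode"))     # -> 5.0 (three of the four ranges lie near 5 m)
```

Either choice avoids the situation where ranges to a near object and a far object are averaged into a depth at which no surface actually exists.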
According to another example of the present disclosure, there is provided a system for generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the system comprising processing circuitry configured to, for the at least one navigable element represented by the digital map:
Obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment surrounding the at least one navigable element of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane being defined by a longitudinal reference line oriented parallel to the navigable element and being oriented perpendicular to a surface of the navigable element, each pixel of the at least one depth map being associated with a position in the reference plane associated with the navigable element and the pixel including a depth channel representing a lateral distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment, wherein the at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution, and
The generated positioning reference data is associated with the digital map data.
According to another example of the present disclosure, there is provided a system for generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the system comprising processing circuitry configured to, for the at least one navigable element represented by the digital map:
Obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment surrounding the at least one navigable element of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane defined by a reference line parallel to the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, wherein the predetermined direction is not perpendicular to the reference plane, and
The generated positioning reference data is associated with digital map data indicative of the navigable elements.
According to another example of the present disclosure, there is provided a system for generating positioning reference data associated with a digital map representing elements of a navigable network, the positioning reference data providing a compressed representation of an environment surrounding at least one junction of the navigable network represented by the digital map, the system comprising processing circuitry configured to, for at least one junction represented by the digital map:
Obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment around the at least one junction of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the junction projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, and
The generated positioning reference data is associated with digital map data indicative of the junction.
According to yet another example of the present disclosure, there is provided a system for determining a position of a vehicle relative to a digital map, the digital map including data representative of a junction through which the vehicle travels, the system comprising processing circuitry configured to:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle in the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction point, each pixel in the at least one depth map being associated with a position in the reference plane associated with the junction point through which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of the environment around the vehicle, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment, determined using the at least one sensor, along the predetermined direction;
Calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.


Applications Claiming Priority (8)

Application Number | Priority Date | Filing Date | Title
US201562200611P | 2015-08-03 | 2015-08-03 |
US201562200613P | 2015-08-03 | 2015-08-03 |
US62/200,611 | 2015-08-03 | |
US62/200,613 | 2015-08-03 | |
US201562218538P | 2015-09-14 | 2015-09-14 |
US62/218,538 | 2015-09-14 | |
CN201680044930.4A (CN107850450B) | 2015-08-03 | 2016-08-03 | Method and system for generating and using positioning reference data
PCT/EP2016/068593 (WO2017021473A1) | 2015-08-03 | 2016-08-03 | Methods and systems for generating and using localisation reference data

Related Parent Applications (1)

Application Number | Relation | Title | Priority Date | Filing Date
CN201680044930.4A (CN107850450B) | Division | Method and system for generating and using positioning reference data | 2015-08-03 | 2016-08-03

Publications (2)

Publication Number | Publication Date
CN114111812A | 2022-03-01
CN114111812B | 2025-06-13

Family

ID=56567615

Family Applications (5)

Application Number | Priority Date | Filing Date | Title
CN201680044918.3A (CN107850445B, Active) | 2015-08-03 | 2016-08-03 | Method and system for generating and using positioning reference data
CN201680044925.3A (CN107850449B, Active) | 2015-08-03 | 2016-08-03 | Method and system for generating and using positioning reference data
CN201680044924.9A (CN107850448B, Active) | 2015-08-03 | 2016-08-03 | Method and system for generating and using positioning reference data
CN202111620810.3A (CN114111812B, Active) | 2015-08-03 | 2016-08-03 | Method and system for generating and using positioning reference data
CN201680044930.4A (CN107850450B, Active) | 2015-08-03 | 2016-08-03 | Method and system for generating and using positioning reference data

Family Applications Before (3)

Application Number | Priority Date | Filing Date | Title
CN201680044918.3A (CN107850445B, Active) | 2015-08-03 | 2016-08-03 | Method and system for generating and using positioning reference data
CN201680044925.3A (CN107850449B, Active) | 2015-08-03 | 2016-08-03 | Method and system for generating and using positioning reference data
CN201680044924.9A (CN107850448B, Active) | 2015-08-03 | 2016-08-03 | Method and system for generating and using positioning reference data

Family Applications After (1)

Application Number | Priority Date | Filing Date | Title
CN201680044930.4A (CN107850450B, Active) | 2015-08-03 | 2016-08-03 | Method and system for generating and using positioning reference data

Country Status (6)

Country | Link
US (5) | US11137255B2
EP (7) | EP3332219B1
JP (5) | JP6899370B2
KR (5) | KR102630740B1
CN (5) | CN107850445B
WO (5) | WO2017021475A1



Also Published As

Publication Number | Publication Date
CN107850445B | 2021-08-27
WO2017021781A1 | 2017-02-09
EP3332219A2 | 2018-06-13
EP3995783A1 | 2022-05-11
EP3332217A1 | 2018-06-13
EP3998455A1 | 2022-05-18
JP2018532979A | 2018-11-08
EP3332216A1 | 2018-06-13
JP6899369B2 | 2021-07-07
KR102653953B1 | 2024-04-02
CN107850449A | 2018-03-27
KR20180038475A | 2018-04-16
EP3332217B1 | 2021-11-10
US20180209796A1 | 2018-07-26
US10948302B2 | 2021-03-16
US11137255B2 | 2021-10-05
CN107850450B | 2022-01-07
US11287264B2 | 2022-03-29
CN107850448B | 2021-11-16
KR102650541B1 | 2024-03-26
WO2017021473A1 | 2017-02-09
JP2018533721A | 2018-11-15
US11629962B2 | 2023-04-18
JP2018529938A | 2018-10-11
EP3332216B1 | 2020-07-22
US20180202814A1 | 2018-07-19
KR102826498B1 | 2025-06-30
JP7066607B2 | 2022-05-13
WO2017021475A1 | 2017-02-09
JP7398506B2 | 2023-12-14
CN107850445A | 2018-03-27
JP6899368B2 | 2021-07-07
EP3332219B1 | 2021-11-03
KR20180037241A | 2018-04-11
WO2017021778A2 | 2017-02-09
WO2017021778A3 | 2017-04-06
EP3332218B1 | 2021-11-03
KR20180037243A | 2018-04-11
EP3332218A1 | 2018-06-13
CN107850449B | 2021-09-03
JP2018532099A | 2018-11-01
EP3998456A1 | 2022-05-18
WO2017021474A1 | 2017-02-09
JP2022110001A | 2022-07-28
US20190003838A1 | 2019-01-03
US20180364349A1 | 2018-12-20
CN107850450A | 2018-03-27
KR102698523B1 | 2024-08-23
CN114111812A | 2022-03-01
JP6899370B2 | 2021-07-07
KR20240040132A | 2024-03-27
CN107850448A | 2018-03-27
KR102630740B1 | 2024-01-29
US20220214174A1 | 2022-07-07
KR20180037242A | 2018-04-11
US11274928B2 | 2022-03-15

Similar Documents

Publication | Publication Date | Title
US11629962B2 | 2023-04-18 | Methods and systems for generating and using localization reference data
CN109791052B | | Method and system for classifying data points of a point cloud using a digital map

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
