This application is a divisional application. The parent application is the invention patent application filed on 3 August 2016, with application number 201680044930.4, entitled "Method and system for generating and using positioning reference data".
Disclosure of Invention
According to a first aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, and
associating the generated positioning reference data with the digital map data.
It should be appreciated that the digital map (in this and any other aspects or embodiments of the invention) includes data representing navigable elements of a navigable network, such as roads of a road network.
According to a first aspect of the invention, positioning reference data associated with one or more navigable elements of a navigable network represented by a digital map is generated. This data may be generated for at least part and preferably all of the navigable elements represented by the map. The generated data provides a compressed representation of the environment surrounding the navigable element. This is achieved using at least one depth map indicating an environment surrounding the element projected onto a reference plane defined by a reference line, which in turn is defined relative to the navigable element. Each pixel of the depth map is associated with a location in the reference plane and includes a depth channel representing a distance along a predetermined direction from the location of the pixel in the reference plane to a surface of an object in the environment.
Various features of the at least one depth map of the positioning reference data will now be described. It should be appreciated that such features may alternatively or additionally be applied to at least one depth map of real-time scan data used in certain further aspects or embodiments of the invention, provided that they are not mutually exclusive.
The reference line associated with the navigable element and used to define the reference plane may be defined in any manner with respect to the navigable element. The reference line is defined by a point or points associated with the navigable element. The reference line may have a predetermined orientation relative to the navigable element. In a preferred embodiment, the reference line is parallel to the navigable element. This may be suitable for providing positioning reference data (and/or real-time scan data) related to the lateral environment on one or more sides of the navigable element. The reference line may be linear or non-linear, e.g., depending on whether the navigable element is straight. The reference line may include straight and non-linear, e.g., curved, portions, for example while remaining parallel to the navigable element. It should be appreciated that in some further embodiments, the reference line may not be parallel to the navigable element. For example, as described below, the reference line may be defined by a radius centered on a point associated with the navigable element (e.g., a point on the navigable element). The reference line may be circular. This may then provide a 360 degree representation of the environment around, for example, a junction.
The reference line is preferably a longitudinal reference line and may be, for example, an edge or boundary of a navigable element or its lane, or a centerline of a navigable element. The positioning reference data (and/or real-time scan data) will then provide a representation of the environment on one or more sides of the element. The reference line may be located on the element.
In an embodiment, the reference line may be linear even when the navigable element is curved, since the reference line of the navigable element (e.g., the edge or centerline of the navigable element) and the associated depth information may undergo a mapping or transformation to a linear reference line. This mapping or transformation is described in more detail in WO 2009/045096 A1, which is incorporated herein by reference in its entirety.
The reference plane defined by the reference line is preferably oriented perpendicular to the surface of the navigable element. As used herein, a reference plane refers to a 2-dimensional surface, which may be curved or non-curved.
Where the reference line is a longitudinal reference line parallel to the navigable element, the depth channel of each pixel preferably represents a lateral distance to the surface of the object in the environment.
Each depth map may be in the form of a raster image. It should be appreciated that each depth map represents, for a plurality of longitudinal positions and elevations (i.e., the locations corresponding to the pixels of the reference plane), the distance along a predetermined direction from the reference plane to the surface of an object in the environment. The depth map includes a plurality of pixels. Each pixel of the depth map is associated with a particular longitudinal position and elevation in the depth map (e.g., raster image).
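Purely by way of illustration, the following sketch (in Python with NumPy; the function name, coordinate conventions, and resolutions are assumptions made here, not taken from this disclosure) shows one way such a raster depth map might be built from sensed 3-D points by orthogonal projection onto a side reference plane, keeping the closest sensed distance per pixel (the preference for the closest value over an average is discussed further below):

```python
import numpy as np

def build_depth_raster(points, length_m=10.0, height_m=3.0,
                       lon_res=0.5, vert_res=0.5, max_depth=30.0):
    """Orthogonally project sensed 3-D points (s: longitudinal, d: lateral
    depth, h: elevation) onto a side reference plane of length_m x height_m,
    keeping for each pixel the closest lateral distance seen so far."""
    n_cols = int(length_m / lon_res)    # fixed longitudinal resolution
    n_rows = int(height_m / vert_res)   # fixed vertical resolution, for simplicity
    raster = np.full((n_rows, n_cols), max_depth, dtype=np.float32)
    for s, d, h in points:
        col, row = int(s / lon_res), int(h / vert_res)
        if 0 <= row < n_rows and 0 <= col < n_cols and d >= 0.0:
            raster[row, col] = min(raster[row, col], d)   # depth channel value
    return raster

# Three sensed points alongside a 10 m stretch of a navigable element:
pts = [(1.2, 4.0, 0.5), (1.3, 7.5, 0.6), (8.0, 2.2, 1.9)]
print(build_depth_raster(pts))
```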
In some preferred embodiments, the reference plane is defined by a longitudinal reference line parallel to the navigable element, and the reference plane is oriented perpendicular to the surface of the navigable element. Each pixel then includes a depth channel representing a lateral distance to a surface of an object in the environment.
In a preferred embodiment, at least one depth map may have a fixed longitudinal resolution and a variable vertical and/or depth resolution.
According to a second aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane, the reference plane being defined by a longitudinal reference line oriented parallel to the navigable element and perpendicular to a surface of the navigable element, each pixel of the at least one depth map being associated with a position in the reference plane associated with the navigable element and the pixel including a depth channel representing a lateral distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment, preferably wherein the at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution, and
associating the generated positioning reference data with the digital map data.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
Regardless of the orientation of the reference line, the reference plane, and the direction along which the environment is projected onto the reference plane, it is advantageous in accordance with the present invention in its various aspects and embodiments that at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution. The at least one depth map of the positioning reference data (and/or the real-time scan data) preferably has a fixed longitudinal resolution and a variable vertical and/or depth resolution. The variable vertical and/or depth resolution is preferably non-linear. A higher resolution may be used for those portions of the depth map (e.g., raster image) that are closer to the ground and closer to the navigable element (and thus to the vehicle) than for portions that are higher above the ground and further from the navigable element (and thus further from the vehicle). This maximizes the information density at the heights and depths that are most important to detection by the vehicle sensors.
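As a minimal sketch of what a variable, non-linear vertical or depth resolution could look like (the square-root warp and the parameter values are assumptions chosen here for illustration), the following shows a quantization that devotes more of an 8-bit channel to values near the ground or near the navigable element:

```python
import numpy as np

def quantize_nonlinear(value, max_value=30.0, n_bins=256):
    """Map a height or depth value to a bin index whose resolution is finest
    near zero (close to the ground / the navigable element) and progressively
    coarser further away, via a square-root warp."""
    v = np.clip(value / max_value, 0.0, 1.0)
    return int(np.sqrt(v) * (n_bins - 1))   # sqrt spends more bins on small values

def dequantize_nonlinear(bin_idx, max_value=30.0, n_bins=256):
    """Approximate value represented by a bin index (inverse of the warp)."""
    return ((bin_idx / (n_bins - 1)) ** 2) * max_value

# A nearby depth (1 m) keeps ~2 cm precision; a far one (25 m) ~20 cm:
for d in (1.0, 25.0):
    b = quantize_nonlinear(d)
    print(d, "->", b, "-> ~", round(dequantize_nonlinear(b), 2), "m")
```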
Regardless of the orientation of the reference lines and planes and the resolution of the depth map along the various directions, the projection of the environment onto the reference plane is along a predetermined direction, which may be selected as desired. In some embodiments, the projection is an orthogonal projection. In these embodiments, the depth channel of each pixel represents a distance from the associated location of the pixel in the reference plane to the surface of the object in the environment along a direction perpendicular to the reference plane. Thus, in some embodiments in which the distance represented by the depth channel is a lateral distance, the lateral distance is along a direction perpendicular to the reference plane (although orthogonal projection is not limited to the case in which the depth channel relates to a lateral distance). The use of orthogonal projection may be advantageous in some contexts, as it results in height information that is independent of distance from the reference line (and thus of distance from the reference plane).
In other embodiments, it has been found to be potentially advantageous to use non-orthogonal projections. Thus, in some embodiments of the invention in any of its aspects, unless mutually exclusive, the depth channel of each pixel (whether or not the predetermined distance is a lateral distance) represents the distance from the associated location of the pixel in the reference plane to the surface of an object in the environment along a direction that is not perpendicular to the reference plane. The use of non-orthogonal projections has the advantage that information about surfaces oriented perpendicular to the navigable element (i.e., where the reference line is parallel to the element) can be preserved. This may be accomplished without providing additional data channels associated with the pixels. Thus, information about objects in the vicinity of the navigable element may be captured more efficiently and in more detail without increasing storage capacity. The predetermined direction may be along any desired direction relative to the reference plane, for example at 45 degrees.
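By way of a toy example only (the coordinate convention, and the assumption that the 45 degree direction lies in the horizontal plane with equal lateral and longitudinal components, are choices made here for illustration), a point can be mapped onto the reference plane along such an inclined ray; surfaces facing along the road then still contribute to the depth map:

```python
import numpy as np

def project_45deg(points):
    """Project points (s: longitudinal, y: lateral, h: elevation) onto the
    reference plane along a ray at 45 degrees to the plane: travelling one
    metre towards the plane also moves one metre longitudinally, so surfaces
    oriented perpendicular to the navigable element remain represented."""
    out = []
    for s, y, h in points:
        pixel_s = s - y              # longitudinal coordinate hit on the plane
        depth = y * np.sqrt(2.0)     # distance travelled along the inclined ray
        out.append((pixel_s, h, depth))
    return out

print(project_45deg([(10.0, 4.0, 1.5)]))   # -> [(6.0, 1.5, 5.656...)]
```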
The use of non-orthogonal projections has also been found to be particularly useful in preserving a greater amount of information about the surface of an object detectable by a camera or cameras of a vehicle in dark conditions, and thus in connection with some aspects and embodiments of the invention in which a reference image or point cloud is compared to an image or point cloud obtained based on real-time data sensed by a camera of a vehicle.
According to another aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane, the reference plane being defined by a reference line parallel to the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, wherein the predetermined direction is non-perpendicular to the reference plane, and
associating the generated positioning reference data with digital map data indicative of the navigable element.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to any aspect or embodiment of the invention, the positioning reference data (and/or the real-time scan data) is based on scan data obtained by scanning the environment surrounding the navigable element using one or more sensors. The one or more sensors may include one or more of a laser scanner, a radar scanner, and a camera, such as a single camera or a pair of stereo cameras.
Preferably, the distance to the surface of the object represented by the depth channel of each pixel of the positioning reference data (and/or the real-time scan data) is determined based on a set of multiple sensed data points, each indicative of a distance from the location of the pixel to the surface of the object along a predetermined direction. Data points may be obtained when a scan of the environment surrounding the navigable element is performed. The set of sensed data points may be obtained from one or more types of sensors. However, in some preferred embodiments, the sensed data points comprise or include a set of data points sensed by a laser scanner. In other words, the sensed data points comprise or include laser measurements.
It has been found that using an average of multiple sensed data points in determining the distance value for the depth channel of a given pixel may lead to erroneous results. This is because at least some of the sensed data points that indicate the position of an object surface relative to the reference plane along the applicable predetermined direction, and that are mapped to a particular pixel, may relate to the surfaces of different objects. It should be appreciated that, due to the compressed data format, an extended region of the environment may map to the area of a single pixel in the reference plane. A considerable amount of sensed data, i.e., a number of sensed data points, may thus be applicable to that pixel. Within that region, there may be objects positioned at different depths relative to the reference plane, including objects that overlap one another only over a short distance in any dimension, such as trees, lampposts, walls, and moving objects. The depths to the object surfaces represented by the sensed data points for a particular pixel may thus exhibit considerable variation.
According to any aspect or embodiment of the present invention in which the distance to the surface of the object represented by the depth channel of each pixel of the positioning reference data (and/or the real-time scan data) is determined based on a set of multiple sensed data points, each sensed data point indicating a sensed distance from the position of the pixel to the surface of the object along a predetermined direction, preferably the distance represented by the depth channel of the pixel is not based on an average of the set of multiple sensed data points. In a preferred embodiment, the distance represented by the depth channel of the pixel is the closest sensed distance to an object surface from among the set of sensed data points, or a closest mode value obtained using a distribution of the sensed depth values. It will be appreciated that the nearest sensed value or values are most likely to reflect the depth from the pixel to the object surface accurately. For example, consider the case where a tree is positioned between a building and a road. Different sensed depth values for a particular pixel may result from detection of the building or of the tree. If all of these sensed values are taken into account to provide an average depth value, the average will indicate that the depth measured from the pixel to the object surface is somewhere between the depth to the tree and the depth to the building. This will lead to misleading depth values for the pixels, which can lead to problems in correlating real-time vehicle sensor data with reference data, and can potentially be dangerous, as it is very important to know with certainty how close an object is to the road. In contrast, the closest depth value, or closest mode value, is likely to relate to the tree rather than the building, reflecting the true position of the nearest object.
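A toy sketch of this selection (assuming, purely for illustration, that the "closest mode" is the centre of the nearest well-populated bin of a coarse histogram; the bin size and the count threshold are invented parameters):

```python
import numpy as np

def pixel_depth(sensed_distances, bin_size=0.5):
    """Depth channel value for one pixel from many sensed distances: the
    centre of the nearest local peak of a coarse histogram (a 'closest
    mode'), never the mean, which could fall between two distinct surfaces
    such as a tree and a building behind it."""
    d = np.asarray(sensed_distances, dtype=float)
    counts = np.bincount(np.floor(d / bin_size).astype(int))
    for i in range(len(counts)):                      # nearest bins first
        left = counts[i - 1] if i > 0 else 0
        right = counts[i + 1] if i + 1 < len(counts) else 0
        if counts[i] >= 2 and counts[i] >= left and counts[i] >= right:
            return (i + 0.5) * bin_size               # bin centre
    return d.min()        # fall back to the single closest measurement

# Tree at ~3 m in front of a building at ~10 m: the mean (~7.1 m) would
# suggest an object where there is none; the closest mode gives the tree.
print(pixel_depth([3.1, 2.9, 3.0, 10.2, 10.1, 10.3, 10.0]))   # -> 3.25
```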
According to another aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane, the reference plane being defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment along a predetermined direction, wherein the distance to the surface of the object represented by the depth channel of each pixel is determined based on a set of a plurality of sensed data points, each sensed data point indicating a sensed distance from the location of the pixel to the surface of the object along the predetermined direction, and wherein the distance represented by the depth channel of the pixel is the closest sensed distance from among the set of sensed data points, or a closest mode of the distances to the object surface, and
associating the generated positioning reference data with digital map data.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to any aspect or embodiment of the invention, each pixel (in the positioning reference data and/or the real-time scan data) includes a depth channel representing a distance to a surface of an object in the environment. In a preferred embodiment, each pixel includes one or more additional channels. This may provide a depth map with one or more additional information layers. Each channel preferably indicates a value of a property obtained based on one or more sensed data points, and preferably based on a set of multiple sensed data points. The sensed data may be obtained from one or more of the sensors described earlier. In a preferred embodiment, the or each pixel includes at least one channel indicative of a value of a given type of sensed reflectivity. Each pixel may include one or more of a channel indicating a value of sensed laser reflectivity and a channel indicating a value of sensed radar reflectivity. The sensed reflectivity value of the pixel indicated by the channel relates to the sensed reflectivity in the applicable portion of the environment represented by the pixel. The sensed reflectivity value of the pixel preferably indicates the sensed reflectivity at a distance from the reference plane that corresponds to the depth of the pixel indicated by the depth channel of the pixel, i.e., the sensed reflectivity around the depth value of the pixel. This may then be considered to indicate the relevant reflectivity properties of the object present at that depth. Preferably, the sensed reflectivity is an average reflectivity. The sensed reflectivity data may be based on the reflectivity associated with the same data points used in determining the depth value, or on a larger set of data points. For example, the reflectivity associated with the sensed depth values applicable to the pixel, in addition to those closest depth values preferably used to determine the depth channel, may additionally be considered.
In this way, a multi-channel depth map, such as a raster image, is provided. This format may enable more efficient compression of larger amounts of data related to the environment surrounding the navigable element, facilitating storage and processing, and providing the ability to achieve improved correlation with real-time data sensed by the vehicle under different conditions; the vehicle need not necessarily have the same type of sensor as was used in generating the reference positioning data. As will be described in more detail below, this data may also help reconstruct data sensed by the vehicle, or images of the surroundings of the navigable element that would be obtained using the camera of the vehicle under certain conditions (e.g., at night). For example, radar or laser reflectivity may enable identification of those objects that will be visible under certain conditions (e.g., at night).
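One possible encoding of such a multi-channel depth map (a sketch only; the disclosure does not prescribe a storage format, and the channel names, bit depths, array shape, and threshold below are assumptions) is a structured array with one record per pixel:

```python
import numpy as np

# A hypothetical three-channel depth map: depth plus laser and radar
# reflectivity per pixel (elevation rows x longitudinal columns).
pixel_dtype = np.dtype([
    ('depth',      np.uint8),   # quantized distance to the nearest surface
    ('laser_refl', np.uint8),   # average sensed laser reflectivity at that depth
    ('radar_refl', np.uint8),   # average sensed radar reflectivity at that depth
])

depth_map = np.zeros((64, 512), dtype=pixel_dtype)
depth_map[10, 100] = (47, 200, 15)    # e.g., a highly laser-reflective road sign

# Radar-reflective pixels might approximate what remains detectable at night:
night_visible = depth_map['radar_refl'] > 10
print(night_visible.sum(), "pixel(s) likely visible to radar")
```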
According to another aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, wherein each pixel further includes one or more of a channel indicative of a value of sensed laser reflectivity and a channel indicative of a value of sensed radar reflectivity, and
associating the generated positioning reference data with digital map data.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
Other channels associated with the pixels may alternatively or additionally be used in accordance with any aspect or embodiment of the present invention. For example, an additional channel may indicate one or more of: a thickness of the object at around the distance from the position of the pixel in the reference plane, along the predetermined direction, indicated by the depth channel of the pixel; a density of reflected data points at around that distance; a color at around that distance; and a texture at around that distance. Each channel may include a value indicative of the relevant property. The values are based on available sensor data, which may optionally be obtained from one or more different types of sensors, e.g., cameras for color or texture data. Each value may be based on a plurality of sensed data points and may be an average of the plurality of sensed data points.
It should be appreciated that while the depth channel indicates the distance of the object from the reference plane at the location of the pixel along a predetermined direction, other channels may indicate other properties of the object, such as its reflectivity, color, texture, etc. This may be useful in reconstructing scan data that might be expected to have been sensed by the vehicle and/or camera images taken by the vehicle. Data indicative of the thickness of the object may be used to recover information relating to surfaces of the object perpendicular to the navigable element when an orthogonal projection of the environment onto the reference plane is used. This may provide an alternative to the embodiments described above that use non-orthogonal projections for determining information related to such surfaces of objects.
In many embodiments, the positioning reference data is used to provide a compressed representation of the environment on one or more sides of the navigable element, i.e., to provide a side depth map. The reference line may then be parallel to the navigable element, wherein the depth channel of the pixel indicates the lateral distance of the object surface from the reference plane. However, the use of depth maps may also be helpful in other contexts. The Applicant has appreciated that it would be useful to provide a circular depth map in the area of a junction (e.g., intersection). This may provide an improved ability to position the vehicle relative to the junction (e.g., intersection), or, if desired, to reconstruct data indicative of the environment surrounding the junction (e.g., intersection). Preferably a 360 degree representation of the environment around the junction is provided, although it will be appreciated that the depth map need not extend around a complete circle and may therefore extend around less than 360 degrees. In some embodiments, the reference plane is defined by a reference line defined by a radius centered on a reference point associated with the navigable element. In these embodiments, the reference line is curved, and preferably circular. The reference point is preferably located on a navigable element at the junction. For example, the reference point may be located at the center of a junction (e.g., intersection). The radius defining the reference line may be selected as desired, e.g., depending on the size of the junction.
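A minimal sketch of such a circular depth map (assuming, for illustration only, a circular reference line of fixed radius about the junction centre, columns indexed by bearing, and a fixed vertical resolution; none of these parameters comes from this disclosure):

```python
import numpy as np

def build_circular_depth_map(points, center, radius, n_bearings=360,
                             n_rows=32, height_m=8.0, max_depth=50.0):
    """Polar variant of the depth map: each column is a bearing around the
    junction centre (0-360 degrees), and the depth channel holds the radial
    distance from the circular reference line outward to the nearest surface."""
    cx, cy = center
    raster = np.full((n_rows, n_bearings), max_depth, dtype=np.float32)
    for x, y, h in points:
        bearing = np.degrees(np.arctan2(y - cy, x - cx)) % 360.0
        radial = np.hypot(x - cx, y - cy) - radius   # depth beyond the circle
        col = int(bearing * n_bearings / 360.0) % n_bearings
        row = int(h * n_rows / height_m)
        if radial >= 0.0 and 0 <= row < n_rows:
            raster[row, col] = min(raster[row, col], radial)
    return raster

pts = [(30.0, 5.0, 2.0), (-12.0, -9.0, 1.0)]   # two surfaces near a junction
print(build_circular_depth_map(pts, center=(0.0, 0.0), radius=10.0).min())
```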
According to another aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map representing elements of a navigable network, the positioning reference data providing a compressed representation of the environment surrounding at least one junction of the navigable network represented by the digital map, the method comprising, for at least one junction represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the junction projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, and
associating the generated positioning reference data with digital map data indicative of the junction.
As described with respect to the earlier embodiments, the junction may be an intersection. The reference point may be located at the center of the junction. The reference point may be associated with a node of the digital map representing the junction, or with a navigable element at that node. These additional aspects or embodiments of the invention may be used in conjunction with a side depth map representing the environment to the sides of navigable elements away from the junction.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to any aspect or embodiment of the invention related to the generation of positioning reference data, the method may comprise associating the generated positioning reference data for the navigable element or junction with digital map data indicative of the element or junction. The method may include storing the generated positioning reference data in association with the digital map data, for example with the navigable element or junction to which it relates.
In some embodiments, the positioning reference data may include a reference scan representing, for example, a lateral environment to the left of the navigable element and to the right of the navigable element. The positioning reference data for each side of the navigable element may be stored in the combined dataset. Thus, data from multiple portions of the navigable network may be stored together in an efficient data format. The data stored in the combined dataset may be compressed, allowing more portions of the data of the navigable network to be stored within the same storage capacity. Data compression will also allow for the use of reduced network bandwidth if the reference scan data is transmitted to the vehicle over a wireless network connection. However, it should be appreciated that the positioning reference data need not necessarily relate to the lateral environment on either side of the navigable element. For example, as discussed in certain embodiments above, the reference data may relate to the environment surrounding the junction.
The invention also extends to a data product storing positioning reference data generated in accordance with any aspect or embodiment of the invention.
The data products in any of these further aspects or embodiments of the invention may be in any suitable form. In some embodiments, the data product may be stored on a computer readable medium. The computer readable medium may be, for example, a floppy disk, CD ROM, RAM, flash memory, or a hard disk. The invention extends to a computer readable medium comprising a data product according to any aspect or embodiment of the invention.
Positioning reference data generated in accordance with any aspect or embodiment of the invention related to the generation of this data may be used in a variety of ways. In further aspects related to using the data, the step of obtaining the reference data may extend to generating the data, or may more generally include retrieving it. The reference data is preferably generated by a server. The step of using the data is preferably performed by a device that may be associated with the vehicle, such as a navigation device or similar device.
In some preferred embodiments, the data is used to determine the position of the vehicle relative to the digital map. The digital map thus includes data representing the navigable elements along which the vehicle travels. The method may include obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network; determining real-time scan data by scanning the environment surrounding the vehicle using at least one sensor, wherein the real-time scan data includes at least one depth map indicative of the environment surrounding the vehicle, each pixel in the at least one depth map being associated with a position in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance, determined using the at least one sensor, from the associated position of the pixel in the reference plane to a surface of an object in the environment; calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps; and adjusting the considered current position using the determined alignment offset to determine the position of the vehicle relative to the digital map. It should be appreciated that the obtained positioning reference data relates to the navigable element along which the vehicle travels. The depth map of the positioning reference data indicative of the environment around the navigable element is thus indicative of the environment around the vehicle.
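As a sketch of the correlate-and-adjust step (a simple exhaustive search over column shifts with normalized cross-correlation; the function names, the search range, and the longitudinal resolution are assumptions made here for illustration, not the prescribed implementation):

```python
import numpy as np

def longitudinal_offset(reference, live, max_shift=40):
    """Slide the live depth map along the reference depth map (both 2-D
    arrays, rows = elevation, columns = longitudinal position) and return
    the column shift giving the highest normalized cross-correlation."""
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        if shift >= 0:
            a, b = reference[:, shift:], live[:, :live.shape[1] - shift]
        else:
            a, b = reference[:, :shift], live[:, -shift:]
        n = min(a.shape[1], b.shape[1])
        if n == 0:
            continue
        a, b = a[:, :n].astype(float), b[:, :n].astype(float)
        a0, b0 = a - a.mean(), b - b.mean()
        denom = np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())
        score = (a0 * b0).sum() / denom if denom > 0 else 0.0
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

def corrected_position(believed_position_m, shift_columns, lon_res_m=0.5):
    """Adjust the considered current position by the determined offset."""
    return believed_position_m + shift_columns * lon_res_m
```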
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representing navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a position in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of an environment around the vehicle, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance, determined using the at least one sensor, from the associated location of the pixel in the reference plane to a surface of an object in the environment along the predetermined direction;
Calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps, and
using the determined alignment offset to adjust the considered current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
In further aspects and embodiments of the invention related to using positioning reference data and real-time scan data in determining the position of a vehicle, the current position of the vehicle may be a longitudinal position. The real-time scan data may be related to the lateral environment surrounding the vehicle. The depth map of the positioning reference data and/or real-time sensor data will then be defined by a reference line parallel to the navigable elements and include depth channels representing lateral distances to the surface of objects in the environment. The determined offset may then be a longitudinal offset.
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representative of a junction through which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle in the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a position in the reference plane associated with the junction through which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of the environment around the vehicle, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment, determined using the at least one sensor, along the predetermined direction;
Calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps, and
using the determined alignment offset to adjust the considered current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representing navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle, each pixel of the at least one depth map being associated with a position in the reference plane associated with the navigable element, the reference plane being defined by a longitudinal reference line oriented parallel to the navigable element and perpendicular to a surface of the navigable element, and each pixel including a depth channel representing a lateral distance to a surface of an object in the environment, optionally wherein the at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor;
Determining real-time scan data using the sensor data, wherein the real-time scan data comprises at least one depth map indicative of an environment surrounding the vehicle, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and each pixel including a depth channel representing a lateral distance to a surface of an object in the environment determined from the sensor data, optionally wherein the at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution;
Calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps, and
using the determined alignment offset to adjust the considered current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
In the further aspects of the invention relating to the use of positioning reference data, the data may be generated in accordance with any of the earlier aspects of the invention. The real-time scan data used in determining the position of the vehicle or otherwise should have a form corresponding to that of the positioning reference data. Thus, the determined depth map will comprise pixels having positions in a reference plane defined relative to a reference line associated with the navigable element in the same manner as the positioning reference data, so that the real-time scan data and the positioning reference data can be correlated with each other. The depth channel data of the depth map may be determined in a manner corresponding to that of the reference data, e.g., without using an average of the sensed data, and may thus reflect a closest distance to the surface from among the plurality of sensed data points. The real-time scan data may include any of the additional channels described above. In case the depth map of the positioning reference data has a fixed longitudinal resolution and a variable vertical and/or depth resolution, the depth map of the real-time scan data may also have this resolution.
Thus, in accordance with these aspects or embodiments of the present invention, a method is provided for continuously determining the position of a vehicle relative to a digital map comprising data representing navigable elements (e.g., roads) of a navigable network (e.g., road network) along which the vehicle is traveling. The method includes receiving real-time scan data obtained by scanning an environment surrounding the vehicle; retrieving positioning reference data associated with the digital map for a considered current position of the vehicle relative to the digital map (e.g., wherein the positioning reference data includes a reference scan of the environment surrounding the considered current position), optionally wherein the reference scan has been obtained throughout the digital map from at least one device that has previously traveled along the route; comparing the real-time scan data to the positioning reference data to determine an offset between the real-time scan data and the positioning reference data; and adjusting the considered current position based on the offset. The position of the vehicle relative to the digital map is thus always known with high accuracy. Examples in the prior art have attempted to determine the position of a vehicle by comparing collected data with known reference data for predetermined landmarks along a route. However, landmarks may be sparsely distributed along many routes, resulting in significant errors in the estimated vehicle position as the vehicle travels between landmarks. This is a problem in the case of, for example, highly automated driving systems, where such errors can lead to catastrophic consequences, such as vehicle collisions leading to serious injury or loss of life. The present invention solves this problem in at least some aspects by having reference scan data throughout the digital map and by scanning the environment surrounding the vehicle in real time. In this way, the present invention may allow the real-time scan data to be compared with the reference data so that the position of the vehicle relative to the digital map is always known with high accuracy.
According to another aspect of the present invention there is provided a method of determining a longitudinal position of a vehicle relative to a digital map comprising data representative of navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network, wherein the positioning reference data comprises contours of objects in the environment surrounding the vehicle projected onto a reference plane defined by a longitudinal reference line oriented parallel to the navigable element and perpendicular to a surface of the navigable element;
Obtaining sensor data by scanning the environment around the vehicle using at least one sensor;
determining real-time scan data using the sensor data, wherein the real-time scan data includes contours of objects in an environment surrounding the vehicle projected onto a reference plane as determined from the sensor data;
calculating a correlation between the positioning reference data and the real-time scan data to determine a longitudinal alignment offset, and
using the determined alignment offset to adjust the considered current position to determine the longitudinal position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
The positioning reference data may be stored in association with the digital map, for example in association with the relevant navigable elements, in a form in which the contours of objects in the environment surrounding the vehicle projected onto the reference plane have already been determined. However, in other embodiments, the positioning reference data may be stored in a different format, and the stored data processed in order to determine the contours. For example, in an embodiment, as in the earlier described aspects of the disclosure, the positioning reference data includes one or more depth maps, such as raster images, each depth map representing lateral distances to surfaces in the environment at multiple longitudinal locations and elevations. The depth map may be according to any of the previous aspects and embodiments. In other words, the positioning reference data comprises at least one depth map, such as a raster image, indicative of the environment surrounding the vehicle, wherein each pixel of the at least one depth map is associated with a position in the reference plane, and each pixel includes a channel representing a lateral distance (e.g., perpendicular to the reference plane) to a surface of an object in the environment. In such embodiments, the relevant depth map, e.g., a raster image, is processed using an edge detection algorithm to generate the contours of the objects in the environment. The edge detection algorithm may use a Canny operator, a Prewitt operator, or the like. However, in a preferred embodiment, edge detection is performed using the Sobel operator. The edge detection operator may be applied in both the elevation (or height) and longitudinal domains, or in only one of the domains. For example, in a preferred embodiment, the edge detection operator is applied only in the longitudinal domain.
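A minimal sketch of applying the Sobel operator in the longitudinal domain only (using SciPy; the binarization by the mean gradient magnitude is an assumption made here, not something this disclosure specifies):

```python
import numpy as np
from scipy import ndimage

def longitudinal_edges(depth_map):
    """Sobel operator along the longitudinal (column) axis only, turning the
    depth raster into contours of objects; the elevation axis is untouched."""
    grad = ndimage.sobel(depth_map.astype(float), axis=1)  # axis 1 = longitudinal
    return np.abs(grad) > np.abs(grad).mean()              # crude binarization

raster = np.zeros((8, 16))
raster[:, 5:9] = 10.0                          # a wall-like surface, four columns wide
print(longitudinal_edges(raster).any(axis=0))  # edges near columns 4-5 and 8-9
```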
Similarly, the contours of objects in the environment surrounding the vehicle projected onto the reference plane can be determined directly from the sensor data obtained by the at least one sensor. Alternatively, in other embodiments, the sensor data may be used to determine one or more depth maps, such as raster images, each depth map representing lateral distances to surfaces in the environment at multiple longitudinal locations and elevations. In other words, the real-time scan data comprises at least one depth map, such as a raster image, indicative of the environment surrounding the vehicle, wherein each pixel of the at least one depth map is associated with a location in a reference plane, and each pixel includes a channel representing a lateral distance (e.g., perpendicular to the reference plane) to a surface of an object in the environment determined using at least one sensor. The relevant depth map, e.g., a raster image, may then be processed using an edge detection algorithm, preferably the same edge detection algorithm applied to the positioning reference data, to determine the contours of the real-time scan data. The edge detection operator may be applied in both the elevation (or height) and longitudinal domains, or in only one of the domains. For example, in a preferred embodiment, the edge detection operator is applied only in the longitudinal domain.
In an embodiment, a blurring operator is applied to the contours of at least one of the positioning reference data and the real-time scan data before the two sets of data are correlated. The blurring operator may be applied in both the elevation (or height) and longitudinal domains, or in only one of the domains. For example, in a preferred embodiment, the blurring operator is applied only in the height domain. The blurring operator may account for any tilt of the vehicle when the real-time scan data and/or positioning reference data were obtained, which may shift the contours slightly up or down in the elevation domain.
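For example, a blur restricted to the height domain might look as follows (a sketch using SciPy; the choice of a Gaussian kernel and its width are assumptions, as the disclosure does not name a particular blurring operator):

```python
from scipy import ndimage

def blur_height_domain(contours, sigma_rows=1.5):
    """Gaussian blur along the elevation (row) axis only, so the subsequent
    correlation tolerates small vertical shifts of the contours, e.g. those
    caused by vehicle tilt; the longitudinal axis is left sharp."""
    return ndimage.gaussian_filter1d(contours.astype(float),
                                     sigma=sigma_rows, axis=0)

# e.g., reusing the hypothetical longitudinal_edges() from the sketch above:
# blurred = blur_height_domain(longitudinal_edges(raster))
```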
According to any aspect or embodiment of the invention, the considered current (e.g., longitudinal) position of the vehicle may be obtained, at least initially, from an absolute positioning system, such as a satellite navigation device using GPS or GLONASS, the European Galileo positioning system, the COMPASS positioning system, or IRNSS (Indian Regional Navigation Satellite System). However, it should be appreciated that other location determination means may be used, such as mobile telecommunications, surface beacons, or the like.
The digital map may include a three-dimensional vector model representing navigable elements of a navigable network (e.g., roads of a road network), with each lane of the navigable elements (e.g., roads) being represented separately. Thus, the lateral position of the vehicle on the road may be known by determining the lane in which the vehicle is traveling, for example, by image processing of a camera mounted to the vehicle. In such embodiments, the longitudinal reference line may be, for example, an edge or boundary of a lane of a navigable element or a centerline of a lane of a navigable element.
The real-time scan data may be obtained on the left side of the vehicle and on the right side of the vehicle. This helps to reduce the impact of transient features on position estimation. Such transient features may be, for example, parked vehicles, vehicles that overtake, or vehicles that travel in opposite directions on the same route. Thus, real-time scan data can record features present on both sides of the vehicle. In some embodiments, the real-time scan data may be obtained from the left side of the vehicle or the right side of the vehicle.
In embodiments in which the positioning reference data and the real-time scan data each relate to the left and right sides of the vehicle, the comparison of the real-time scan data from the left side of the vehicle to the positioning reference data for the left side of the navigable element, and the comparison of the real-time scan data from the right side of the vehicle to the positioning reference data for the right side of the navigable element, may be performed as a single comparison. Thus, when the scan data includes data from the left side of the navigable element and data from the right side of the navigable element, the scan data may be compared as a single data set, significantly reducing processing requirements compared to a case where the comparisons for the left side and for the right side of the navigable element are performed separately.
Comparing the real-time scan data with the positioning reference data may comprise calculating a cross-correlation, preferably a normalized cross-correlation, between the real-time scan data and the positioning reference data, whether or not they relate to the left and right sides of the vehicle. The method may include determining the position at which the data sets are most aligned. Preferably, the determined alignment offset between the depth maps is at least a longitudinal alignment offset, and the position at which the data sets are most aligned is a longitudinal position. The step of determining the longitudinal position at which the data sets are most aligned may comprise longitudinally shifting the depth maps (e.g., the raster images provided by the depth maps based on the real-time scan data and on the positioning reference data) relative to each other until the depth maps are aligned. This may be performed in the image domain.
The determined longitudinal alignment offset is used to adjust the considered current position so as to adjust the longitudinal position of the vehicle relative to the digital map.
Alternatively, or preferably in addition to determining the longitudinal alignment offset between the depth maps, it is desirable to determine a lateral alignment offset between the depth maps. The determined lateral alignment offset may then be used to adjust the considered current lateral position of the vehicle and thus determine the position of the vehicle relative to the digital map. Preferably, a longitudinal alignment offset is determined, which may be implemented in any of the ways described above, and a lateral alignment offset is additionally determined. The determined lateral and longitudinal alignment offsets are then used together to adjust both the longitudinal and lateral positions of the vehicle relative to the digital map.
The method may include determining a longitudinal alignment offset between the depth maps, such as by calculating a correlation between positioning reference data and real-time scan data, and may further include determining a lateral offset between the depth maps, and adjusting the considered current position using the determined lateral and longitudinal alignment offsets to determine the position of the vehicle relative to the digital map.
The longitudinal alignment offset is preferably determined before the lateral alignment offset. According to certain embodiments described below, the lateral alignment offset may be determined based on first determining a longitudinal offset between the depth maps and longitudinally aligning the depth maps relative to one another based on the offset.
The lateral offset is preferably determined based on the most common lateral offset, i.e. the mode lateral offset, between corresponding pixels of the depth map.
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representative of navigable elements of a navigable network along which the vehicle travels, the method comprising:
obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line associated with the navigable element, each pixel of the at least one depth map being associated with a position in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning an environment surrounding the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of the environment surrounding the vehicle, each pixel of the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment determined using the at least one sensor;
determining a longitudinal alignment offset between the positioning reference data and the depth map of the real-time scan data by calculating a correlation between the positioning reference data and the real-time scan data;
determining a lateral alignment offset between the depth maps, wherein the lateral offset is based on the most common lateral offset between corresponding pixels of the depth maps, and
using the determined longitudinal and lateral alignment offsets to adjust the considered current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to these aspects and embodiments of the present invention in which a lateral alignment offset is determined, the most common lateral alignment offset may be determined by considering depth channel data of corresponding pixels of the depth maps. The most common lateral alignment offset is determined based on the lateral alignment offsets between respective pairs of correspondingly positioned pixels of the depth maps, and preferably based on the lateral alignment offset of each pair of corresponding pixels. In order to determine the lateral alignment offset between corresponding pixels of the depth maps, corresponding pairs of pixels in the depth maps must be identified. The method may include identifying corresponding pairs of pixels in the depth maps. Preferably, the longitudinal alignment offset is determined before the lateral alignment offset. The depth maps are desirably shifted relative to each other until they are longitudinally aligned, to enable identification of corresponding pixels in each depth map.
Accordingly, the method may further include longitudinally aligning the depth maps relative to each other based on the determined longitudinal alignment offset. The step of longitudinally aligning the depth maps with each other may include longitudinally shifting one or both of the depth maps. Longitudinal shifting of depth maps relative to each other may be implemented in the image domain. The step of aligning the depth maps may thus comprise longitudinally shifting the raster images corresponding to each depth map relative to each other. The method may further include cropping a size of the image provided by the positioning reference data depth map to correspond to a size of the image provided by the real-time scan data depth map. This may facilitate a comparison between depth maps.
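By way of illustration only, the following sketch shows one way such an image-domain alignment and crop might be realized, assuming the depth maps are NumPy arrays with the longitudinal direction along the column axis and a non-negative longitudinal offset already expressed in pixels (all names and conventions here are illustrative assumptions, not features of the invention):

```python
import numpy as np

def align_and_crop(reference_img, realtime_img, offset_px):
    """Longitudinally align the reference depth-map image with the
    real-time depth-map image and crop it to the same extent.

    Assumes the longitudinal direction runs along axis 1 (columns) and
    that offset_px >= 0, i.e. the reference window starts offset_px
    pixels before the real-time window (an illustrative convention).
    """
    h, w = realtime_img.shape
    # Drop the leading columns of the reference so that column 0 of both
    # images corresponds to the same longitudinal position...
    shifted = reference_img[:, offset_px:]
    # ...then crop the (typically larger) reference image to the size of
    # the real-time image so that pixels correspond one-to-one.
    return shifted[:h, :w]
```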
Once the corresponding pixels in the two depth maps have been identified, a lateral offset between each pair of corresponding pixels may be determined. This may be accomplished by comparing the distances from the locations of the pixels in the reference plane to the surface of the object in the environment along a predetermined direction indicated by the depth channel data associated with each pixel. As described earlier, the depth map preferably has a variable depth resolution. The lateral alignment offset between each pair of corresponding pixels may be based on the difference in distance indicated by the depth channel data of the pixels. The method may include identifying a most common lateral alignment offset between corresponding pixels of a depth map using a histogram. The histogram may indicate the frequency of occurrence of different lateral alignment offsets between corresponding pixel pairs. The histogram may indicate a probability density function of lateral alignment offset, where the pattern reflects the most likely shift.
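A minimal sketch of such a histogram-based (mode) estimate, assuming two longitudinally aligned depth maps stored as equal-shaped NumPy arrays of depth-channel distances and an assumed depth resolution of 0.1 m, might be:

```python
import numpy as np

def mode_lateral_offset(ref_depths, live_depths, depth_res=0.1):
    """Estimate the most common (mode) lateral alignment offset between
    corresponding pixels of two longitudinally aligned depth maps."""
    diffs = (live_depths - ref_depths).ravel()
    diffs = diffs[np.isfinite(diffs)]   # ignore pixels with no valid depth
    # Quantize the per-pixel offsets to the depth resolution and count
    # occurrences; the counts approximate the probability density function
    # of the lateral offset, whose mode reflects the most likely shift.
    quantized = np.round(diffs / depth_res).astype(int)
    values, counts = np.unique(quantized, return_counts=True)
    return values[np.argmax(counts)] * depth_res
```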
In some embodiments, each pixel has a color that indicates a value of a depth channel of the pixel. Thus, the comparison of the depth values of the corresponding pixels may include comparing the colors of the corresponding pixels of the depth map. The difference in color between corresponding pixels may indicate a lateral alignment offset between pixels, such as when the depth map has a fixed depth resolution.
In these embodiments, where a lateral alignment offset is determined, the current longitudinal and lateral positions of the vehicle relative to the digital map may be adjusted.
According to any aspect or embodiment of the invention in which the current position of the vehicle (whether longitudinal and/or lateral) is adjusted, the adjusted current position may be an estimate of the current position obtained in any suitable manner, such as from an absolute position determination system or other position determination system, as described above. For example, GPS or dead reckoning may be used. As should be appreciated, the absolute position is preferably matched to the digital map to determine an initial position relative to the digital map, and then longitudinal and/or lateral corrections are applied to the initial position to improve position relative to the digital map.
The inventors have recognized that while the techniques described above may be useful in adjusting the position of a vehicle relative to a digital map, they will not correct the heading of the vehicle. In a preferred embodiment, the method further comprises adjusting the perceived heading of the vehicle using the positioning reference data and the real-time scan data depth map. This further step is preferably implemented in addition to determining the longitudinal and lateral alignment offsets of the depth map according to any of the above described embodiments. In these embodiments, the perceived heading of the vehicle may be determined in any suitable manner, for example using GPS data or the like, as described with respect to determining the perceived location of the vehicle.
It has been found that when the perceived heading of the vehicle is incorrect, the lateral alignment offset between corresponding pixels of the depth maps will vary along the depth map (i.e., along the depth map image) in the longitudinal direction. It has been found that a heading offset may be determined based on a function indicative of the change in lateral alignment offset between corresponding pixels of the depth maps with respect to longitudinal position along the depth map. The step of determining the heading offset may incorporate any of the features described earlier with respect to determining the lateral alignment offset of corresponding pixels. Thus, the method preferably first comprises shifting the depth maps relative to each other to longitudinally align them.
Accordingly, the method may further include determining a longitudinal alignment offset between the depth maps, determining a function indicative of a change in lateral alignment offset between corresponding pixels of the depth maps relative to a longitudinal position of the pixels along the depth maps, and adjusting a considered current heading of the vehicle using the determined function to determine a heading of the vehicle relative to the digital map.
The determined lateral alignment offset between corresponding pixels is, as described above, preferably based on a difference in values indicated by the depth channel data of the pixels, e.g. by referencing the color of the pixels.
In these aspects or embodiments, the determined function is indicative of a heading offset of the vehicle.
The step of determining a function indicative of a change in lateral alignment offset relative to longitudinal position may include determining an average (i.e., mean) lateral alignment offset across corresponding pixels of the depth map in each of a plurality of vertical sections of the depth map along a longitudinal direction of the depth map. The function may then be obtained based on the change in the average lateral alignment offset determined for each vertical section along the longitudinal direction of the depth map. It should be appreciated that at least some, and optionally each, of the corresponding pairs of pixels in the depth map are considered in determining the function.
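One hedged way to realize this, assuming the per-pixel lateral offsets are simply depth differences and that a straight line is an adequate model for their variation (both assumptions for illustration), is to average the offsets within vertical slices and fit a line; a non-zero slope then indicates a heading offset:

```python
import numpy as np

def heading_offset_function(ref_depths, live_depths, n_slices=20):
    """Fit a linear function to the variation of the mean lateral
    alignment offset along the longitudinal (column) direction."""
    diffs = live_depths - ref_depths
    h, w = diffs.shape
    xs, means = [], []
    for s in range(n_slices):
        sl = diffs[:, s * w // n_slices:(s + 1) * w // n_slices]
        vals = sl[np.isfinite(sl)]
        if vals.size:                        # skip slices with no valid pixels
            xs.append((s + 0.5) / n_slices)  # slice centre along the map
            means.append(vals.mean())
    # The slope of the fitted line reflects the heading (skew) offset; a
    # zero slope means the considered heading needs no correction.
    slope, intercept = np.polyfit(xs, means, 1)
    return slope, intercept
```

Filtering out noisy pixels or weighting each slice mean by its number of valid pixels, as discussed further below, would be natural refinements of this sketch.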
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representative of navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line associated with the navigable element, each pixel of the at least one depth map being associated with a position in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning an environment surrounding the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of the environment surrounding the vehicle, each pixel of the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment determined using the at least one sensor along a predetermined direction;
Determining a function indicative of a change in lateral alignment offset between corresponding pixels of the depth maps of the positioning reference data and the real-time scan data relative to the longitudinal position of the pixels along the depth maps, and
The determined function is used to adjust the considered current heading of the vehicle to determine the heading of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
In these aspects and embodiments of the present invention, additional steps may be taken to improve the determined heading offset, such as by filtering out noise pixels, or weighting the average pixel depth differences within a longitudinal section of the depth map or image by referring to the number of significant pixels considered in that section.
As mentioned above, the depth map of the positioning reference data, and thus the depth map of the real-time data, may be transformed so as to always be associated with a linear reference line. Due to this linearization of the depth map, it has been found that when the navigable element is curved, the determined longitudinal, lateral and/or heading corrections cannot be applied directly. The Applicant has identified that a computationally efficient method of adjusting or correcting the current position of a vehicle relative to the digital map involves applying each of the corrections in a series of incremental, independent linear update steps.
Thus, in a preferred embodiment, the determined longitudinal offset is applied to the current position of the vehicle relative to the digital map, and at least one depth map of the real-time scan data is recalculated based on the adjusted position. Next, the lateral offset determined using the recalculated real-time scan data is applied to the adjusted position of the vehicle relative to the digital map, and at least one depth map of the real-time scan data is recalculated based on the other adjusted position. The skew, i.e., heading offset, determined using the recalculated real-time scan data is then applied to another adjusted position of the vehicle relative to the digital map and at least one depth map of the real-time scan data is recalculated based on the again adjusted position. These steps are preferably repeated any number of times as desired until there is zero or substantially zero longitudinal offset, lateral offset, and skew.
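Schematically, and assuming caller-supplied callbacks for recomputing the real-time depth map and for estimating each offset (none of these names come from the description; this is a sketch of the loop structure only), the incremental update might look as follows:

```python
def iterative_position_update(position, heading, sensor_data, reference,
                              recompute, est_long, est_lat, est_skew,
                              tol=0.01, max_iters=10):
    """Apply the longitudinal, lateral and heading (skew) corrections as
    independent linear update steps, recomputing the real-time depth map
    from the adjusted position after each step, until all residual
    offsets are zero or substantially zero.

    The callbacks (recompute, est_long, est_lat, est_skew) are
    caller-supplied; their signatures here are illustrative assumptions.
    """
    x, y = position                     # longitudinal / lateral coordinates
    for _ in range(max_iters):
        live = recompute(sensor_data, (x, y), heading)
        d_long = est_long(reference, live)
        x += d_long                     # 1) apply longitudinal correction

        live = recompute(sensor_data, (x, y), heading)
        d_lat = est_lat(reference, live)
        y += d_lat                      # 2) apply lateral correction

        live = recompute(sensor_data, (x, y), heading)
        d_skew = est_skew(reference, live)
        heading += d_skew               # 3) apply heading (skew) correction

        if max(abs(d_long), abs(d_lat), abs(d_skew)) < tol:
            break                       # converged: offsets substantially zero
    return (x, y), heading
```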
It should be appreciated that the generated positioning reference data obtained in accordance with any aspect or embodiment of the present invention may be otherwise used with real-time scan data to determine a more accurate position of a vehicle, or indeed, for other purposes. In particular, the applicant has realized that it may not always be possible, or at least not always convenient, to use real-time scan data to determine a corresponding depth map for comparison with a depth map of positioning reference scan data. In other words, it may not be appropriate to perform a comparison of the data sets in the image domain. In particular, this may be the case where the type of sensor available on the vehicle is different from the type of sensor used to obtain the positioning reference data.
According to some further aspects and embodiments of the present invention, the method includes determining a reference point cloud indicative of an environment surrounding a navigable element using positioning reference data, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment.
In accordance with another aspect of the present invention, there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising, for at least one navigable element represented by the digital map:
generating positioning reference data comprising at least one depth map indicative of an environment surrounding the navigable element projected onto a reference plane, the reference plane being defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment;
associating the generated positioning reference data with the digital map data, and
A reference point cloud indicative of the environment surrounding the navigable element is determined using the positioning reference data, the reference point cloud comprising a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to another aspect of the present invention there is provided a method of generating positioning reference data associated with a digital map representing elements of a navigable network, the positioning reference data providing a compressed representation of the environment surrounding at least one junction of the navigable network represented by the digital map, the method comprising, for at least one junction represented by the digital map:
Generating positioning reference data comprising at least one depth map indicative of an environment surrounding the junction projected onto a reference plane, the reference plane being defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction, and the pixel including a depth channel representing a distance along a predetermined direction from an associated location of the pixel in the reference plane to a surface of an object in the environment;
associating the generated positioning reference data with digital map data indicative of the junction, and
A reference point cloud indicative of the environment surrounding the junction is determined using the positioning reference data, the reference point cloud comprising a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
A reference point cloud that includes a set of first data points in a three-dimensional coordinate system (where each first data point represents a surface of an object in the environment) may be referred to herein as a "3D point cloud". The 3D point cloud obtained according to these further aspects of the invention may be used in determining the position of a vehicle.
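For illustration, a depth map might be expanded into such a point cloud as in the following sketch, which assumes a side-facing depth map whose columns index longitudinal position along the reference line, whose rows index height, and whose pixel values hold the depth-channel distance; the resolutions are the illustrative values used later in the description (about 0.5 m longitudinally and 0.2 m in height):

```python
import numpy as np

def depth_map_to_point_cloud(depth_img, long_res=0.5, height_res=0.2):
    """Convert a depth-map image into a 3D point cloud, one point per
    pixel with a valid depth value.

    Axes (an assumed convention): x = longitudinal position along the
    reference line, y = lateral distance (the depth channel), and
    z = height above the road surface."""
    rows, cols = np.nonzero(np.isfinite(depth_img))
    return np.column_stack((
        cols * long_res,         # longitudinal position of the pixel
        depth_img[rows, cols],   # distance stored in the depth channel
        rows * height_res,       # height of the pixel in the reference plane
    ))                           # -> (N, 3) array: the "3D point cloud"
```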
In some embodiments, the method may include using the generated positioning reference data in any aspect or embodiment of the invention in determining the position of a vehicle relative to a digital map that includes data representing navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with a digital map for a considered current location of the vehicle along a navigable element or junction of a navigable network, determining a reference point cloud indicative of an environment surrounding the vehicle using the positioning reference data, the reference point cloud comprising a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;
Determining real-time scan data by scanning an environment surrounding the vehicle using at least one sensor, the real-time scan data comprising a point cloud indicative of the environment surrounding the vehicle, the point cloud comprising a set of second data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment determined using the at least one sensor;
Calculating a correlation between the point clouds of the real-time scan data and the point clouds of the obtained positioning reference data to determine an alignment offset between the point clouds, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representing navigable elements of a navigable network along which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a position in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining, using the positioning reference data, a reference point cloud indicative of the environment around the vehicle, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;
Determining real-time scan data by scanning an environment surrounding the vehicle using at least one sensor, the real-time scan data comprising a point cloud indicative of the environment surrounding the vehicle, the point cloud comprising a set of second data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment determined using the at least one sensor;
Calculating a correlation between the point clouds of the real-time scan data and the point clouds of the obtained positioning reference data to determine an alignment offset between the point clouds, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to yet another aspect of the present invention there is provided a method of determining the position of a vehicle relative to a digital map comprising data representative of the junctions of a navigable network through which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle at a junction of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction through which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment;
Determining, using the positioning reference data, a reference point cloud indicative of the environment around the vehicle, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor, the real-time scan data comprising a point cloud indicative of the environment around the vehicle, the point cloud comprising a set of second data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment determined using the at least one sensor;
Calculating a correlation between the point clouds of the real-time scan data and the point clouds of the obtained positioning reference data to determine an alignment offset between the point clouds, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.
A point cloud that includes a set of second data points in a three-dimensional coordinate system (where each second data point represents a surface of an object in the environment) may likewise be referred to herein, in these further aspects, as a "3D point cloud".
In these further aspects or embodiments of the invention, the positioning reference data is used to obtain a 3D reference point cloud. This is indicative of the environment surrounding the navigable element or junction to which the data relates, and thus of the environment surrounding the vehicle as it travels along the element or through the junction. The point cloud of the real-time sensor data relates to the environment surrounding the vehicle, and may thus also be said to relate to the environment surrounding the navigable element or junction at which the vehicle is positioned. In some preferred embodiments, the 3D point cloud obtained based on the positioning reference data is compared to a 3D point cloud, indicative of the environment surrounding the vehicle (i.e., when traveling along the relevant element or through the junction), obtained based on the real-time scan data. The position of the vehicle may then be adjusted based on this comparison, rather than on a comparison of the depth maps (e.g., raster images).
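The description does not prescribe a particular correlation method for the point clouds; one simple, hedged possibility is a translation-only, ICP-like scheme that repeatedly averages nearest-neighbour residuals, sketched here under the assumption that both clouds are (N, 3) NumPy arrays:

```python
import numpy as np
from scipy.spatial import cKDTree

def point_cloud_offset(reference_pts, live_pts, iterations=5):
    """Estimate the 3D translation that aligns the real-time point cloud
    with the reference point cloud by iteratively averaging
    nearest-neighbour residuals (a crude, translation-only ICP)."""
    tree = cKDTree(reference_pts)
    offset = np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(live_pts + offset)       # nearest reference point
        residuals = reference_pts[idx] - (live_pts + offset)
        offset += residuals.mean(axis=0)             # step towards alignment
    return offset   # use to adjust the considered current position
```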
A real-time scanned data point cloud is obtained using one or more sensors associated with the vehicle. A single sensor or a plurality of such sensors may be used, and in the latter case any combination of sensor types may be used. The sensor may include any one or some of a set of one or more laser scanners, a set of one or more radar scanners, and a set of one or more cameras, such as a single camera or a pair of stereo cameras. A single laser scanner, radar scanner, and/or camera may be used. In the case where the vehicle is associated with a camera or cameras, images obtained from the camera or cameras may be used to construct a three-dimensional scene indicative of the environment surrounding the vehicle, and a 3-dimensional point cloud may be obtained using the three-dimensional scene. For example, where the vehicle uses a single camera, the point cloud may be determined therefrom by obtaining a two-dimensional image sequence from the camera as the vehicle travels along the navigable element or through the junction, constructing a three-dimensional scene using the two-dimensional image sequence, and obtaining a three-dimensional point cloud using the three-dimensional scene. In the case of vehicles associated with stereoscopic cameras, the images obtained from the cameras may be used to obtain a three-dimensional scene, which is then used to obtain a three-dimensional point cloud.
By transforming the depth map of the positioning reference data into a 3D point cloud, it can be compared with a 3D point cloud obtained by real-time scanning with vehicle sensors, irrespective of what the vehicle sensors may be. For example, positioning reference data may be based on reference scanning using a variety of sensor types, including laser scanners, cameras, and radar scanners. The vehicle may or may not have a corresponding set of sensors. For example, a typical vehicle may include only one or more cameras.
The positioning reference data may be used to determine a reference point cloud indicative of an environment surrounding the vehicle that corresponds to a point cloud expected to be generated by at least one sensor of the vehicle. In case the reference point cloud is obtained using sensors of the same type as the sensor type of the vehicle, this may be straightforward and all positioning reference data may be used in constructing the 3D point cloud. Similarly, under certain conditions, data sensed by one type of sensor may be similar to data sensed by another sensor. For example, an object that is sensed by a laser sensor when providing reference positioning data is expected to also be sensed by a camera of the vehicle during the day. However, the method may include only those points in the 3D point cloud that are expected to be detectable by a sensor or sensors of the type associated with the vehicle and/or that are expected to be detected under the current conditions. The positioning reference data may include data that enables generation of an appropriate reference point cloud.
In some embodiments, as described above, each pixel of the positioning reference data further includes at least one channel indicative of a value of the sensed reflectivity. Each pixel may include one or more of a channel indicating a value of sensed laser reflectivity and a channel indicating a value of sensed radar reflectivity. Preferably, a channel is provided that indicates both radar and laser reflectivity. Next, the step of generating a 3D point cloud based on the positioning reference data is preferably performed using the sensed reflectivity data. The generation of the 3D point cloud may also be based on the type of sensor or sensors of the vehicle. The method may include selecting a 3D point included in the reference 3D point cloud using the reflectivity data and data indicative of a type of sensor or sensors of the vehicle. The data of the reflectivity channels is used to select data from the depth channels for generating a 3D point cloud. The reflectivity channel gives an indication of whether a particular object will be sensed by the relevant sensor type (under the current conditions where appropriate).
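For example, the selection might be sketched as a simple reflectivity mask over the depth channel, with the channel layout and threshold being illustrative assumptions rather than features of the invention:

```python
import numpy as np

def sensor_matched_depths(depth, laser_refl, radar_refl,
                          sensor_type="radar", threshold=0.2):
    """Retain only those depth-map pixels whose reflectivity channel
    suggests the surface would be sensed by the vehicle's sensor type
    (under the current conditions where appropriate)."""
    if sensor_type == "radar":
        mask = radar_refl > threshold    # radar-visible surfaces only
    elif sensor_type == "camera_dark":
        mask = laser_refl > threshold    # proxy for a camera at night
    else:
        mask = np.isfinite(depth)        # otherwise keep all valid pixels
    # Masked-out pixels are dropped when building the reference 3D point
    # cloud (e.g. by the depth_map_to_point_cloud sketch above).
    return np.where(mask, depth, np.nan)
```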
For example, where the reference data is based on data obtained from a laser scanner and a radar scanner and the vehicle has only a radar scanner, radar reflectivity values may be used to select those points included in the 3D points expected to be sensed by the radar scanner of the vehicle. In some embodiments, each pixel includes a channel that indicates radar reflectivity, and the method includes the step of using radar reflectivity data to generate a 3D reference point cloud containing only those points to be sensed by the radar sensor. In case the method further comprises comparing the 3D reference point cloud with a 3D point cloud obtained based on the real-time scan data, the 3D point cloud of the real-time scan data is thus based on data obtained from the radar scanner. The vehicle may include only a radar scanner.
While the vehicle may include radar and/or laser scanners, in many cases the vehicle may include only one camera or multiple cameras. The laser reflectivity data may provide a way to obtain a 3D reference point cloud related to a 3D point cloud expected to be sensed in dark conditions by a vehicle having only one camera or multiple cameras as sensors. The laser reflectivity data provides an indication of those objects that may be expected to be detected by the camera at night. In some embodiments, each pixel includes a channel that indicates laser reflectivity, and the method includes the step of using the laser reflectivity data to generate a 3D reference point cloud containing only those points that are to be sensed by the vehicle's camera during dark conditions. In case the method further comprises comparing the 3D reference point cloud with a 3D point cloud obtained based on real-time scan data, the 3D point cloud of real-time scan data may thus be based on data obtained from the camera in dark conditions.
It is believed to be advantageous in itself to obtain reference positioning data in the form of a three-dimensional point cloud, to use this data to reconstruct a reference view, such as an image expected to be obtainable from one or more cameras of the vehicle under the applicable conditions, and then to be able to compare that reference view to the image obtained by the cameras.
In some embodiments, the method may include using the generated positioning reference data of any aspect or embodiment of the invention in reconstructing a view expected to be obtained, under applicable conditions, from one or more cameras associated with a vehicle traveling along a navigable element, or through a junction, of a navigable network represented by a digital map, the method comprising: obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along the navigable element or at the junction; determining a reference point cloud indicative of the environment surrounding the vehicle using the positioning reference data, the reference point cloud comprising a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment; and reconstructing, using the reference point cloud, a reference view expected to be obtainable under applicable conditions by the one or more cameras associated with the vehicle when traversing the navigable element or junction. The method may further include determining, using the one or more cameras, a real-time view of the environment surrounding the vehicle, and comparing the reference view to the real-time view obtained by the one or more cameras.
According to another aspect of the present invention there is provided a method of reconstructing views expected to be obtainable under applicable conditions from one or more cameras associated with a vehicle travelling along a navigable element of a navigable network represented by a digital map, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle along a navigable element of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a position in the reference plane associated with the navigable element along which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining, using the positioning reference data, a reference point cloud indicative of the environment around the vehicle, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;
Reconstructing, using the reference point cloud, a reference view expected to be available under applicable conditions by one or more cameras associated with the vehicle when traversing the navigable element;
determining a real-time view of the environment surrounding the vehicle using the one or more cameras, and
The reference view is compared to the real-time view obtained by the one or more cameras.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
According to another aspect of the present invention there is provided a method of reconstructing views expected to be obtainable under applicable conditions from one or more cameras associated with a vehicle travelling through a junction of a navigable network represented by a digital map, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle at a junction of the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction through which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment;
Determining, using the positioning reference data, a reference point cloud indicative of the environment around the vehicle, the reference point cloud including a set of first data points in a three-dimensional coordinate system, wherein each first data point represents a surface of an object in the environment;
Reconstructing, using the reference point cloud, a reference view expected to be obtainable under applicable conditions by the one or more cameras associated with the vehicle when traversing the junction;
determining a real-time view of the environment surrounding the vehicle using the one or more cameras, and
The reference view is compared to the real-time view obtained by the one or more cameras.
The invention according to this further aspect may comprise any or all of the features described in relation to the other aspects of the invention, provided that they are not mutually inconsistent.
These aspects of the invention are particularly advantageous in allowing the construction of reference views that are comparable to real-time views obtained by cameras of vehicles, but based on positioning reference data that can be obtained from different types of sensors. It has been recognized that in practice, many vehicles will be equipped with only one camera or multiple cameras, rather than more specific or complex sensors, such as may be used to obtain reference data.
In these further aspects and embodiments of the invention, the comparison of the reference view to the real-time view may be used as desired. For example, the comparison results may be used to determine the location of the vehicle as in the earlier described aspects and embodiments. The method may include calculating a correlation between the real-time view and the reference view to determine an alignment offset between the views, and adjusting a considered current position of the vehicle using the determined alignment offset to determine a position of the vehicle relative to the digital map.
The applicable conditions are those conditions prevailing at the current time, and may be lighting conditions. In some embodiments, the applicable condition is a dark condition.
According to any of the embodiments described above, the reference view is reconstructed using a 3D reference point cloud obtainable from the positioning reference data. The step of reconstructing a reference view expected to be obtainable by the one or more cameras preferably comprises using data of a reflectivity data channel associated with the pixels of the depth map of the positioning reference data. Preferably, therefore, each pixel of the positioning reference data further comprises at least one channel indicative of a value of sensed laser reflectivity, and the step of generating the 3D point cloud based on the positioning reference data is performed using the sensed laser reflectivity data. The laser reflectivity data may be used to select data from the depth channel for use in generating the reference 3D point cloud, so as to result in a reconstructed reference view corresponding to a view expected to be obtainable from the one or more cameras of the vehicle, e.g., including those objects expected to be visible under the applicable conditions (e.g., darkness). The one or more cameras of the vehicle may be a single camera, or a pair of stereoscopic cameras, as described above.
Comparison of the real-time scan data with the positioning reference data, whether by comparison of depth maps, by comparison of point clouds, or by comparison of reconstructed images with real-time images, which may be performed in accordance with the various aspects and embodiments of the present invention, may be performed on a window of data. The window is taken along the direction of travel, i.e., it is a window of longitudinal data. The windowed data thus allows the comparison to take into account a subset of the available data. The comparison may be performed periodically for overlapping windows. At least some overlap between successive windows of data used for comparison is desirable; this may ensure, for example, that adjacent calculated offset values, e.g., longitudinal offset values, vary smoothly. The window should have a length sufficient for the accuracy of the offset calculation to be insensitive to transient features, preferably a length of at least 100 m. Such transient features may be, for example, parked vehicles, overtaking vehicles, or vehicles traveling in the opposite direction on the same route. In some embodiments, the length is at least 50 m. In some embodiments, the length is 200 m. In this way, sensed environmental data is determined for a segment of road (a 'window', e.g., a 200 m longitudinal segment), and the resulting data is then compared to the positioning reference data for that segment. By performing the comparison on a road segment of this size (i.e., a road segment substantially greater than the length of the vehicle), non-stationary or temporary objects (e.g., other vehicles on the road, vehicles stopped beside the road, etc.) typically do not affect the comparison result.
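The windowed comparison might be driven by a loop of the following shape, where the window and step lengths follow the illustrative figures in the text and `compare_window` stands for whichever comparison (depth map, point cloud, or reconstructed view) is in use; the function names and sample spacing are assumptions:

```python
def windowed_offsets(sensed, reference, compare_window,
                     window_m=200.0, step_m=100.0, metres_per_sample=0.5):
    """Compare sensed and reference data over overlapping longitudinal
    windows, yielding one offset estimate per window; overlapping windows
    help adjacent offset estimates vary smoothly."""
    win = int(window_m / metres_per_sample)
    step = int(step_m / metres_per_sample)   # 50% overlap with these values
    offsets = []
    for start in range(0, len(sensed) - win + 1, step):
        offsets.append(compare_window(sensed[start:start + win],
                                      reference[start:start + win]))
    return offsets
```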
At least a portion of the positioning reference data used in accordance with any aspect or embodiment of the invention may be stored remotely. Preferably, in the case of a vehicle, at least part of the positioning reference data is stored locally on the vehicle. Thus, even if the positioning reference data is available throughout the route, it need not be continuously transmitted to the vehicle and the comparison can be performed on the vehicle.
The positioning reference data may be stored in a compressed format. The positioning reference data may have a size corresponding to 30KB/km or less.
The positioning reference data may be stored for at least part (and preferably all) of the navigable elements of the navigable network represented in the digital map. Thus, the position of the vehicle may be continuously determined anywhere along the route traveled by the vehicle.
In an embodiment, reference positioning data may have been obtained from a reference scan using at least one device positioned on a mobile mapping vehicle that has previously traveled along navigable elements that are subsequently traveled by the vehicle. Thus, the reference scan may have been acquired using a different vehicle than the current vehicle whose position was continuously determined. In some embodiments, the mobile mapping vehicle has a similar design as the vehicle whose location is continuously determined.
Real-time scan data and/or reference scan data may be obtained using at least one rangefinder sensor. The rangefinder sensor may be configured to operate along a single axis. The rangefinder sensor may be arranged to perform scanning on a vertical axis. When a scan is performed on the vertical axis, distance information for planes at multiple heights is collected, and thus the resulting scan is significantly more detailed. Alternatively or additionally, the rangefinder sensor may be arranged to perform scanning on a horizontal axis.
The rangefinder sensor may be a laser scanner. The laser scanner may include a laser beam that is scanned across the lateral environment using a mirror. Additionally or alternatively, the rangefinder sensor may be a radar scanner and/or a pair of stereo cameras.
The invention extends to a device, such as a navigation device, vehicle, or the like, having means, such as one or more processors, arranged (e.g., programmed) to perform any of the methods described herein.
The step of generating positioning reference data described herein is preferably performed by a server or another similar computing device.
Means for implementing any steps of the method may include a set of one or more processors configured (e.g., programmed) to do so. The given step may be implemented using the same or a different set of processors as any other step. Any given step may be implemented using a combination of processor sets. The system may further comprise data storage means, such as computer memory, for storing, for example, digital maps, positioning reference data, and/or real-time scan data.
In a preferred embodiment, the method of the present invention is implemented by a server or similar computing device. In other words, the proposed method of the invention is preferably a computer implemented method. Thus, in embodiments, the system of the present invention comprises a server or similar computing device comprising means for implementing the various steps described, and the method steps described herein are implemented by the server.
The invention further extends to a computer program product comprising computer readable instructions executable to perform or cause a device to perform any of the methods described herein. The computer program product is preferably stored in a non-transitory physical storage medium.
As will be appreciated by those skilled in the art, aspects and embodiments of the invention may, and preferably do, include any or all of the preferred and optional features of the invention described herein with respect to any other aspect of the invention, as appropriate.
Detailed description of the preferred embodiments
It has been recognized that there is a need for an improved method for determining the position of a device (e.g., a vehicle) relative to a digital map (representing a navigable network, such as a road network). In particular, it is desirable to be able to accurately determine (e.g., with sub-meter accuracy) the longitudinal position of the device relative to the digital map. The term "longitudinal" in this disclosure refers to the direction along the portion of the navigable network on which the device (e.g., vehicle) moves, in other words along the length of the road on which the vehicle travels. The term "lateral" has its normal meaning, i.e., perpendicular to the longitudinal direction, and thus refers to the direction along the width of the road.
As will be appreciated, when the digital map comprises a planning map as described above (e.g., a three-dimensional vector model in which each lane of a road is represented separately, as opposed to a single centerline per road as in a standard map), determining the lateral position of the device (e.g., vehicle) simply involves determining the lane in which the device is currently traveling. Various techniques are known for making such a determination. For example, a great deal of research has been conducted in recent years in which image data from one or more cameras mounted within a vehicle is analyzed, e.g., using various image processing techniques, to detect and track the lane in which the vehicle travels. One exemplary technique is set forth in the paper by Junhwa Hur, Seung-Nam Kang and Seung-Woo Seo, "Multi-lane detection in urban driving environments using conditional random fields", published in the proceedings of the Intelligent Vehicles Symposium, pp. 1297-1302, IEEE (2013). Here, the device may have data feeds from cameras, radar and/or lidar sensors, and may process the received data in real time using an appropriate algorithm to determine the current lane in which the device or vehicle is traveling. Alternatively, another device or apparatus, such as the Mobileye system commercially available from Mobileye N.V. of the Netherlands, may provide a determination of the current lane of the vehicle based on these data feeds, and then supply the determination of the current lane to the device, e.g., through a wired connection or a Bluetooth connection.
In an embodiment, the longitudinal position of the vehicle may be determined by comparing a real-time scan of the environment surrounding the vehicle (and preferably on one or both sides of the vehicle) with a reference scan of the environment associated with the digital map. From this comparison, a longitudinal offset (if present) can be determined, and the determined offset can be used to match the location of the vehicle with the digital map. Thus, the position of the vehicle relative to the digital map can always be known with high accuracy.
Real-time scanning of the environment surrounding the vehicle may be obtained using at least one rangefinder sensor positioned on the vehicle. The at least one rangefinder sensor may take any suitable form, but in a preferred embodiment comprises a laser scanner, i.e., a LIDAR device. The laser scanner may be configured to scan a laser beam through the environment and create a point cloud representation of the environment, each point indicating the location of a surface of an object from which the laser light is reflected. As will be appreciated, the laser scanner is configured to record the time taken for the laser beam to return to the scanner after being reflected from the surface of an object, and the recorded time can then be used to determine the distance to each point. In a preferred embodiment, the rangefinder sensor is configured to operate along a single axis so as to obtain data within a certain acquisition angle (e.g., between 50° and 90°, such as 70°); for example, when the sensor comprises a laser scanner, a mirror within the device is used to scan the laser beam.
An embodiment in which the vehicle 100 travels along a roadway is shown in fig. 7. The vehicle is equipped with rangefinder sensors 101, 102 on each side of the vehicle. While sensors are shown on each side of the vehicle, in other embodiments, only a single sensor may be used on one side of the vehicle. Preferably, the sensors are properly aligned so that the data from each sensor can be combined, as discussed in more detail below.
WO 2011/146523 A2 provides an example of a scanner that may be used on-board a vehicle to capture reference data in the form of a 3-dimensional point cloud, or that may also be used on an autonomous vehicle to obtain real-time data relating to the surrounding environment.
As discussed above, the rangefinder sensor may be arranged to operate along a single axis. In one embodiment, the sensor may be arranged to perform scanning in a horizontal direction (i.e. in a plane parallel to the road surface). This is shown, for example, in fig. 7. By continually scanning the environment as the vehicle travels along the road, sensed environmental data as shown in fig. 8 can be collected. The data 200 is data collected from the left sensor 102 and shows the object 104. Data 202 is data collected from right sensor 101 and shows objects 106 and 108. In other embodiments, the sensor may be arranged to perform scanning in a vertical direction (i.e. in a plane perpendicular to the road surface). By continuously scanning the environment as the vehicle travels along the road, it is possible to collect environmental data in the manner of fig. 6. It will be appreciated that by performing the scan in the vertical direction, distance information is collected for planes at multiple heights, and thus the resulting scan is significantly more detailed. It will of course be appreciated that scanning may be performed along any axis as desired.
A reference scan of the environment is obtained from one or more vehicles that have previously traveled along the road, and is then properly aligned with and associated with the digital map. The reference scan is stored in a database associated with the digital map and is referred to herein as positioning reference data. The combination of the positioning reference data, when matched to the digital map, may be referred to as a positioning map. As will be appreciated, the positioning map will be created remotely from the vehicle, typically by a digital mapping company such as TomTom International B.V. or HERE, a Nokia company.
The reference scan may be obtained from a dedicated vehicle, such as a mobile mapping vehicle (e.g., as shown in fig. 3). However, in a preferred embodiment, the reference scan may be determined from sensed environmental data collected by the vehicle as it travels along the navigable network. This sensed environmental data may be stored and periodically sent to a digital mapping company to create, maintain, and update a location map.
While the positioning reference data is preferably stored locally at the vehicle, it should be appreciated that the data may be stored remotely. In an embodiment, and in particular when locally storing the positioning reference data, the data is stored in a compressed format.
In an embodiment, positioning reference data is collected for each side of a road in a road network. In such embodiments, the reference data for each side of the road may be stored separately, or alternatively it may be stored together in a combined dataset.
In an embodiment, the positioning reference data may be stored as image data. The image data may be a color (e.g., RGB) image or a grayscale image.
Fig. 9 shows an exemplary format of how positioning reference data may be stored. In this embodiment, the reference data for the left side of the road is provided on the left side of the image and the reference data for the right side of the road is provided on the right side of the image, the data sets being aligned such that the left side reference data set for a particular longitudinal position is shown as opposed to the right side reference data set for the same longitudinal position.
In the image of fig. 9, and for illustrative purposes only, the longitudinal pixel size is 0.5m, with 40 pixels on each side of the centerline. It has also been determined that images may be stored as grayscale images, rather than color (RGB) images. By storing the image in this format, the positioning reference data has a size corresponding to 30 KB/km.
Another example can be seen in fig. 10A and 10B. FIG. 10A shows an example point cloud acquired by ranging sensors mounted to a vehicle traveling along a road. In fig. 10B, this point cloud data has been converted into two depth maps, one for the left side of the vehicle and the other for the right side of the vehicle, which have been placed close to each other to form a composite image.
As discussed above, sensed environmental data determined by the vehicle is compared to positioning reference data to determine if an offset exists. Any determined offset can then be used to adjust the position of the vehicle so that it exactly matches the correct position on the digital map. This determined offset is referred to herein as the correlation index.
In an embodiment, sensed environmental data is determined for a longitudinal road segment (e.g., 200 m), and then the resulting data (e.g., image data) is compared to positioning reference data for the road segment. By performing the comparison on a road segment of this size (i.e., a road segment that is substantially greater than the length of the vehicle), non-stationary or temporary objects (e.g., other vehicles on the road, vehicles stopped beside the road, etc.) will generally not affect the comparison result.
Preferably, the comparison is performed by calculating a cross-correlation between the sensed environmental data and the positioning reference data in order to determine the longitudinal position at which the data set is aligned to the highest degree. The difference between the longitudinal positions of the two data sets of maximum alignment allows for determination of the longitudinal offset. This can be seen, for example, by the offset indicated between the sensed environmental data and the positioning reference data of fig. 8.
In an embodiment, when the data set is provided as an image, the cross-correlation includes a normalized cross-correlation operation such that differences in brightness, lighting conditions, etc. between the positioning reference data and the sensed environmental data may be mitigated. Preferably, the comparison is performed periodically on overlapping windows (e.g., 200m long) such that any offset is continuously determined as the vehicle travels along the road. Fig. 11 shows, in an exemplary embodiment, the determined offset as a function of normalized cross-correlation calculation between the depicted positioning reference data and the depicted sensed environmental data.
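A minimal sketch of such a normalized cross-correlation search, collapsing each depth-map image to a one-dimensional longitudinal profile for brevity (an assumption of this sketch, not a requirement of the method), is:

```python
import numpy as np

def best_longitudinal_shift(reference_img, sensed_img, max_shift=40):
    """Return the longitudinal pixel shift maximizing the normalized
    cross-correlation between two depth-map images."""
    ref = reference_img.mean(axis=0)     # collapse height to a 1D profile
    sen = sensed_img.mean(axis=0)
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        r = ref[max(0, shift):len(ref) + min(0, shift)]
        s = sen[max(0, -shift):len(sen) + min(0, -shift)]
        n = min(len(r), len(s))
        r, s = r[:n], s[:n]
        # Normalization mitigates differences in brightness, lighting
        # conditions, etc. between the two data sets.
        r = (r - r.mean()) / (r.std() + 1e-9)
        s = (s - s.mean()) / (s.std() + 1e-9)
        score = float(np.mean(r * s))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift, best_score
```

The shift at the maximum peak corresponds to the best-fit longitudinal offset discussed in the text.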
Fig. 12 illustrates another example of a correlation performed between a "reference" data set and a "local measurement" data set (acquired by a vehicle as it travels along a road). The result of the correlation between the two images can be seen in the plot of "shift" versus "longitudinal correlation index", where the location of the maximum peak is used to determine the illustrated best fit shift, which can then be used to adjust the longitudinal position of the vehicle relative to the digital map.
As can be seen from fig. 9, 10B, 11 and 12, the positioning reference data and the sensed environmental data are preferably in the form of a depth map, wherein each element (e.g., pixel when the depth map is stored as an image) comprises a first value indicative of a longitudinal position (along the road), a second value indicative of a height (i.e., a height above the ground), and a third value indicative of a lateral position (across the road). Each element (e.g., pixel) of the depth map thus effectively corresponds to a portion of the surface of the environment surrounding the vehicle. As will be appreciated, the size of the surface represented by each element (e.g., pixel) will vary with the amount of compression such that the element (e.g., pixel) will represent a larger surface area with a higher level of compression of the depth map (or image).
In embodiments, where the positioning reference data is stored in a data storage means (e.g., memory) of the device, the comparing step may be performed on one or more processors within the vehicle. In other embodiments, where the positioning reference data is stored remotely from the vehicle, the sensed environmental data may be sent to a server over a wireless connection, for example, via a mobile telecommunications network. The server capable of accessing the positioning reference data will then return any determined offset to the vehicle (e.g., also using the mobile telecommunications network).
An exemplary system within a vehicle according to an embodiment of the invention is depicted in fig. 13. In this system, a processing device, referred to as a correlation index provider unit, receives data feeds from a ranging sensor positioned to detect the environment on the left side of the vehicle and a ranging sensor positioned to detect the environment on the right side of the vehicle. The processing device also accesses a database of digital maps, preferably in the form of planning maps, together with positioning reference data appropriately matched to the digital maps. The processing device is arranged to perform the above method, and thus to compare the data feeds from the ranging sensors with the positioning reference data to determine the longitudinal offset, and hence the exact position of the vehicle relative to the digital map, optionally after converting the data feeds into a suitable form (e.g., combining the data from the two sensors into a single image). The system also includes a horizon provider unit, which uses the determined position of the vehicle and data within the digital map to provide information (referred to as "horizon data") about the upcoming portion of the navigable network that the vehicle is about to traverse. This horizon data may then be used to control one or more systems within the vehicle to perform various assisted or automated driving operations, such as adaptive cruise control, automated lane changing, emergency braking assistance, and the like.
In summary, the present invention relates, at least in preferred embodiments, to a positioning method based on longitudinal correlation. The 3D space around the vehicle is represented in the form of two depth maps that cover the left and right sides of the road and which can be combined into a single image. The reference image stored in the digital map is cross-correlated with a depth map derived from a laser or other ranging sensor of the vehicle to accurately locate the vehicle along (i.e., longitudinally) a representation of the road in the digital map. In an embodiment, the depth information may then be used to position the vehicle across (i.e., laterally across) the road.
In a preferred implementation, the 3D space around the vehicle is projected onto two grids parallel to the road trajectory, and the projected values are averaged within each cell of the grids. The pixels of the longitudinal correlator depth map have a dimension along the direction of travel of about 50 cm and a height of about 20 cm. The depth encoded by the pixel values is quantized to about 10 cm. Although the depth map image resolution along the direction of travel is 50 cm, the resolution of the positioning is much higher. The cross-correlated images represent grids over which the laser points are distributed and averaged; with suitable upsampling, the shift vector can be found with sub-pixel precision. Similarly, a depth quantization of about 10 cm does not imply a positioning accuracy of 10 cm across the road, since the quantization error is averaged over all relevant pixels. In practice, therefore, the accuracy of the positioning is limited mainly by the laser accuracy and calibration, while the quantization error of the longitudinal correlator index contributes only very little.
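As a minimal sketch of how sub-pixel precision can be recovered from an integer-resolution correlation curve, a parabola can be fitted through the correlation peak and its two neighbours; this is a common alternative to explicit upsampling and is shown purely as an assumed illustration, not as the method actually used.

```python
import numpy as np

def subpixel_peak(corr):
    """Refine the integer argmax of a 1-D correlation curve to sub-pixel
    precision by fitting a parabola through the peak and its two
    neighbours (a common alternative to explicit upsampling)."""
    i = int(np.argmax(corr))
    if 0 < i < len(corr) - 1:
        y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            return i + 0.5 * (y0 - y2) / denom
    return float(i)
```

A peak refined to, say, 123.4 pixels would correspond to a longitudinal position of 61.7 m at 50 cm per pixel, illustrating how the positioning resolution exceeds the 50 cm image resolution.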
It should therefore be appreciated that positioning information in the form of depth maps (or images) is always available (even if there are no sharp objects in the surrounding environment), is compact (storing the road network for the whole world is feasible), and offers accuracy comparable to or even better than other methods, because the data is available everywhere and thus allows errors to be averaged over more measurements.
FIG. 14A shows an exemplary raster image forming part of a piece of positioning reference data. The raster image is formed by orthogonally projecting the collected 3D laser point data onto a hyperplane defined by the reference line and oriented perpendicular to the road surface. Due to the orthogonality of the projection, any height information is independent of the distance from the reference line. The reference line itself extends generally parallel to the lane/road boundary. The actual representation of the hyperplane is a raster format with a fixed horizontal resolution and a nonlinear vertical resolution. This approach aims to maximize the information density at those heights that are important for detection by vehicle sensors. Experiments have shown that a raster plane height of 5 to 10 meters is sufficient to capture enough of the relevant information necessary for later use in vehicle positioning. Each individual pixel in the raster reflects a set of laser measurements. Like the vertical resolution, the depth information is also represented in a nonlinear manner, typically stored as an 8-bit value (i.e., as a value from 0 to 255). Fig. 14A shows data for both sides of a road. Fig. 14B shows a bird's eye perspective view of the data of fig. 14A as two separate planes on the left and right sides of the road.
As discussed above, a vehicle equipped with front- or side-mounted, horizontally oriented laser scanner sensors is capable of generating, in real time, 2D planes similar to the 2D planes of the positioning reference data. The positioning of the vehicle relative to the digital map is achieved by image-space correlation of the a priori map data with the real-time sensed and processed data. Longitudinal vehicle positioning is obtained by applying an averaged non-negative normalized cross-correlation (NCC) operation, calculated in overlapping moving windows, to images processed with a 1-pixel blur in the height domain and a Sobel operator in the longitudinal domain.
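The preprocessing described above might be sketched as follows, interpreting the "1-pixel blur in the height domain" as a small smoothing filter over image rows followed by a Sobel gradient across image columns; the exact filter sizes and the function name are assumptions.

```python
import numpy as np
from scipy import ndimage

def preprocess_for_ncc(depth_image):
    """Smooth the image slightly along the height (row) axis, then take
    a Sobel gradient along the longitudinal (column) axis, so that the
    subsequent cross-correlation keys on vertical edges in the scene."""
    blurred = ndimage.uniform_filter1d(depth_image.astype(float),
                                       size=3, axis=0)
    return ndimage.sobel(blurred, axis=1)
```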
Fig. 15A shows the fixed longitudinal resolution and variable (e.g., non-linear) vertical and/or depth resolution of the positioning reference data and real-time scan data. Thus, although the longitudinal distances represented by the values a, b and c are the same, the height ranges represented by the values D, E and F are different. In particular, the height range represented by D is less than the height range represented by E, and the height range represented by E is less than the height range represented by F. Similarly, the depth range represented by the value 0 (i.e., the surface closest to the vehicle) is less than the depth range represented by the value 100, and the depth range represented by the value 100 is less than the depth range represented by the value 255 (i.e., the surface furthest from the vehicle). For example, the value 0 may represent a depth range of 1 cm, while the value 255 may represent a depth range of 10 cm.
Fig. 15B illustrates how the vertical resolution may vary. In this example, the vertical resolution varies based on a nonlinear function that maps the height above the reference line to the pixel Y-coordinate value. As shown in fig. 15B, pixels closer to the reference line (Y equal to 40 in this example) represent lower heights. As also shown in fig. 15B, the vertical resolution is greater closer to the reference line, i.e., the change in height per pixel is smaller for pixels closer to the reference line and larger for pixels farther from the reference line.
Fig. 15C illustrates how depth resolution may vary. In this example, the depth resolution varies based on a nonlinear function that maps distance from a reference line to pixel depth (color) values. As shown in fig. 15C, lower pixel depth values represent a shorter distance from the reference line. As also shown in fig. 15C, the depth resolution is greater at lower pixel depth values, i.e., the distance change relative to the pixel depth values is smaller for lower pixel depth values and greater for higher pixel depth values.
Fig. 15D illustrates how a subset of pixels may map to distances along a reference line. As shown in fig. 15D, each pixel along the reference line is the same width, such that the longitudinal pixel resolution is fixed. Fig. 15D also illustrates how a subset of pixels may map to a height above the reference line. As shown in fig. 15D, the pixels gradually widen at greater distances from the reference line, such that the vertical pixel resolution is lower at greater heights above the reference line. Fig. 15D also illustrates how a subset of pixel depth values may map to a distance from the reference line. As shown in fig. 15D, the distance covered by each pixel depth value gradually widens at greater distances from the reference line, such that the depth resolution is lower at greater depth distances from the reference line.
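The nonlinear mappings of figs. 15B to 15D might, for example, be realized with functions of the following form; the square-root profile and the maximum height and depth constants are purely hypothetical choices, chosen only to illustrate finer resolution near the reference line and near the road.

```python
import numpy as np

MAX_HEIGHT_M = 10.0  # assumed raster plane height (cf. fig. 14A discussion)
MAX_DEPTH_M = 30.0   # assumed maximum encoded lateral distance

def height_to_row(height_m, n_rows=40):
    """Map height above the reference line to a pixel row; the square
    root compresses greater heights, so rows near the reference line
    cover smaller height ranges (finer vertical resolution)."""
    frac = np.sqrt(np.clip(height_m, 0.0, MAX_HEIGHT_M) / MAX_HEIGHT_M)
    return np.minimum((frac * n_rows).astype(int), n_rows - 1)

def depth_to_value(depth_m):
    """Quantize lateral distance into an 8-bit depth value; nearby
    surfaces get narrower quantization bins (finer depth resolution)."""
    frac = np.sqrt(np.clip(depth_m, 0.0, MAX_DEPTH_M) / MAX_DEPTH_M)
    return np.minimum((frac * 256).astype(int), 255)
```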
Some further embodiments and features of the present invention will now be described.
As described with respect to fig. 14A, a depth map (e.g., a raster image) of positioning reference data may be provided by orthogonal projection onto a reference plane defined by a reference line associated with a road element. Fig. 16A illustrates the result of using this projection. The reference plane is perpendicular to the road reference line shown. Here, although the height information is independent of the distance from the reference line, which may provide some advantages, one limitation of orthogonal projection is that information about surfaces perpendicular to the road element may be lost. This is illustrated by the side depth map of fig. 16B obtained using orthogonal projection.
If a non-orthogonal projection is used, for example at 45 degrees, this information about surfaces perpendicular to the road element can be preserved. This is shown by figs. 16C and 16D. Fig. 16C illustrates a 45 degree projection onto a reference plane, again defined perpendicular to the road reference line. As shown in fig. 16D, the side depth map obtained using this projection includes more information about those surfaces of objects that are perpendicular to the road element. By using a non-orthogonal projection, information about such perpendicular surfaces can be captured in the depth map data without needing to include additional data channels or otherwise increase the storage required. It will be appreciated that where such a non-orthogonal projection is used for the depth map data of the positioning reference data, the corresponding projection should also be used for the real-time sensed data with which it is compared.
Each pixel of the depth map data of the positioning reference data is based on a set of sensed measurements, e.g., laser measurements. These measurements correspond to sensor measurements indicating the distance of an object from the reference plane along the relevant predetermined direction at the location of the pixel. Due to the way the data is compressed, a set of sensor measurements will be mapped to a particular pixel. Rather than setting the depth value of the pixel to the average of the distances of that set of sensor measurements, it has been found that using the nearest of those distances for the pixel depth value can achieve greater accuracy. It is important that the depth value of a pixel accurately reflects the distance from the reference plane to the nearest surface of an object. This matters most when the position of the vehicle must be determined accurately in a manner that will minimize the risk of collision. If the average of a set of sensor measurements is used to provide the depth value of a pixel, there is a possibility that the depth value will indicate a greater distance to the object surface than is in fact the case at the pixel location. This is because one object may be located between the reference plane and another, more distant object; for example, a tree may stand in front of a building. In this case, because the sensor measurements mapped to the pixel relate to an area extending to one or more sides of the tree, some of the sensor measurements used to provide the pixel depth value will relate to the building and others to the tree. The applicant has appreciated that taking the closest of the various sensor measurements as the depth value associated with the pixel is the safest and most reliable approach, ensuring that the distance to the surface of the nearest object, in this case the tree, is reliably captured. Alternatively, a distribution of the sensor measurements for the pixel may be derived, and the closest mode of that distribution employed to provide the pixel depth. This will provide a reliable indication of the pixel depth in a manner similar to the nearest distance.
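A minimal sketch of this per-pixel reduction, assuming the sensed distances mapping to one pixel have already been collected into an array, might look as follows; the 32-bin histogram and the support threshold used to pick the closest well-supported mode are illustrative assumptions.

```python
import numpy as np

def pixel_depth(sensed_distances, method="closest"):
    """Collapse the sensed distances that map to one pixel into a single
    depth value. Taking the closest measurement (rather than the mean)
    ensures the nearest surface, e.g. a tree in front of a building, is
    captured rather than averaged away."""
    d = np.asarray(sensed_distances, dtype=float)
    if method == "closest":
        return float(d.min())
    # Alternative: the mode of the distance distribution closest to the
    # reference plane, using a well-supported histogram bin.
    counts, edges = np.histogram(d, bins=32)
    supported = np.nonzero(counts >= 0.5 * counts.max())[0]
    i = int(supported.min())  # closest strong mode
    return float(0.5 * (edges[i] + edges[i + 1]))
```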
As described above, the pixels of the depth map data of the positioning reference data include depth channels including data indicating depths from the locations of the pixels in the reference plane to the surface of the object. One or more additional pixel channels may be included in the positioning reference data. This will result in a multi-channel or layer depth map and thus a raster image. In some preferred embodiments, the second channel includes data indicative of a laser reflectivity of the object at the location of the pixel, and the third channel includes data indicative of a radar reflectivity of the object at the location of the pixel.
Each pixel has a position corresponding to a particular distance along the road reference line (x-direction) and a height above the road reference line (y-direction). The depth value associated with a pixel in the first channel c1 indicates the distance of the pixel in the reference plane to the surface of the nearest object (preferably corresponding to the nearest distance of a set of sensing measurements used to obtain the pixel depth value) along a predetermined direction (which may be orthogonal or non-orthogonal to the reference plane depending on the projection used). Each pixel may have a laser reflectivity value in the second channel c2 indicating the average local reflectivity of the laser spot near the distance c1 from the reference plane. In the third channel c3, the pixel may have a radar reflectivity value indicating an average local reflectivity of the radar point at a distance of about c1 from the reference plane. This is shown, for example, in fig. 17. The multi-channel format allows for a large amount of data to be contained in the depth map. Further possible channels that may be used are object thickness (which may be used to recover information about surfaces perpendicular to the road trajectory using orthogonal projections), reflection point density and color and/or texture (obtained, for example, from a camera used to provide reference scan data).
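By way of illustration only, such a multi-channel raster might be represented as a three-channel 8-bit image, as in the following sketch; the raster dimensions and the helper function are hypothetical.

```python
import numpy as np

N_ROWS, N_COLS = 40, 400  # illustrative raster dimensions

# Channel 0 = depth (c1), channel 1 = laser reflectivity (c2),
# channel 2 = radar reflectivity (c3), each quantized to 8 bits.
raster = np.zeros((N_ROWS, N_COLS, 3), dtype=np.uint8)

def set_pixel(row, col, depth_q, laser_refl, radar_refl):
    """Store the quantized depth together with the average local laser
    and radar reflectivities measured near that depth."""
    raster[row, col] = (depth_q, laser_refl, radar_refl)
```

Further channels, such as object thickness or color, would simply extend the last dimension of the array.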
Although the invention has been described with respect to embodiments in which the depth map of the positioning reference data relates to the environment on the lateral side of the road, it has been recognized that the use of differently configured depth maps may be useful for assisting in positioning vehicles at intersections. These additional embodiments may be used in conjunction with side depth maps of areas remote from the intersection.
In some further embodiments, the reference line is defined as circular; in other words, the reference line is non-linear. The circle is defined by a given radius centered on the center of the junction as represented in the digital map. The radius of the circle may be selected depending on the size of the junction. The reference plane may then be defined as a two-dimensional surface perpendicular to this reference line. A (circular) depth map may then be defined, wherein each pixel includes a channel indicating the distance along a predetermined direction from the position of the pixel in the reference plane to the surface of an object (i.e., the depth value), in the same manner as when using a linear reference line. The projection onto the reference plane may similarly be orthogonal or non-orthogonal, and each pixel may have multiple channels. The depth value of a given pixel is preferably based on the nearest sensed distance to an object.
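A sketch of how such a circular depth map might be built from a point cloud around a junction is given below; the angular and vertical binning, the handling of points below the reference line, and the choice to keep only the closest surface beyond the reference circle are assumptions consistent with the description above, not a definitive implementation.

```python
import numpy as np

def circular_depth_map(points, center, radius, n_angles=360, n_rows=40,
                       pixel_h_m=0.2):
    """Build a circular depth map around a junction: each column is an
    angular bin around the reference circle, each row a height bin, and
    each pixel stores the closest radial distance from the circle to a
    surface (np.inf where nothing was sensed)."""
    dx = points[:, 0] - center[0]
    dy = points[:, 1] - center[1]
    angles = (np.degrees(np.arctan2(dy, dx)) + 360.0) % 360.0
    cols = np.minimum((angles / 360.0 * n_angles).astype(int), n_angles - 1)
    rows = np.minimum((points[:, 2] / pixel_h_m).astype(int), n_rows - 1)
    depths = np.hypot(dx, dy) - radius  # distance beyond the circle
    depth_map = np.full((n_rows, n_angles), np.inf)
    for r, c, d in zip(rows, cols, depths):
        if d >= 0.0 and r >= 0:  # outside the circle, above the ground
            depth_map[r, c] = min(depth_map[r, c], d)
    return depth_map
```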
Fig. 18 indicates circular and linear reference lines that may be used to construct depth maps at and away from the intersection, respectively. FIG. 19A illustrates the manner in which objects may be projected onto a circular depth map at different angular positions. Fig. 19B indicates that each of the objects is projected onto a reference plane using orthogonal projection to provide a depth map.
The manner in which a depth map (whether circular or otherwise) of positioning reference data can be compared to real-time sensor data obtained from a vehicle in order to determine a longitudinal alignment offset between the reference and real-time sensed data has been described. In some further embodiments, a lateral alignment offset is also obtained. This involves a series of steps that can be performed in the image domain.
Referring to an example using a side depth map, in a first step of the process, a longitudinal alignment offset between the reference-based side depth map and the side depth map based on real-time sensor data is determined in the manner previously described. The depth maps are shifted relative to each other until they are longitudinally aligned. Next, the reference depth map, i.e., the raster image, is cropped to correspond in size to the depth map based on the real-time sensor data. Next, with the depth maps so aligned, the depth values of pixels at corresponding positions in the reference-based side depth map and the real-time sensor-based side depth map, i.e., the values of the pixels' depth channels, are compared. The difference in depth values for each pair of corresponding pixels indicates a lateral offset of the pixel. This can be evaluated by considering the color difference of the pixels, where the depth value of each pixel is represented by a color. The most common lateral offset (the mode difference) so determined between corresponding pixel pairs is taken to correspond to the lateral alignment offset of the two depth maps. The most common lateral offset may be obtained using a histogram of the depth differences between pixels. Once the lateral offset is determined, it can be used to correct the perceived lateral position of the vehicle on the road.
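Assuming the two depth maps have already been longitudinally aligned and cropped to equal size, and their depth channels decoded to metres, the mode-of-differences step might be sketched as follows; the bin width is an illustrative assumption.

```python
import numpy as np

def lateral_offset(ref_depths, live_depths, bin_width_m=0.1):
    """Return the most common per-pixel depth difference (the mode of a
    histogram of differences), taken as the lateral alignment offset of
    the two longitudinally aligned, equally sized depth maps."""
    diff = (ref_depths.astype(float) - live_depths.astype(float)).ravel()
    span = diff.max() - diff.min()
    bins = max(1, int(np.ceil(span / bin_width_m)))
    counts, edges = np.histogram(diff, bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])  # centre of the modal bin
```

Using the histogram mode rather than the mean keeps moving objects and other outliers from skewing the result.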
Fig. 20A illustrates a reference depth map (i.e., image) and a corresponding depth map or image based on real-time sensor data from the vehicle, which may be compared to determine a lateral alignment offset. As illustrated in fig. 20B, the images are first shifted relative to each other to align them longitudinally. Next, after cropping the reference image, a lateral alignment offset between the depth maps is determined using a histogram of the differences in pixel depth values of corresponding pixels in the two depth maps (fig. 20C). Fig. 20D illustrates how, in this way, first the longitudinal position and then the lateral position of the vehicle on the road can be corrected.
Once the lateral alignment offset between the reference-based depth map and the real-time data-based depth map has been obtained, the heading of the vehicle may also be corrected. It has been found that, where there is an offset between the actual direction of travel of the vehicle and the perceived direction of travel, the lateral alignment offset determined between corresponding pixels in the reference-based depth map and the real-time sensing data-based depth map will not be constant, but will vary as a function of longitudinal distance along the depth map.
Fig. 21A illustrates a set of vertical slices through corresponding portions of a reference depth map image (top) and a real-time sensor-based depth map image (bottom). The average difference (i.e., lateral alignment offset) of the pixel depth values of the corresponding pixels in each slice (y-axis) is plotted against the longitudinal distance along the map/image (x-axis). This plot is shown in fig. 21B. A function describing the relationship between the average pixel depth difference and the longitudinal distance along the depth map may then be derived by suitable regression analysis. The gradient of this function indicates the heading offset of the vehicle.
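A minimal sketch of this regression, using a first-order polynomial fit of the mean per-slice depth difference against the longitudinal pixel index, is given below; treating the fitted slope as the heading offset indicator follows the description above, while units and scaling are deliberately left out as assumptions.

```python
import numpy as np

def heading_offset_indicator(ref_depths, live_depths):
    """Average the per-pixel depth differences within each vertical
    slice (column), then fit a straight line to those averages as a
    function of longitudinal pixel index. A non-zero slope indicates
    that the lateral offset drifts along the road, i.e. a heading
    error; a zero slope indicates a pure lateral offset."""
    diff = ref_depths.astype(float) - live_depths.astype(float)
    mean_per_slice = diff.mean(axis=0)  # one average per column
    x = np.arange(mean_per_slice.size, dtype=float)
    slope, intercept = np.polyfit(x, mean_per_slice, 1)
    return slope
```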
The depth map used in embodiments of the present invention may be transformed so as to be always relative to a straight line reference line, i.e. so as to be a linear reference image, for example as described in WO 2009/045096 A1. This has the advantage as shown in fig. 22. At the left side of fig. 22 is an image of a curved road. In order to mark the center line of a curved road, several marks 1102 must be placed. At the right hand side of fig. 22, a corresponding linear reference image is shown corresponding to a curved road in the left side of the figure. To obtain a linear reference image, the centerline of the curved road is mapped to a straight line reference line of the linear reference image. In view of this transformation, the reference line can now be defined simply by the two endpoints 1104 and 1106.
On a perfectly straight road, the shift calculated from the comparison of the reference depth map with the real-time depth map can be applied directly; on a curved road this is not possible, owing to the non-linear nature of the linearization process used to generate the linear reference image. Figs. 23A and 23B show a computationally efficient method for establishing the position of a vehicle in a non-linear environment through a series of incremental, independent linear update steps. As shown in fig. 23A, the method involves applying a longitudinal correction, then a lateral correction, and then a heading correction in such a series of steps. In particular, in step (1), a longitudinal offset is determined using the vehicle sensor data and a reference depth map based on the currently assumed position of the vehicle relative to the digital map (e.g., obtained using GPS). The longitudinal offset is then applied to adjust the perceived position of the vehicle relative to the digital map, and the reference depth map is recalculated based on the adjusted position. Next, in step (2), a lateral offset is determined using the vehicle sensor data and the recalculated reference depth map. The lateral offset is then applied to further adjust the perceived position of the vehicle relative to the digital map, and the reference depth map is recalculated again based on the adjusted position. Finally, in step (3), the heading offset, or skew, is determined using the vehicle sensor data and the recalculated reference depth map. The heading offset is then applied to further adjust the perceived position of the vehicle relative to the digital map, and the reference depth map is recalculated once more based on the adjusted position. These steps are repeated as many times as necessary, until the longitudinal, lateral, and heading offsets between the real-time depth map and the reference depth map are substantially zero. Fig. 23B shows the continuous and repeated application of longitudinal, lateral, and heading offsets to a point cloud generated from vehicle sensor data until that point cloud is substantially aligned with a point cloud generated from the reference depth map.
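The update loop might be organized as in the sketch below; every function passed in (the reference rendering, the three offset estimators, and the correction step) is a hypothetical stand-in for the corresponding step described above, and the tolerance and iteration limit are arbitrary.

```python
def localize(position, live_map, render_reference, estimate_longitudinal,
             estimate_lateral, estimate_heading, apply_correction,
             tol=1e-3, max_iter=10):
    """Apply longitudinal, lateral, and heading corrections as a series
    of incremental, independent linear update steps, recomputing the
    reference depth map after each correction, until all three offsets
    are substantially zero."""
    for _ in range(max_iter):
        offsets = []
        for axis, estimate in (("longitudinal", estimate_longitudinal),
                               ("lateral", estimate_lateral),
                               ("heading", estimate_heading)):
            reference = render_reference(position)  # recompute each time
            offset = estimate(reference, live_map)
            position = apply_correction(position, axis, offset)
            offsets.append(abs(offset))
        if max(offsets) < tol:  # substantially zero offsets
            break
    return position
```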
A series of exemplary use cases of positioning reference data are also depicted.
For example, in some embodiments, rather than using the depth map of the positioning reference data for comparison with a depth map based on real-time sensor data, the depth map of the positioning reference data is used to generate a reference point cloud, comprising a set of data points in a three-dimensional coordinate system, each point representing a surface of an object in the environment. This reference point cloud may be compared to a corresponding three-dimensional point cloud based on real-time sensor data obtained by the vehicle sensors. The comparison may be used to determine an alignment offset between the point clouds, and thus adjust the determined position of the vehicle.
A reference depth map may thus be used to obtain a reference 3D point cloud that can be compared to a corresponding point cloud based on the real-time sensor data of the vehicle, whatever types of sensor that vehicle has. While the reference data may be based on sensor data obtained from various types of sensors, including laser scanners, radar scanners and cameras, the vehicle may not have a corresponding set of sensors. The 3D reference point cloud may be constructed from the reference depth map so as to be comparable to a 3D point cloud obtained from real-time data of whichever sensor types are available on the vehicle.
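Reconstructing a reference point cloud from a side depth map might be sketched as follows, assuming for simplicity a linear depth decoding and the illustrative pixel sizes mentioned earlier (about 50 cm longitudinally, 20 cm vertically, 10 cm depth quantization); the function name and coordinate conventions are assumptions.

```python
import numpy as np

def depth_map_to_points(raster, pixel_w_m=0.5, pixel_h_m=0.2,
                        depth_step_m=0.1):
    """Convert each pixel of a side depth map back into a 3-D point:
    x runs along the reference line, z is the height above it, and y is
    the lateral depth decoded from the pixel value (linear decoding is
    assumed here for simplicity)."""
    rows, cols = raster.shape
    r, c = np.mgrid[0:rows, 0:cols]
    x = c.ravel() * pixel_w_m                        # longitudinal position
    z = r.ravel() * pixel_h_m                        # height above reference line
    y = raster.ravel().astype(float) * depth_step_m  # lateral depth
    return np.column_stack([x, y, z])                # (N, 3) reference point cloud
```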
For example, where the depth map of the reference positioning data includes channels indicative of radar reflectivity, this may be considered when generating a reference point cloud, which may be compared to a 3D point cloud obtained using real-time sensor data of a vehicle having only radar sensors. The radar reflectivity data associated with the pixels helps to identify those data points that should be included in the 3D reference point cloud, i.e., the data points represent the surface of the object that will be desired to be detected by the vehicle radar sensor.
In another example, the vehicle may have only one or more cameras for providing real-time sensor data. In this case, the data from the laser reflectivity channel of the reference depth map may be used to construct a 3D reference point cloud that includes only data points relating to surfaces that can be expected to be detected by the vehicle's camera under the current conditions. For example, in darkness, only relatively reflective objects should be included.
A 3D point cloud based on the real-time sensed data of the vehicle may be obtained as needed. Where the vehicle has only a single camera as a sensor, a "structure from motion" technique may be used, in which a series of images from the camera is used to reconstruct a 3D scene from which a 3D point cloud may be obtained. Where the vehicle includes a stereo camera, the 3D scene may be generated directly and used to provide the three-dimensional point cloud. This may be achieved using a disparity-based 3D model.
In yet other embodiments, rather than comparing the reference point cloud to the real-time sensor data point cloud, the reference point cloud is used to reconstruct an image that is expected to be seen by one or more cameras of the vehicle. The images may then be compared and used to determine an alignment offset between the images, which in turn may be used to correct the perceived location of the vehicle.
In these embodiments, the additional channels of the reference depth map may be used, as described above, to reconstruct the image by including only those points of the three-dimensional reference point cloud that are expected to be detected by the vehicle's camera. For example, in darkness, the laser reflectivity channel may be used to select for inclusion in the three-dimensional point cloud those points that correspond to the surfaces of objects that can be detected in darkness by a camera. It has been found that the use of a non-orthogonal projection onto the reference plane when determining the reference depth map is particularly useful in this context, since it preserves more information about the surfaces of objects that remain detectable in the dark.
FIG. 24 depicts an exemplary system in which data collected by one or more vehicle sensors (lasers, cameras, and radar) is used to generate an "actual footprint" of the environment as seen by the vehicle, according to an embodiment of the invention. The "actual footprint" is compared (i.e., correlated) with a corresponding "reference footprint" determined from reference data associated with the digital map, wherein the reference data includes at least one distance channel, and may include laser reflectivity channels and/or radar reflectivity channels, as discussed above. By this correlation, the position of the vehicle can be accurately determined with respect to the digital map.
In a first example use case, as depicted in fig. 25A, an actual footprint is determined from a laser-based distance sensor (e.g., a LIDAR sensor) in the vehicle and correlated to a reference footprint determined from data in a distance channel of reference data in order to achieve a sustained positioning of the vehicle. Fig. 25B shows a first method in which a laser point cloud as determined by a laser-based distance sensor is converted into a depth map of the same format as reference data, and the two depth map images are compared. A second alternative method is shown in fig. 25C, in which a laser point cloud is reconstructed from reference data, and this reconstructed point cloud is compared to the laser point cloud as seen by the vehicle.
In a second example use case, as depicted in fig. 26A, the actual footprint is determined from the camera in the vehicle and correlated with the reference footprint determined from the data in the range channel of the reference data in order to achieve a continuous positioning of the vehicle, albeit only during the day. In other words, in this example use case, a reference depth map is used to construct a 3D point cloud or view, which is then compared to 3D scenes or views obtained from multiple vehicle cameras or a single vehicle camera. A first method is shown in fig. 26B, in which a disparity-based 3D model is built using a stereo vehicle camera, which is then used to build a 3D point cloud for correlation with the 3D point cloud built from the reference depth map. A second method is shown in fig. 26C, in which a 3D scene is constructed using a sequence of vehicle camera images, then a 3D point cloud is constructed using the 3D scene to correlate with the 3D point cloud constructed from the reference depth map. Finally, a third method is shown in fig. 26D, in which the vehicle camera image is compared to a view created from a 3D point cloud constructed from a reference depth map.
A third example use case, depicted in fig. 27A, is a modification of the second example use case, in which the laser reflectivity data located in a channel of the reference depth map is used to construct the 3D point cloud or view, which may be compared to a 3D point cloud or view based on images captured by one or more cameras. A first method is shown in fig. 27B, where a 3D scene is constructed using a sequence of vehicle camera images, and a 3D point cloud is then constructed from the 3D scene for correlation with the 3D point cloud constructed from the reference depth map (using both the distance and laser reflectivity channels). A second method is shown in fig. 27C, in which the vehicle camera image is compared to a view created from the 3D point cloud constructed from the reference depth map (again using both the distance and laser reflectivity channels).
In a fourth example use case, as depicted in fig. 28A, an actual footprint is determined from radar-based distance sensors in the vehicle and correlated with a reference footprint determined from the distance of the reference data and the data in the radar reflectivity channels in order to achieve sparse positioning of the vehicle. A first method is shown in fig. 28B, where reference data is used to reconstruct a 3D scene and data in the radar reflectivity channels is used to leave only radar reflection points. This 3D scene is then correlated with a radar point cloud as seen by the vehicle.
Of course, it should be understood that the various use cases may be used together, i.e., fused, to allow for a more accurate positioning of the vehicle relative to the digital map.
A method of correlating vehicle sensor data with reference data in order to determine the position of a vehicle, for example as discussed above, will now be described with reference to figs. 29-32B. Fig. 29 depicts the various coordinate systems used in the method: a local coordinate system (Local CS), a car frame coordinate system (CF CS), and a linear reference coordinate system (LRCS) along the vehicle trajectory. Another coordinate system, although not depicted, is the World Geodetic System (WGS), in which a location is given as a latitude, longitude coordinate pair, as is known in the art. A general method is shown in fig. 30, with the details of the steps performed to determine the laser point cloud shown in fig. 31. Fig. 32A shows a first exemplary method of performing the correlation step of fig. 30, wherein the position of the vehicle is corrected by image correlation between, for example, a depth map raster image of the reference data and a corresponding depth map raster image created from vehicle sensor data. Fig. 32B shows a second exemplary method of performing the correlation step of fig. 30, wherein the position of the vehicle is corrected by a 3D correlation between a 3D scene constructed from the reference data and a 3D scene captured by the vehicle sensors.
Any method according to the invention may be implemented at least in part using software (e.g., a computer program). Thus, the invention also extends to a computer program comprising computer readable instructions executable to perform or cause a navigation device to perform a method according to any aspect or embodiment of the invention. Accordingly, the disclosure contemplates a computer program product that, when executed by one or more processors, causes the one or more processors to generate suitable images (or other graphical information) for display on a display screen. The invention correspondingly extends to a computer software carrier comprising such software which, when used to operate a system or apparatus comprising data processing means, together with said data processing means, causes said apparatus or system to perform the steps of the method of the invention. Such a computer software carrier may be a non-transitory physical storage medium such as a ROM chip, CD ROM or diskette, or may be a signal such as an electronic signal via wires, an optical signal or a radio signal (e.g., to a satellite) or the like. The present invention provides a machine-readable medium containing instructions which, when read by a machine, cause the machine to operate in accordance with the methods of any aspect or embodiment of the present invention.
Where not explicitly stated, it is to be understood that the invention may include in any aspect thereof any or all of the features described in relation to other aspects or embodiments of the invention, provided that they are not mutually exclusive. In particular, while various embodiments of operations have been described which may be performed in the method and by the apparatus, it should be understood that any or more or all of these operations may be performed in the method and by the apparatus in any combination, as desired.
The following are some examples of the present disclosure.
According to one example of the present disclosure, there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising:
For at least one navigable element represented by the digital map, obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object of the environment surrounding the at least one navigable element of the navigable network;
generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane being defined by a reference line associated with the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment along a predetermined direction, wherein the distance to the surface of the object represented by the depth channel of each pixel is determined based on a set of multiple sensed data points, each sensed data point indicating a sensed distance from the location of the pixel to a surface of an object along the predetermined direction, and wherein the distance to the surface of the object represented by the depth channel of the pixel is based on the closest distance, or on the closest mode of a distribution, of the set of sensed data points, and
The generated positioning reference data is associated with the digital map data.
At least some of the sensed data points of the set of multiple sensed data points of a particular pixel may be related to surfaces of different objects.
The different objects may be positioned at different depths relative to the reference plane.
The distance to the surface of the object represented by the depth channel of a particular pixel may not be based on an average of the set of multiple sensed data points of the particular pixel.
According to another example of the present disclosure, there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising:
For at least one navigable element represented by the digital map, obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object of the environment surrounding the at least one navigable element of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane defined by a longitudinal reference line oriented parallel to the navigable element and perpendicular to a surface of the navigable element, each pixel in the at least one depth map being associated with a position in the reference plane associated with the navigable element and the pixel including a depth channel representing a lateral distance in a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment, wherein the at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution, and
The generated positioning reference data is associated with the digital map data.
The variable vertical and/or depth resolution may be non-linear.
Portions of the depth map closer to the ground may be displayed at a higher resolution than portions of the depth map further above the ground.
Portions of the depth map that are closer to the navigable element may be shown at a higher resolution than portions of the depth map that are farther from the navigable element.
According to another example of the present disclosure, there is provided a method of generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the method comprising:
For at least one navigable element represented by the digital map, obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object of the environment surrounding the at least one navigable element of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane defined by a reference line parallel to the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, wherein the predetermined direction is not perpendicular to the reference plane, and
The generated positioning reference data is associated with the digital map data indicative of the navigable element.
The projection of the environment onto the reference plane may be a non-orthogonal projection.
The predetermined direction may be along a direction that is substantially 45 degrees with respect to the reference plane.
The navigable elements may include roads and the navigable network includes a road network.
The positioning reference data may be generated for a plurality of navigable elements of the navigable network represented by the digital map.
The reference line associated with the navigable element may be defined by a point or points associated with the navigable element.
The reference line associated with the navigable element may be an edge, boundary, lane, or centerline of the navigable element.
The positioning reference data may provide a representation of the environment on one or more sides of the navigable element.
The depth map may take the form of a raster image.
Each pixel of the depth map may be associated with a particular longitudinal position and elevation in the depth map.
Associating the generated positioning reference data with the digital map data may include storing the positioning reference data in association with the navigable element to which it relates.
The positioning reference data may include representations of the environment on a left side of the navigable element and a right side of the navigable element.
The positioning reference data for each side of the navigable element may be stored in a combined dataset.
According to another example of the present disclosure, there is provided a method of generating positioning reference data associated with a digital map representing elements of a navigable network, the positioning reference data providing a compressed representation of the environment surrounding at least one junction of the navigable network represented by the digital map, the method comprising:
Obtaining, for at least one junction represented by the digital map, a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment surrounding the at least one junction of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the junction projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, and
The generated positioning reference data is associated with digital map data indicative of the junction.
The depth map may extend about 360 degrees to provide a 360 degree representation of the environment around the junction.
The depth map may extend less than about 360 degrees.
The reference point may be located at the centre of the junction.
The reference point may be associated with a node of the digital map representing the junction or a navigable element at the junction.
The junction may be an intersection.
The set of data points may be obtained using at least one rangefinder sensor on a mobile mapping vehicle that has previously traveled along the at least one navigable element.
The at least one rangefinder sensor may include one or more of a laser scanner, a radar scanner, and a pair of stereo cameras.
According to another example of the present disclosure, there is provided a method of determining a position of a vehicle relative to a digital map, the digital map including data representative of a junction through which the vehicle travels, the method comprising:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle in the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction point, each pixel in the at least one depth map being associated with a position in the reference plane associated with the junction point through which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of the environment around the vehicle, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment, determined using the at least one sensor, along the predetermined direction;
Calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.
According to another example of the present disclosure, there is provided a computer program product comprising computer readable instructions executable to cause a system to perform a method as described above, optionally stored on a non-transitory computer readable medium.
According to another example of the present disclosure, there is provided a system for generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the system comprising processing circuitry configured to, for the at least one navigable element represented by the digital map:
Obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment surrounding the at least one navigable element of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane being defined by a reference line associated with the navigable element, each pixel of the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment in a predetermined direction, wherein the distance to the surface of the object represented by the depth channel of each pixel is determined based on a set of multiple sensed data points, each sensed data point indicating a sensed distance from the location of the pixel to a surface of an object in the predetermined direction, and wherein the distance to the surface of the object represented by the depth channel of the pixel is based on the closest distance, or on the closest mode of a distribution, of the set of sensed data points, and
The generated positioning reference data is associated with the digital map data.
According to another example of the present disclosure, there is provided a system for generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the system comprising processing circuitry configured to, for the at least one navigable element represented by the digital map:
Obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment surrounding the at least one navigable element of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane defined by a longitudinal reference line oriented parallel to the navigable element and perpendicular to a surface of the navigable element, each pixel of the at least one depth map being associated with a position in the reference plane associated with the navigable element and the pixel including a depth channel representing a lateral distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment, wherein the at least one depth map has a fixed longitudinal resolution and a variable vertical and/or depth resolution, and
The generated positioning reference data is associated with the digital map data.
According to another example of the present disclosure, there is provided a system for generating positioning reference data associated with a digital map, the positioning reference data providing a compressed representation of an environment surrounding at least one navigable element of a navigable network represented by the digital map, the system comprising processing circuitry configured to, for the at least one navigable element represented by the digital map:
Obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment surrounding the at least one navigable element of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the navigable element projected onto a reference plane, the reference plane defined by a reference line parallel to the navigable element, each pixel in the at least one depth map being associated with a location in the reference plane associated with the navigable element, and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, wherein the predetermined direction is not perpendicular to the reference plane, and
The generated positioning reference data is associated with digital map data indicative of the navigable elements.
According to another example of the present disclosure, there is provided a system for generating positioning reference data associated with a digital map representing elements of a navigable network, the positioning reference data providing a compressed representation of an environment surrounding at least one junction of the navigable network represented by the digital map, the system comprising processing circuitry configured to, for at least one junction represented by the digital map:
Obtaining a set of data points in a three-dimensional coordinate system, wherein each data point represents a surface of an object in the environment around the at least one junction of the navigable network;
Generating positioning reference data from the set of data points, the positioning reference data comprising at least one depth map indicative of the environment around the junction projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction and the pixel including a depth channel representing a distance along a predetermined direction from the associated location of the pixel in the reference plane to a surface of an object in the environment, and
The generated positioning reference data is associated with digital map data indicative of the junction.
According to yet another example of the present disclosure, there is provided a system for determining a position of a vehicle relative to a digital map, the digital map including data representative of a junction through which the vehicle travels, the system comprising processing circuitry configured to:
Obtaining positioning reference data associated with the digital map for a considered current position of the vehicle in the navigable network, wherein the positioning reference data comprises at least one depth map indicative of an environment surrounding the vehicle projected onto a reference plane defined by a reference line defined by a radius centered on a reference point associated with the junction point, each pixel in the at least one depth map being associated with a position in the reference plane associated with the junction point through which the vehicle travels, and the pixel including a depth channel representing a distance along a predetermined direction from the associated position of the pixel in the reference plane to a surface of an object in the environment;
Determining real-time scan data by scanning the environment around the vehicle using at least one sensor, wherein the real-time scan data comprises at least one depth map indicative of the environment around the vehicle, each pixel in the at least one depth map being associated with a location in the reference plane associated with the junction, and the pixel including a depth channel representing a distance from the associated location of the pixel in the reference plane to a surface of an object in the environment, determined using the at least one sensor, along the predetermined direction;
Calculating a correlation between the positioning reference data and the real-time scan data to determine an alignment offset between the depth maps, and
The determined alignment offset is used to adjust the considered current position to determine the position of the vehicle relative to the digital map.