Disclosure of Invention
In order to solve the above problems, the invention provides an object positioning method for use while a vehicle is driving. The method combines existing technology for rapidly processing two-dimensional image information with lidar technology for acquiring object position information, compensating for the inability of a two-dimensional image to convey distance information while avoiding the large point-cloud volumes and long processing times of conventional lidar acquisition, thereby greatly improving the speed of object positioning while the vehicle is driving.
Based on this, the invention provides a method for locating an object while a vehicle is traveling, the method comprising:
providing a system that comprises a camera module, an image data processing module, a data coordinate conversion module, and a lidar scanning control module;
the camera module acquires a two-dimensional image of the road environment while the vehicle is traveling;
the image data processing module identifies an object from the two-dimensional image while the vehicle is driving;
the image data processing module extracts a trunk portion of the object according to the object's image features, and calibrates the trunk portion to obtain its two-dimensional image coordinates on the two-dimensional image;
the data coordinate conversion module converts the two-dimensional image coordinates into the position parameters required for lidar scanning by the lidar scanning control module;
and the lidar scanning control module controls the lidar to scan the object according to the position parameters, thereby obtaining distance information.
Wherein the image features of the object include the size, shape, brightness, and color of the image. Wherein extracting the trunk portion of the object comprises:
extracting geometric features of the object, where the geometric features comprise triangles, quadrilaterals, pentagons, or polygons combined from them, and the vertices of the geometric features form positioning points.
Wherein extracting the trunk portion of the object further comprises: using the brightness information of the image to extract, as positioning points, those points at which the object's reflection of the lidar laser source exceeds a preset level.
Wherein the lidar scanning control module controlling the lidar to scan the object comprises:
the lidar scanning control module calculating a scanning path, using a path planning algorithm, from the position parameters of the positioning points in each frame of two-dimensional image data.
Wherein the obtained distance information comprises distance information obtained while the vehicle is stationary and distance information obtained while the vehicle is moving.
Wherein the distance information obtained while the vehicle is stationary and while it is moving is calculated in the same way, namely:
acquiring the time difference between emitting the laser and receiving the reflected laser, and taking one half of the product of the speed of light and this time difference as the distance to the object.
Wherein the area scanned by the lidar coincides with the field of view captured by the camera.
Wherein the coordinate parameter conversion includes conversion performed while the vehicle is stationary or in motion.
Wherein the camera module, the image data processing module, the data coordinate conversion module, and the lidar scanning control module may operate simultaneously.
The invention exploits existing technology for rapidly processing two-dimensional image information together with the lidar's ability to acquire object position information. By combining the camera with the lidar, it overcomes both the inability of a two-dimensional image to convey distance information and the large point-cloud volumes and long processing times of lidar acquisition. With the method and apparatus, the distances to nearby objects can be obtained quickly during automated driving, greatly improving the response speed of the automated driving system.
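For illustration only, the overall flow might be sketched as follows in Python. Every name here (capture_frame, detect_objects, pixel_to_scan_params, scan, and the stub data) is a hypothetical stand-in for the modules described above, not part of the disclosure:

```python
# Hypothetical sketch of the four-module pipeline; all functions are stubs.
from typing import List, Tuple

Pixel = Tuple[int, int]          # (u, v) image coordinates
ScanParam = Tuple[float, float]  # (azimuth, elevation) in radians

def capture_frame():
    return None                                    # camera module stub

def detect_objects(frame) -> List[List[Pixel]]:
    # Image data processing stub: per object, the pixel vertices of its trunk portion.
    return [[(320, 200), (300, 260), (340, 260)]]  # e.g. one triangular trunk

def pixel_to_scan_params(p: Pixel) -> ScanParam:
    return (0.0, 0.0)                              # data coordinate conversion stub

def scan(params: List[ScanParam]) -> List[float]:
    return [12.5] * len(params)                    # lidar control stub: distances in metres

def locate_objects() -> List[List[float]]:
    frame = capture_frame()                        # camera module: 2-D road image
    results = []
    for vertices in detect_objects(frame):         # identify object, extract positioning points
        params = [pixel_to_scan_params(p) for p in vertices]  # convert to scan parameters
        results.append(scan(params))               # scan only those points for distances
    return results

print(locate_objects())
```

The point of the sketch is the data flow: only a handful of positioning points per object reach the lidar, rather than a full point cloud.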
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an object locating method used while a vehicle is driving, provided by an embodiment of the invention. The method involves:
the system comprises a camera module 101, an image data processing module 102, a data coordinate conversion module 103, and a lidar scanning control module 104;
the camera module 101 acquires a two-dimensional image of the road environment while the vehicle is traveling;
the image data processing module 102 identifies an object from the two-dimensional image while the vehicle is driving;
the image data processing module 102 extracts a trunk portion of the object according to the object's image features, and calibrates the trunk portion to obtain its two-dimensional image coordinates on the two-dimensional image;
the data coordinate conversion module 103 converts the two-dimensional image coordinates into the position parameters required for lidar scanning by the lidar scanning control module 104;
and the lidar scanning control module 104 controls the lidar to scan the object according to the position parameters, thereby obtaining distance information.
The camera module 101 includes a camera that captures an image of the road environment from the vehicle's viewpoint; this image is two-dimensional and therefore cannot convey the distances between objects.
The image features of the object include: color features, texture features, shape features, and spatial relationships.
A color feature is a global feature that describes the surface properties of a scene to which an image or image region corresponds.
A texture feature is likewise a global feature describing the surface properties of the scene to which an image or image region corresponds. However, because texture characterizes only an object's surface and does not fully reflect its essential attributes, high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not defined at individual pixels; they require statistical computation over regions containing multiple pixels.
Shape features can be represented in two ways: contour features and region features. Contour features describe the outer boundary of an object, while region features relate to the entire shape region.
Spatial relationships refer to the spatial positions or relative orientations of the objects segmented from an image; these relationships can be classified as connection or adjacency, overlap, containment, and so on. In general, spatial position information falls into two categories: relative and absolute. The former emphasizes the arrangement of objects with respect to one another, such as above, below, left, and right; the latter emphasizes the distances and bearings between objects.
In this method, the image features of the object mainly include the size, shape, brightness, and color of the image.
Wherein extracting the trunk portion of the object comprises:
extracting geometric features of the object, where the geometric features comprise triangles, quadrilaterals, pentagons, or polygons combined from them, and the vertices of the geometric features form positioning points.
Extracting the trunk portion of the object further comprises: using the brightness information of the image to extract, as positioning points, those points at which the object's reflection of the lidar laser source exceeds a preset level.
Wherein the lidar scanning control module 104 controlling the lidar to scan the object comprises:
the lidar scanning control module 104 calculating a scanning path, using a path planning algorithm, from the position parameters of the positioning points in each frame of two-dimensional image data.
The obtained distance information comprises distance information obtained while the vehicle is stationary and distance information obtained while the vehicle is moving.
These are calculated in the same way, namely:
acquiring the time difference between emitting the laser and receiving the reflected laser, and taking one half of the product of the speed of light and this time difference as the distance to the object.
The area scanned by the lidar coincides with the field of view captured by the camera.
The coordinate parameter conversion includes conversion performed while the vehicle is stationary or in motion. The camera module 101, the image data processing module 102, the data coordinate conversion module 103, and the lidar scanning control module 104 may operate simultaneously.
Fig. 2 is a flowchart of an object locating method during driving of a vehicle according to an embodiment of the present invention, where the method includes:
S201, the camera module acquires a two-dimensional image of the road environment while the vehicle is traveling;
the camera module comprises a camera and is used for shooting a road environment image at a vehicle visual angle, wherein the road environment image is a two-dimensional image which has the defect that distance information between objects cannot be reflected.
The image features of the object include: color features, texture features, shape features, and spatial relationships.
A color feature is a global feature that describes the surface properties of a scene to which an image or image region corresponds.
A texture feature is likewise a global feature describing the surface properties of the scene to which an image or image region corresponds. However, because texture characterizes only an object's surface and does not fully reflect its essential attributes, high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not defined at individual pixels; they require statistical computation over regions containing multiple pixels.
Shape features can be represented in two ways: contour features and region features. Contour features describe the outer boundary of an object, while region features relate to the entire shape region.
Spatial relationships refer to the spatial positions or relative orientations of the objects segmented from an image; these relationships can be classified as connection or adjacency, overlap, containment, and so on. In general, spatial position information falls into two categories: relative and absolute. The former emphasizes the arrangement of objects with respect to one another, such as above, below, left, and right; the latter emphasizes the distances and bearings between objects.
In this method, the image features of the object mainly include the size, shape, brightness, and color of the image.
S202, the image data processing module identifies an object from the two-dimensional image while the vehicle is driving;
Objects present in the two-dimensional image, such as trees, pedestrians, and other vehicles, can be identified by the image data processing module.
S203, the image data processing module extracts a trunk portion of the object according to the object's image features, and calibrates the trunk portion to obtain its two-dimensional image coordinates on the two-dimensional image;
that is, the image data processing module extracts the trunk portion of the identified object according to the object's image features.
Extracting the trunk portion of the object comprises:
extracting geometric features of the object, where the geometric features comprise triangles, quadrilaterals, pentagons, or polygons combined from them, and the vertices of the geometric features form positioning points.
Extracting the trunk portion of the object further comprises: using the brightness information of the image to extract, as positioning points, those points at which the object's reflection of the lidar laser source exceeds a preset level.
For example, if the object is a tree, it may be regarded as a triangle and a quadrilateral joined top to bottom; the three vertices of the triangle and the four vertices of the quadrilateral serve as positioning points. The quadrilateral may also be treated as a straight line and represented by several points, as shown in Fig. 3. One plausible way to obtain such vertices is sketched below.
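The patent does not prescribe an implementation, but as a loose illustration, OpenCV's contour approximation can reduce an object mask to polygon vertices. The tree-shaped mask below is synthetic test data, and the 1% perimeter tolerance is an arbitrary choice:

```python
# Sketch only: reducing an object's trunk portion to polygon-vertex
# positioning points with OpenCV. The mask mimics Fig. 3's tree shape.
import cv2
import numpy as np

mask = np.zeros((240, 320), dtype=np.uint8)
crown = np.array([[160, 20], [110, 120], [210, 120]], dtype=np.int32)
cv2.fillPoly(mask, [crown], 255)                      # triangular crown
cv2.rectangle(mask, (150, 120), (170, 220), 255, -1)  # quadrilateral trunk

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    eps = 0.01 * cv2.arcLength(c, True)    # tolerance: ~1% of the perimeter
    poly = cv2.approxPolyDP(c, eps, True)  # polygon approximation of the outline
    print(poly.reshape(-1, 2))             # (u, v) positioning points to calibrate
```

Because the crown and trunk touch, they form one connected outline, and the approximation returns the combined polygon's vertices, a few points per object instead of a dense point cloud.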
Calibrating the positioning points means marking them and acquiring their two-dimensional image coordinates in the two-dimensional image.
S204, the data coordinate conversion module converts the two-dimensional image coordinates into the position parameters required for lidar scanning by the lidar scanning control module;
that is, the data coordinate conversion module converts the planar two-dimensional coordinates into three-dimensional spatial coordinates, and the lidar in the lidar scanning control module scans the object according to those spatial coordinates.
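The patent leaves the conversion mathematics open. One common approach, sketched here under a pinhole-camera assumption with made-up intrinsics (fx, fy, cx, cy) and with the camera and lidar assumed co-located and aligned, is to back-project each pixel into a viewing ray and express that ray as lidar steering angles:

```python
# Sketch under a pinhole-camera assumption; fx, fy, cx, cy are illustrative
# intrinsics, and the camera and lidar are assumed to share origin and axes.
import math

fx, fy = 800.0, 800.0   # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0   # principal point (assumed)

def pixel_to_scan_params(u: float, v: float):
    x = (u - cx) / fx                             # rightward ray component
    y = (v - cy) / fy                             # downward ray component
    z = 1.0                                       # forward ray component
    azimuth = math.atan2(x, z)                    # horizontal steering angle
    elevation = math.atan2(-y, math.hypot(x, z))  # vertical steering angle
    return azimuth, elevation

print(pixel_to_scan_params(400.0, 200.0))  # a positioning point at pixel (400, 200)
```

A real system would also apply the camera-to-lidar extrinsic transform; it is omitted here only to keep the sketch short.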
S205, the lidar scanning control module controls the lidar to scan the object according to the position parameters, obtaining distance information.
The lidar scanning control module calculates a scanning path, using a path planning algorithm, from the position parameters of the positioning points in each frame of two-dimensional image data.
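The patent names only "a path planning algorithm" without specifying one. A greedy nearest-neighbor ordering, which moves the lidar's aim as little as possible between consecutive targets, is one simple stand-in:

```python
# Greedy nearest-neighbour ordering of (azimuth, elevation) scan targets:
# one plausible stand-in for the unspecified path planning algorithm.
import math

def plan_scan_path(points, start=(0.0, 0.0)):
    remaining = list(points)
    path, current = [], start
    while remaining:
        nxt = min(remaining,
                  key=lambda p: math.hypot(p[0] - current[0], p[1] - current[1]))
        remaining.remove(nxt)   # visit the angularly closest target next
        path.append(nxt)
        current = nxt
    return path

targets = [(0.20, 0.05), (-0.10, 0.02), (0.21, 0.06), (-0.12, 0.00)]
print(plan_scan_path(targets))  # nearby angles end up consecutive
```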
Wherein the obtained distance information comprises distance information obtained while the vehicle is stationary and distance information obtained while the vehicle is moving.
These are calculated in the same way, namely:
acquiring the time difference between emitting the laser and receiving the reflected laser, and taking one half of the product of the speed of light and this time difference as the distance to the object.
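In symbols, with c the speed of light and Δt the measured round-trip time difference (the 1 µs value below is purely illustrative):

```latex
% Time-of-flight distance for a stationary vehicle.
d = \frac{c\,\Delta t}{2}
% Example: \Delta t = 1\,\mu\text{s} gives
% d = \tfrac{1}{2} \times 3\times 10^{8}\,\text{m/s} \times 10^{-6}\,\text{s} = 150\,\text{m}.
```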
Fig. 4 illustrates why the distance information obtained while the vehicle is stationary and while it is moving can be calculated in the same way. Referring to Fig. 4:
parts (1) and (2) of Fig. 4 show the vehicle performing a lidar scan while stationary: the vehicle emits laser light and receives the light reflected by the object, and the distance between the object and the vehicle follows directly:
acquire the time difference between emitting the laser and receiving the reflected laser; one half of the product of the speed of light and this time difference is the distance between the object and the vehicle.
Parts (3) and (4) of Fig. 4 show lidar scanning performed while the vehicle is in motion: the vehicle emits laser light at one position and receives the reflected light at another position, and the distance between the object and the vehicle again follows:
acquire the time difference between emitting the laser and receiving the reflected laser, and the difference between the speed of light and the vehicle speed; one half of the product of this speed difference and the time difference is the distance between the object and the vehicle. Because the vehicle speed is negligible compared with the speed of light, it can be ignored, so the distance between the object and the vehicle is again one half of the product of the speed of light and the time difference.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and substitutions can be made without departing from the technical principle of the present invention, and these modifications and substitutions should also be regarded as the protection scope of the present invention.