Disclosure of Invention
The invention mainly solves the technical problem of providing a road edge detection method, a road edge detection device and a vehicle, which can automatically detect the road edge line segments of the current road on which the vehicle runs, reduce operating complexity for the operator and achieve high detection precision.
In order to solve the technical problems, the invention adopts a technical scheme that: provided is a road edge detection method including: acquiring an image frame containing road edge information of a current road on which a vehicle runs; carrying out edge detection on the image frame to obtain a plurality of edge points; extracting a plurality of straight line segments by using a plurality of edge points; and extracting the road edge line segment from the plurality of straight line segments according to the road edge structure characteristics of the current road.
Wherein the step of performing edge detection on the image frame further comprises: acquiring a local image in a predetermined area around a preset calibration point from the image frame; and performing edge detection within the local image.
Wherein the step of performing edge detection within the local image further comprises: calculating the gray average of pixel points in the local image; and setting a low threshold parameter and a high threshold parameter of the Canny edge detection algorithm according to the gray average of the pixel points in the local image, and performing edge detection in the local image by using the Canny edge detection algorithm.
The step of extracting the road edge line segment from the plurality of straight line segments according to the road edge structure characteristic of the current road comprises the following steps: and extracting the road edge line segments from the plurality of straight line segments according to the comparison result of the actual distance between the road edges of the current road and the pixel distance between the straight line segments and/or according to the comparison result of the actual color difference of the two sides of the road edges of the current road and the pixel color difference of the two sides of the straight line segments.
Wherein, extracting the road edge line segment from the plurality of straight line segments according to the comparison result of the actual distance between the road edges of the current road and the pixel distance between the straight line segments and/or according to the comparison result of the actual color difference between the two sides of the road edge of the current road and the pixel color difference between the two sides of the straight line segments further comprises: and deleting the straight line segments with the slopes not meeting the preset slope requirement from the plurality of straight line segments.
Wherein the step of extracting the curb line segments from the plurality of straight line segments according to a comparison result of an actual distance between road edges of the current road and a pixel distance between the straight line segments comprises: converting an actual distance between road edges of a current road acquired under a space coordinate system and a pixel distance between straight line segments acquired under an image coordinate system into the same coordinate system by using a calibration coefficient, wherein the calibration coefficient is obtained by calculating an actual coordinate of a preset calibration point under the space coordinate system and an image coordinate of the calibration point under the image coordinate system; and performing difference operation on the actual distance between the road edges of the current road and the pixel distance between the straight line segments in the same coordinate system, and selecting the straight line segments with the difference smaller than the redundancy error from the actual distance and the pixel distance.
The step of extracting the road edge line segment from the plurality of straight line segments according to the comparison result of the actual color difference of the two sides of the road edge of the current road and the pixel color difference of the two sides of the straight line segment comprises the following steps: calculating the gray average value of pixel points between each straight line segment and the adjacent straight line segment or in the preset lateral width range at the two sides of each straight line segment, and determining the pixel color difference at the two sides of each straight line segment according to the gray average value; and extracting the straight line segments of which the pixel color difference on the two sides of the straight line segment is consistent with the actual color difference on the two sides of the road edge of the current road or is within the error allowable range from the plurality of straight line segments.
The step of extracting the road edge line segment from the plurality of straight line segments according to the comparison result of the actual distance between the road edges of the current road and the pixel distance between the straight line segments and according to the comparison result of the actual color difference between the two sides of the road edge of the current road and the pixel color difference between the two sides of the straight line segments comprises the following steps: converting an actual distance between road edges of a current road acquired under a space coordinate system and a pixel distance between straight line segments acquired under an image coordinate system into the same coordinate system by using a calibration coefficient, wherein the calibration coefficient is obtained by calculating an actual coordinate of a preset calibration point under the space coordinate system and an image coordinate of the calibration point under the image coordinate system; performing difference operation on the actual distance between the road edges of the current road and the pixel distance between the straight line segments in the same coordinate system, and selecting a plurality of alternative straight line segments with the difference smaller than the redundancy error; calculating the gray average of pixel points between each alternative straight line segment and the adjacent alternative straight line segment or in the preset lateral width range at the two sides of each alternative straight line segment, and determining the pixel color difference at the two sides of each alternative straight line segment according to the gray average; and extracting the alternative straight line segments, of which the pixel color difference on two sides of the alternative straight line segment is consistent with the actual color difference on two sides of the road edge of the current road or is within the error allowable range, from the plurality of alternative straight line segments.
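Although the claim language above is implementation-agnostic, the color-difference comparison can be sketched in code. Below is a minimal Python/NumPy illustration under assumed values (a vertical candidate segment, synthetic gray levels, a hypothetical prior difference and tolerance), not the claimed implementation:

```python
import numpy as np

def side_color_difference(gray, col, width):
    """Mean-gray difference between bands of `width` pixels on either side
    of a vertical candidate segment located at column `col`."""
    left = gray[:, col - width : col].mean()
    right = gray[:, col + 1 : col + 1 + width].mean()
    return right - left

def matches_prior(measured, expected, tol):
    """Keep the segment if the measured difference agrees with the prior
    road-edge color difference to within the allowed error."""
    return abs(measured - expected) <= tol

# Synthetic frame: dark asphalt (gray 60) left of column 50, bright curb (200) right.
gray = np.full((40, 100), 60.0)
gray[:, 50:] = 200.0
diff = side_color_difference(gray, 49, 10)
print(diff, matches_prior(diff, 140.0, 15.0))   # 140.0 True
```

A segment whose measured side-to-side difference disagrees with the prior (for example, the near-zero difference between two road surface stones) would be rejected by the same check.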
Wherein the method further comprises: and tracking a plurality of straight line segments of the subsequent image frame acquired subsequently by using the obtained curb line segments, and further extracting the curb line segments from the plurality of straight line segments of the subsequent image frame.
Wherein the method further comprises: and calculating the actual distance of the road edge line segment relative to the vehicle in the space coordinate system according to the pixel coordinates of the road edge line segment in the image coordinate system.
In order to solve the technical problem, the invention adopts another technical scheme that: provided is a road edge detection device including: the image frame acquisition module is used for acquiring an image frame containing road edge information of a current road on which a vehicle runs; the edge detection module is used for carrying out edge detection on the image frame so as to obtain a plurality of edge points; the linear segment extraction module is used for extracting a plurality of linear segments by utilizing a plurality of edge points; and the curb line segment extraction module is used for extracting the curb line segments from the plurality of straight line segments according to the curb structure characteristics of the current road.
The edge detection module is further used for acquiring a local image in a preset area around the preset calibration point from the image frame and carrying out edge detection in the local image.
The edge detection module is further used for calculating the gray average of the pixel points in the local image, setting a low threshold parameter and a high threshold parameter of the Canny edge detection algorithm according to the gray average of the pixel points in the local image, and performing edge detection in the local image by using the Canny edge detection algorithm.
The road edge line segment extraction module is further used for extracting the road edge line segments from the plurality of straight line segments according to the comparison result of the actual distance between the road edges of the current road and the pixel distance between the straight line segments and/or according to the comparison result of the actual color difference between the two sides of the road edges of the current road and the pixel color difference between the two sides of the straight line segments.
The straight line segment extraction module is further used for deleting the straight line segments with the slopes not meeting the preset slope requirement from the plurality of straight line segments before the curb segment extraction module extracts the curb segments from the plurality of straight line segments according to the comparison result of the actual distance between the road edges of the current road and the pixel distance between the straight line segments and/or according to the comparison result of the actual color difference between the two sides of the road edges of the current road and the pixel color difference between the two sides of the straight line segments.
The system comprises a road edge line segment extraction module, a calibration coefficient calculation module and a data processing module, wherein the road edge line segment extraction module is further used for converting an actual distance between road edges of a current road acquired under a space coordinate system and a pixel distance between straight line segments acquired under an image coordinate system into the same coordinate system by using the calibration coefficient, performing difference operation on the actual distance between the road edges of the current road and the pixel distance between the straight line segments under the same coordinate system, and selecting the straight line segments with the difference value smaller than a redundancy error from the actual distance and the pixel distance, wherein the calibration coefficient is obtained by calculating an actual coordinate of a preset calibration point under the space coordinate system and an image coordinate of the calibration point under the image coordinate system.
The road edge line segment extraction module is further used for calculating a gray average value of pixel points between each straight line segment and adjacent straight line segments or in a preset lateral width range at two sides of each straight line segment, determining pixel color difference at two sides of each straight line segment according to the gray average value, and further extracting the straight line segments with the pixel color difference at two sides of each straight line segment consistent with the actual color difference at two sides of the road edge of the current road or within an error allowable range from the plurality of straight line segments.
The system comprises a road edge line segment extraction module, a calibration coefficient calculation module, a data processing module and a data processing module, wherein the road edge line segment extraction module is further used for converting an actual distance between road edges of a current road acquired under a space coordinate system and a pixel distance between straight line segments acquired under an image coordinate system into the same coordinate system by using the calibration coefficient, performing difference operation on the actual distance between the road edges of the current road and the pixel distance between the straight line segments under the same coordinate system, and selecting a plurality of alternative straight line segments with difference values smaller than a redundancy error from the alternative straight line segments, wherein the calibration coefficient is obtained by calculating an actual coordinate of a preset calibration point under the space coordinate system and an image coordinate of the calibration point under the image coordinate system; the road edge line segment extraction module is further used for calculating a gray average value of pixel points between each alternative straight line segment and the adjacent alternative straight line segment or in a preset lateral width range on two sides of each alternative straight line segment, determining pixel color difference on two sides of each alternative straight line segment according to the gray average value, and further extracting the alternative straight line segments from the multiple alternative straight line segments, wherein the pixel color difference on two sides of each alternative straight line segment is consistent with the actual color difference on two sides of the road edge of the current road or is in an error allowable range.
The road edge line segment extraction module is further used for tracking a plurality of straight line segments of a subsequent image frame acquired subsequently by using the acquired road edge line segments, and further extracting the road edge line segments from the plurality of straight line segments of the subsequent image frame.
Wherein the apparatus further comprises: and the actual distance calculation module is used for calculating the actual distance of the road edge line segment relative to the vehicle under the space coordinate system according to the pixel coordinates of the road edge line segment under the image coordinate system.
In order to solve the technical problem, the invention adopts another technical scheme that: there is provided a vehicle including the road edge detecting device of the above aspect.
The invention has the beneficial effects that: unlike the prior art, the present invention acquires an image frame containing road edge information of the current road on which a vehicle is traveling; performs edge detection on the image frame to obtain a plurality of edge points; extracts a plurality of straight line segments by using the plurality of edge points; and finally extracts the road edge line segments from the plurality of straight line segments according to the road edge structure characteristics of the current road. The road edge line segments of the current road on which the vehicle runs can thus be detected automatically, operating complexity for the operator is reduced, and the detection precision is high.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without any creative effort belong to the protection scope of the present invention.
Referring to fig. 1, a first embodiment of a road edge detection method according to the present invention includes:
step S101: acquiring an image frame containing road edge information of a current road on which a vehicle runs;
in this step, the image frames including the road edge information of the current road on which the vehicle is traveling may be captured by an image capturing device such as a video camera or a still camera mounted on the vehicle, and the camera may be a digital camera or an analog camera. The camera lens may be an infrared lens so that images can be collected both in the daytime and at night. In other embodiments, the video camera and the still camera may be of other types, which is not limited herein. The vehicle may be in a manned driving state or an autonomous driving state. The image frames contain not only road edge information but may also contain trees planted beside the road, street lamps installed beside the road, buildings beside the road and other road-related information.
Step S102: carrying out edge detection on the image frame to obtain a plurality of edge points;
in this step, edge detection is a common technique in image processing and computer vision; its purpose is to identify points in a digital image where the brightness changes sharply, i.e., edge points. The edge detection algorithm may be the Canny edge detection algorithm or another algorithm known in the art; the specific edge detection process is described in detail below, taking the Canny edge detection algorithm as an example.
Step S103: extracting a plurality of straight line segments by using a plurality of edge points;
in this step, a plurality of straight line segments may be extracted from the plurality of edge points acquired in step S102 using Hough transform or other algorithms known in the art. These straight line segments contain all possible edge information including road edge information.
Step S104: and extracting the road edge line segment from the plurality of straight line segments according to the road edge structure characteristics of the current road.
In this step, the road edge structure includes lane lines (white lane lines, yellow lane lines, etc.), and/or road surface stones, and/or curb stones. The road edge line segments correspondingly comprise the left and right side lines of a lane line, and/or the left and right side lines of a road surface stone, and/or the left and right side lines of a curb stone. The road edge structure characteristics can be obtained by organizing prior data collected in advance for various road designs, and mainly comprise information such as the lane line width, road surface stone width and sidewalk width of different roads and the color change rules of different areas.
It can be understood that the first embodiment of the road edge detection method of the present invention acquires an image frame containing road edge information of the current road on which the vehicle is traveling; performs edge detection on the image frame to obtain a plurality of edge points; extracts a plurality of straight line segments by using the plurality of edge points; and finally extracts the road edge line segments from the plurality of straight line segments according to the road edge structure characteristics of the current road, so that the road edge line segments of the current road on which the vehicle runs can be detected automatically, operating complexity for the operator is reduced, and the detection precision is high.
Referring to fig. 2-5, a second embodiment of the road edge detection method of the present invention includes:
step S201: acquiring an image frame containing road edge information of a current road on which a vehicle runs;
in this step, the image frames are captured by an image capturing device such as a video camera or a still camera mounted on the vehicle. Specifically, as shown in fig. 3, the image capturing apparatus of the present embodiment includes a front image capturing apparatus 3 and/or a side image capturing apparatus 4, the front image capturing apparatus 3 is specifically installed in front of a cab 1 of the vehicle to capture image frames including information about a road edge 5 in front of the vehicle, a straight line 31 is a central axis of a viewing angle of the front image capturing apparatus 3, the side image capturing apparatus 4 is specifically installed on a side of a vehicle body 2 of the vehicle to capture image frames including information about a road edge 5 on a side of the vehicle, and a straight line 41 is a central axis of a viewing angle of the side image capturing apparatus 4. The present invention will be described below by taking an image frame captured by the front-end image capturing device 3 as an example, and a specific processing manner of the image frame captured by the side-end image capturing device 4 is similar to that of the image frame captured by the front-end image capturing device 3, and is not described herein again.
Step S202: acquiring a local image in a preset area around a preset calibration point from an image frame;
in this step, a local image in a predetermined area around a preset calibration point P is obtained from an image frame, and the process is specifically as follows:
the image frame is subjected to gray level conversion, the image frame is generally a color image, and the RGB color image is converted into a gray level image through the gray level conversion, so that the processing speed of the image frame is increased.
A local image R_st with a region size of s × t and centered on the calibration point P is acquired. The region R_st is preferably selected so as to cover the lane lines 51, road surface stones 52 and curb stones 53 in fig. 4. The calibration point P can be selected and set according to actual detection requirements: for the front-end image acquisition device 3, if a road edge far from the vehicle needs to be detected, a point far from the vehicle is selected as the front-end calibration point; for the side image capturing device 4, a point on the viewing-angle central axis 41 of the side image capturing device 4 may be selected as the side-end calibration point. Once the front-end and side-end calibration points are set, they do not need to be set again. Of course, in other embodiments, the local image R_st may be selected first and grayscale conversion carried out afterwards, or grayscale conversion may be omitted if the subsequent edge detection and straight line extraction algorithms allow.
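As a rough sketch of the grayscale conversion and local-image cropping described above (Python/NumPy; the frame dimensions and calibration-point coordinates are illustrative, and the luma weights are a common convention rather than anything mandated by this embodiment):

```python
import numpy as np

def to_gray(rgb):
    """Luma-weighted conversion of an H x W x 3 RGB frame to grayscale."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def local_image(gray, p, s, t):
    """Crop the s x t region R_st centred on calibration point p = (row, col)."""
    r, c = p
    return gray[r - s // 2 : r + s // 2, c - t // 2 : c + t // 2]

frame = np.zeros((480, 640, 3))               # stand-in for a captured RGB frame
gray = to_gray(frame)
roi = local_image(gray, (240, 320), 100, 200)  # R_st around calibration point P
print(gray.shape, roi.shape)                   # (480, 640) (100, 200)
```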
In addition, after the calibration point P is set, a calibration coefficient λ is further obtained. The calibration coefficient λ may be calculated from the actual coordinate of the preset calibration point P in the space coordinate system and the image coordinate of the calibration point P in the image coordinate system. Taking the image frame acquired by the front-end image acquisition device 3 as an example, the abscissa of the calibration point P in the space coordinate system is x_actual, the abscissa of the calibration point P in the image coordinate system is x_image, and the abscissas of the intersection point of the viewing-angle central axis 31 of the front image capturing device 3 with the calibration point straight line 6 (the line passing through the calibration point P and perpendicular to the y-axis) are x′_actual in the space coordinate system and x′_image in the image coordinate system; the calibration coefficient is then λ = (x′_actual − x_actual) / (x′_image − x_image). To ensure the accuracy of road edge detection, the calibration coefficient λ should be smaller than 1, that is, each unit of actual distance corresponds to at least 1 pixel of image distance.
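Numerically, the calibration coefficient reduces to a single ratio. The sketch below uses made-up coordinates for P and for the axis intersection point purely for illustration:

```python
# Abscissas of calibration point P and of the intersection point P' of the
# viewing-angle central axis with the calibration point line, in both systems.
# All values are illustrative, not measured data.
x_actual, x_actual_prime = 1.50, 3.50     # metres, space coordinate system
x_image, x_image_prime = 210.0, 530.0     # pixels, image coordinate system

lam = (x_actual_prime - x_actual) / (x_image_prime - x_image)
print(lam)         # 0.00625 metres of actual distance per pixel
assert lam < 1     # each unit of actual distance spans at least one pixel
```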
Step S203: performing edge detection in the local image;
in this step, the step of performing edge detection in the local image includes:
The gray average of the pixel points in the local image is calculated, specifically as shown in the following formula (1):

BL = (1 / (s × t)) × Σ_{(x, y) ∈ R_st} f(x, y)   (1)

wherein BL is the gray average of the pixel points in the local image, and f(x, y) is the gray level of the pixel point at (x, y) in the local image.
A low threshold parameter and a high threshold parameter of the Canny edge detection algorithm are set according to the gray average BL of the pixel points in the local image, and edge detection is performed in the local image by using the Canny edge detection algorithm. The Canny edge detection algorithm is a multi-stage edge detection algorithm developed by John F. Canny in 1986; the specific process of performing edge detection in the local image with the Canny algorithm in this embodiment is as follows:
and smoothing the local image by using a Gaussian filter to remove image noise and improve the accuracy of road edge detection.
The gradient value and direction value of each pixel point in the local image are obtained, as shown in the following formulas (2) and (3):

M(x, y) = sqrt(g_x² + g_y²)   (2)
α(x, y) = arctan(g_y / g_x)   (3)

wherein M(x, y) is the gradient value of the pixel point, α(x, y) is the direction value of the pixel point, and g_x, g_y are the partial derivatives of the pixel point in the x-axis and y-axis directions of the image coordinate system respectively. g_x and g_y can be obtained from the Sobel templates, which use the Sobel operator. The operator comprises two 3 × 3 matrices, one horizontal and one vertical:

K_x = [ −1 0 1 ; −2 0 2 ; −1 0 1 ],  K_y = [ −1 −2 −1 ; 0 0 0 ; 1 2 1 ]

Convolving these two matrices with the image yields the horizontal and vertical brightness-difference approximations, namely g_x and g_y respectively.
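The gradient computation at a single pixel can be sketched as follows (Python/NumPy; the kernels are the standard Sobel templates named in the text, and the test image is synthetic):

```python
import numpy as np

# Horizontal and vertical 3x3 Sobel kernels, as described in the text.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def gradient_at(img, r, c):
    """Gradient magnitude M and direction alpha at pixel (r, c), formulas (2)-(3)."""
    patch = img[r - 1 : r + 2, c - 1 : c + 2]
    gx = float(np.sum(KX * patch))   # horizontal brightness-difference approximation
    gy = float(np.sum(KY * patch))   # vertical brightness-difference approximation
    m = np.hypot(gx, gy)             # M(x, y) = sqrt(gx^2 + gy^2)
    alpha = np.arctan2(gy, gx)       # alpha(x, y) = arctan(gy / gx)
    return m, alpha

# Vertical step edge: left columns dark (0), right columns bright (255).
img = np.zeros((5, 5))
img[:, 3:] = 255.0
m, alpha = gradient_at(img, 2, 2)
print(m, alpha)   # 1020.0 0.0 - strong horizontal gradient, direction along +x
```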
And carrying out non-maximum suppression by utilizing the gradient value M (x, y) and the direction value alpha (x, y) of the pixel point to obtain a candidate pixel point, wherein the candidate pixel point comprises all edge points and partial non-edge points in the local image.
Setting a low threshold parameter and a high threshold parameter of a canny edge detection algorithm according to the gray average BL of the pixel points in the local image, wherein the low threshold parameter and the high threshold parameter are shown in the following formulas (4) and (5):
T_L = BL × γ   (4)
T_H = 3 × T_L   (5)
wherein T_L is the low threshold parameter, T_H is the high threshold parameter, and γ is the light difference coefficient; γ can be determined through experiments or optimized through an EM algorithm. The EM (expectation-maximization) algorithm finds maximum likelihood or maximum a posteriori estimates of parameters in probabilistic models that depend on unobservable hidden variables.
A plurality of edge points are then obtained from the candidate pixel points by using the low threshold parameter T_L and the high threshold parameter T_H. Candidate pixel points below T_L are non-edge points; candidate pixel points above T_H are edge points; candidate pixel points between T_L and T_H may be edge points. In this embodiment, a candidate pixel point between T_L and T_H is kept as an edge point only if it is adjacent to a candidate pixel point above T_H.
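The threshold selection of formulas (4) and (5) is a two-line computation. The sketch below assumes an illustrative light-difference coefficient γ = 0.8 (the text leaves γ to experiment or EM optimization):

```python
import numpy as np

def canny_thresholds(local, gamma=0.8):
    """Low/high thresholds per formulas (4)-(5): T_L = BL * gamma, T_H = 3 * T_L.
    gamma is the light-difference coefficient; 0.8 is an illustrative value."""
    bl = float(local.mean())   # formula (1): mean gray level of the local image
    t_low = bl * gamma
    t_high = 3.0 * t_low
    return t_low, t_high

local = np.full((100, 200), 90.0)   # stand-in local image, uniform gray level 90
t_low, t_high = canny_thresholds(local)
print(t_low, t_high)                # 72.0 216.0
```

In an OpenCV-based implementation these two values would typically be passed to cv2.Canny as its low and high hysteresis thresholds.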
Step S204: extracting a plurality of straight line segments by using a plurality of edge points;
in this step, a plurality of straight line segments are extracted by using a plurality of edge points specifically in a Hough transform mode, which is a parameter estimation technology using a voting principle and converts a detection problem in an image space to a parameter space by using point-line duality of the image space and the Hough parameter space. The first step of extracting the straight line segments in the Hough transform mode is to obtain a polar coordinate equation of the straight line segments corresponding to the plurality of edge points, which is specifically shown in the following formula (6):
ρ = x cos θ + y sin θ   (6)
the formula (6) is a polar coordinate equation of a straight line segment, x and y are respectively an abscissa and an ordinate of the edge point in the image coordinate system, ρ is a polar diameter of the edge point, and θ is a polar angle of the edge point.
And the second step is to convert the polar coordinate equation of the straight line segment into a corresponding rectangular coordinate equation.
Step S205: deleting the straight line segments with the slopes not meeting the preset slope requirement from the plurality of straight line segments;
in this step, as can be seen from the imaging principle, when the vehicle travels along a straight road, the road edges on both sides of the vehicle are at a certain slope in the captured image frame. For example, the slope in the image frame for the right road edge of the vehicle is negative, while the slope in the image frame for the left road edge of the vehicle is positive, while the slope of the edge line between the road stones 52 on the image frame is substantially close to zero. Therefore, according to the different selected positions of the calibration point P and the acquired prior information of the curb structure of the current road, the theoretical slopes of different road borders in the image frame can be calculated through the imaging principle, and meanwhile, a certain redundancy error is considered, so that the slope requirements of different road borders can be determined, and further, the straight line segments which are obviously not the road border lines, such as the straight line segments of the border lines between the road stones 52, are excluded, and the efficiency of subsequent road border detection can be improved.
Further, in another embodiment, the vehicle traveling direction may be detected by the GPS function of the vehicle or an angle sensor, so that the slope requirement may be changed according to the vehicle traveling direction. For example, when the vehicle turns to the right, the slope of the right road edge of the vehicle in the image frame may approach zero, and the slope requirement therefore needs to be adjusted according to actual conditions.
Therefore, the second step of extracting the straight line segment by the Hough transform further includes: limiting the value range of the polar angle theta to calculate the corresponding polar diameter rho, accumulating and counting the corresponding rho and theta parameter matrix units, and selecting the larger accumulation unit of the rho and theta parameter matrix units to further determine the polar coordinate equation of the straight line segment meeting the preset slope requirement as shown in the following formula (7):
ρ_i = x cos θ_i + y sin θ_i   (7)
in the present embodiment, θ_i has a value range of [−90°, 0°].
Further converting the polar coordinate equation (7) of the straight line segment satisfying the predetermined slope requirement into a rectangular coordinate equation, which is shown in the following formula (8):
y = f_i(x)   (8)
wherein, the formula (8) is a rectangular coordinate equation of the straight line segment satisfying the predetermined slope requirement.
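The slope-restricted Hough voting of formula (7) can be sketched with a toy accumulator (Python; a dictionary over integer ρ and whole-degree θ, far coarser than a production implementation):

```python
import numpy as np

def hough_peak(points, thetas_deg):
    """Accumulate votes rho = x*cos(theta) + y*sin(theta) (formula (7)) over a
    restricted polar-angle range; return the (rho, theta) cell with most votes."""
    votes = {}
    for x, y in points:
        for th_deg in thetas_deg:
            th = np.deg2rad(th_deg)
            rho = int(round(x * np.cos(th) + y * np.sin(th)))
            votes[(rho, th_deg)] = votes.get((rho, th_deg), 0) + 1
    return max(votes, key=votes.get)

# Edge points on the image-space line y = x; in polar form this line has
# theta = -45 degrees and rho = 0, since (x - y)/sqrt(2) = 0 for every point.
points = [(i, i) for i in range(50)]
rho, theta = hough_peak(points, range(-90, 1))   # theta limited to [-90, 0]
print(rho, theta)   # 0 -45
```

An OpenCV implementation could achieve the same angle restriction through the min_theta/max_theta parameters of cv2.HoughLines.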
Step S206: extracting a road edge line segment from a plurality of straight line segments according to the road edge structure characteristics of the current road;
in this step, the curb line segments may be extracted from the plurality of straight line segments according to a comparison result of an actual distance between road edges of the current road and a pixel distance between the straight line segments and/or according to a comparison result of an actual color difference between both sides of the road edges of the current road and a pixel color difference between both sides of the straight line segments.
When a road edge line segment is extracted from a plurality of straight line segments according to a comparison result of an actual distance between road edges of a current road and a pixel distance between the straight line segments, the method specifically includes:
The actual distance between the road edges of the current road, acquired in the space coordinate system, and the pixel distance between the straight line segments, acquired in the image coordinate system, are converted into the same coordinate system by using a calibration coefficient λ. The actual distance between the road edges of the current road is prior information of the road edges, and specifically includes the actual distance Scurb between the left line 531 and the right line 532 of the curb 53, the actual distance Sground between the left line 521 and the right line 522 of the road stone 52, and the actual distance Swhite between the left line 511 and the right line 512 of the lane line 51. The value ranges of Scurb, Sground and Swhite are generally 12-15 cm, 40 cm and 10-12 cm, respectively; the prior information may also take other values, which is not limited herein. The process of obtaining the pixel distance between the straight line segments is specifically as follows:
acquiring the coordinates of the intersection point of the straight line segment meeting the requirement of the preset slope and the calibration point straight line 6 in an image coordinate system, specifically: when the image frame is collected by the front image collecting device 3, the calibration point straight line 6 is a straight line passing through the calibration point P and perpendicular to the y-axis, and the abscissa of the intersection point of the straight line segment and the calibration point straight line 6 in the image coordinate system is obtained, as shown in the following formula (9):
xi = fi'(yP) (9)
where xi is the abscissa of the intersection of the straight line segment and the calibration point straight line 6 in the image coordinate system, and yP is the ordinate of the calibration point P in the image coordinate system.
The abscissas xi of the intersections of the straight line segments and the calibration point straight line 6 in the image coordinate system are sorted by size, and the pixel distances between the straight line segments are then obtained from the intersection abscissas xi, as shown in the following formula (10):
dk = |xi − xj| (10)
where dk is the pixel distance between the straight line segments, and j < i.
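The intersection-sorting step and formula (10) can be sketched as follows; the function name and the sample abscissas are hypothetical.

```python
def pixel_distances(xs):
    """Sort the intersection abscissas from formula (9) by size, then form
    the pairwise pixel distances d_k = |x_i - x_j| with j < i (formula (10))."""
    xs = sorted(xs)
    return [xs[i] - xs[j] for i in range(len(xs)) for j in range(i)]

# hypothetical intersection abscissas of four candidate segments
print(pixel_distances([120, 108, 160, 100]))   # -> [8, 20, 12, 60, 52, 40]
```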
In addition, when the image frame is captured by the side image capturing device 4, the calibration point straight line is a straight line passing through the calibration point P and perpendicular to the x-axis, and similarly, the pixel distance between the straight line segments is obtained by obtaining the vertical coordinates of the intersection points of the straight line segments and the calibration point straight line in the image coordinate system at this time.
After the pixel distances between the straight line segments are obtained, the actual distances between the road edges and the pixel distances between the straight line segments are converted into the same coordinate system by using the calibration coefficient λ: either the actual distances are converted into corresponding pixel distances so that both are in the same image coordinate system, or the pixel distances between the straight line segments are converted into corresponding actual distances so that both are in the same space coordinate system. For example, when converting the actual distances into the image coordinate system, the actual distances Scurb, Sground and Swhite correspond to the pixel distances dcurb, dground and dwhite, respectively. It will be appreciated that converting a pixel distance into the corresponding actual distance then consists of multiplying the pixel distance dk by the calibration coefficient λ.
And further performing difference operation on the actual distance between the road edges of the current road and the pixel distance between the straight line segments in the same coordinate system, and selecting the straight line segments with the difference smaller than the redundancy error. Taking the image coordinate system as an example, the pixel distances between the straight line segments with the difference smaller than the redundancy error are specifically expressed by the following equations (11), (12) and (13):
Dcurb = {dk : |dcurb − dk| < e} (11)
Dground = {dk : |dground − dk| < e} (12)
Dwhite = {dk : |dwhite − dk| < e} (13)
where e is the redundancy error, 0 < e < 0.5·min{dcurb, dground, dwhite}. Dcurb, Dground and Dwhite are the sets of pixel distances between straight line segments whose differences from the curb, road stone and lane line distances, respectively, are smaller than the redundancy error. The straight line segments corresponding to Dcurb, Dground and Dwhite are the curb line segments, which include the left line 531 and the right line 532 of the curb 53, the left line 521 and the right line 522 of the road stone 52, and the left line 511 and the right line 512 of the lane line 51.
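A small sketch of the conversion by the calibration coefficient λ and the redundancy-error selection of formulas (11)-(13); the numeric values (λ = 0.5 cm/pixel, a 13 cm curb width, the error bound) are illustrative assumptions.

```python
def match_distances(dk_list, d_type, e):
    """Keep the pixel distances whose difference from the expected
    road-edge pixel distance d_type is below the redundancy error e."""
    return [d for d in dk_list if abs(d_type - d) < e]

lam = 0.5                  # assumed calibration coefficient, cm per pixel
s_curb = 13.0              # assumed actual curb width (prior: 12-15 cm)
d_curb = s_curb / lam      # actual distance converted into pixel distance
dks = [8, 20, 12, 26, 40]  # candidate pixel distances d_k
print(match_distances(dks, d_curb, e=3))   # -> [26]
```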
When the curb line segment is extracted from the plurality of straight line segments according to the comparison result of the actual color difference of the two sides of the road edge of the current road and the pixel color difference of the two sides of the straight line segment, the method specifically comprises the following steps:
The gray average of the pixel points between each straight line segment and the adjacent straight line segment, or within a predetermined lateral width range on both sides of each straight line segment, is calculated. When the image frame is acquired by the front image acquisition device 3, the gray average is specifically expressed by the following formula (14):
V̄_type^t = ( Σ_{x = x_{type,l}^t}^{x_{type,r}^t} I(x, yP) ) / ( x_{type,r}^t − x_{type,l}^t + 1 ) (14)
where V̄_type^t is the gray average of the pixel points between each straight line segment and the adjacent straight line segment or within the predetermined lateral width range, type ∈ {curb, ground, white}, I(x, yP) is the gray value of the pixel at (x, yP), and x_{type,l}^t and x_{type,r}^t respectively denote the abscissas of the intersections of the left and right straight line segments of the t-th candidate with the calibration point straight line 6 passing through the calibration point P and perpendicular to the y-axis, or the left and right abscissas forming the predetermined lateral width range. The predetermined lateral width range may be set to 1/2, 1/3, etc. of the pixel distance between adjacent straight line segments, or to a fixed value such as a 2-pixel unit distance; for example, for a straight line segment a, the predetermined lateral width ranges on its two sides may each be set to 1/4 of the pixel distance between the straight line segment a and the adjacent left or right straight line segment. It is understood that when the image frame is acquired by the side image acquisition device 4, the gray average is obtained using the ordinates of the intersections of the straight line segments with the calibration point straight line passing through the calibration point P and perpendicular to the x-axis, together with the abscissa of the calibration point P.
Further, the pixel color difference on the two sides of each straight line segment is determined from the gray averages, the pixel color difference being the difference of the gray averages on the two sides of the straight line segment. The straight line segments whose pixel color difference on both sides is consistent with the actual color difference on both sides of the road edges of the current road, or is within the error allowable range, are then extracted from the plurality of straight line segments; the extracted straight line segments are the curb line segments. The curb line segments include the left line 531 and the right line 532 of the curb 53, the left line 521 and the right line 522 of the road stone 52, and the left line 511 and the right line 512 of the lane line 51. The actual gray average between the left and right lines of the curb 53 is Vcurb, the actual gray average between the left and right lines of the road stone 52 is Vground, and the actual gray average between the left and right lines of the lane line 51 is Vwhite; the magnitude relation of the three actual gray averages is Vwhite > Vcurb > Vground. For example, if the pixel color difference between the two sides of a straight line segment b is c, and the color difference between the two sides of the left line 531 of the curb 53 on the current road is also c, it is determined that the straight line segment b corresponds to the left line 531 of the curb 53.
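As a sketch of the gray-average comparison, assuming the average is taken over the pixels of the image row through the calibration point; the gray values, the function name and the sample profile are hypothetical.

```python
def gray_mean(image_row, x_left, x_right):
    """Mean gray value of the pixels between two abscissas on the image
    row through the calibration point P."""
    span = image_row[x_left:x_right + 1]
    return sum(span) / len(span)

# hypothetical gray profile: bright lane paint (200) between road surface (90)
row = [90] * 10 + [200] * 4 + [90] * 10
left = gray_mean(row, 0, 9)     # region left of the candidate segment
mid = gray_mean(row, 10, 13)    # region between the candidate segment pair
print(mid - left)               # -> 110.0 (bright marking on darker road)
```

The sign and size of such differences are what the Vwhite > Vcurb > Vground ordering is compared against.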
Further, in this embodiment, the two approaches of distance comparison and color comparison are adopted together to extract the curb line segments; that is, the curb line segments are extracted from the plurality of straight line segments both according to the comparison result of the actual distance between the road edges of the current road and the pixel distance between the straight line segments, and according to the comparison result of the actual color difference on the two sides of the road edges of the current road and the pixel color difference on the two sides of the straight line segments, which specifically includes:
converting the actual distance between the road edges of the current road acquired under a space coordinate system and the pixel distance between the straight line segments acquired under an image coordinate system into the same coordinate system by using a calibration coefficient; and performing difference operation on the actual distance between the road edges of the current road and the pixel distance between the straight line segments in the same coordinate system, and selecting a plurality of alternative straight line segments with the difference smaller than the redundancy error.
Calculating the gray average value of pixel points between each alternative straight line segment and the adjacent alternative straight line segment or in the preset lateral width range at the two sides of each alternative straight line segment, and determining the pixel color difference at the two sides of each alternative straight line segment according to the gray average value; and extracting the alternative straight line segments, of which the pixel color difference on two sides of the alternative straight line segment is consistent with the actual color difference on two sides of the road edge of the current road or is within the error allowable range, from the plurality of alternative straight line segments.
Step S207: tracking a plurality of straight line segments of a subsequent image frame acquired subsequently by using the acquired curb line segments, and further extracting the curb line segments from the plurality of straight line segments of the subsequent image frame;
and tracking a plurality of straight line segments of the subsequent image frame acquired subsequently by using the obtained curb line segments, and further extracting the curb line segments from the plurality of straight line segments of the subsequent image frame. Specifically, a plurality of straight line segments of a subsequent image frame may be tracked by using a neighbor method or a kalman filter method.
The process of tracking the straight line segments by the neighbor method is specifically as follows: the coordinates of the plurality of straight line segments of the subsequent image frame in the image coordinate system are obtained, the coordinates of each curb line segment of the previous image frame in the image coordinate system are subtracted from them, and when the coordinate error between the two is smaller than the redundancy error, the straight line segment of the subsequent image frame and the curb line segment of the previous image frame are determined to be the same straight line. Specifically, the abscissa of the intersection of a straight line segment of the subsequent image frame with the calibration point straight line 6, which passes through the calibration point P and is perpendicular to the y-axis, is obtained in the image coordinate system by using formula (9) above; the abscissa of the intersection of each curb line segment of the previous image frame with the calibration point straight line 6 is subtracted from it, and if the difference of the abscissas of the straight line segment and a certain curb line segment is smaller than the redundancy error, i.e. |x″ − x′| < e, the two are determined to be the same straight line, where x″ is the abscissa of the intersection of the straight line segment of the subsequent image frame with the calibration point straight line 6, and x′ is the abscissa of the intersection of a certain curb line segment of the previous image frame with the calibration point straight line 6.
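The neighbor-method matching rule |x″ − x′| < e can be sketched as below; the function name, the abscissas and the error bound are illustrative.

```python
def track_neighbor(prev_xs, new_xs, e):
    """For each curb-line abscissa of the previous frame, find the nearest
    abscissa in the subsequent frame; accept it only if |x'' - x'| < e."""
    matches = {}
    for xp in prev_xs:
        best = min(new_xs, key=lambda xn: abs(xn - xp))
        matches[xp] = best if abs(best - xp) < e else None
    return matches

# previous-frame curb abscissas vs. subsequent-frame segment abscissas
print(track_neighbor([100, 140], [102, 141, 300], e=5))
# -> {100: 102, 140: 141}
```

A value of None marks a curb line that could not be re-identified in the subsequent frame, e.g. under shadow or occlusion.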
The Kalman filter is a recursive filter proposed by Kalman for time-varying linear systems that can be described by a differential-equation model containing orthogonal state variables; it incorporates past measurement estimation errors into the new measurement error in order to estimate future errors. The Kalman filter method represents the coordinate point data of the straight line segments and the curb line segments as the state of a Kalman filter and, using the Kalman filtering principle, tracks each straight line segment.
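A minimal one-dimensional Kalman filter over a single curb-line abscissa, assuming a constant-position state model; the noise parameters and class name are illustrative assumptions, not values from the text.

```python
class Kalman1D:
    """Constant-position Kalman filter for one curb-line abscissa;
    q is the process noise, r the measurement noise (assumed values)."""
    def __init__(self, x0, p0=1.0, q=0.01, r=1.0):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        self.p += self.q                  # predict: uncertainty grows
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

kf = Kalman1D(100.0)
for z in [101, 99, 100, 101, 99]:   # noisy abscissa measurements
    est = kf.update(z)
print(round(est, 1))                # -> 100.0
```

The smoothed estimate changes only gradually between frames, which is what lets the tracker ride out frames where the true edge is hard to detect.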
After the tracking of the straight line segments of an image frame is completed, the current coordinate information of each curb line segment is updated accordingly. Under complex working conditions such as water stains, shadows and occlusion, the real road edges in some image frames may be difficult to detect; the two tracking methods above use the historical information of the road edge structure to track the straight line segments under such conditions and thus extract the curb line segments.
Step S208: and calculating the actual distance of the road edge line segment relative to the vehicle in the space coordinate system according to the pixel coordinates of the road edge line segment in the image coordinate system.
The actual distance of the curb line segment relative to the vehicle in the space coordinate system is calculated according to the pixel coordinates of the curb line segment in the image coordinate system. In the present embodiment, the actual distance is calculated using the pixel coordinates, in the image coordinate system, of the right line 522 of the road stone 52 among the curb line segments. When the image frame is acquired by the front image acquisition device 3, as shown in fig. 5, the abscissa x1 of the intersection of the right line 522 of the road stone 52 with the calibration point straight line 6, and the abscissa x2 of the intersection of the viewing-angle central axis 31 of the front image acquisition device 3 with the calibration point straight line 6, are acquired; the two abscissas are subtracted to obtain the pixel distance between the right line 522 and the viewing-angle central axis 31, L1 = |x1 − x2|, and the actual distance of the curb line segment relative to the vehicle in the space coordinate system is then calculated using the calibration coefficient λ and the pixel distance L1 as S1 = λ × L1. When the image frame is acquired by the side image acquisition device 4, the calibration point lies on the viewing-angle central axis 41 of the side image acquisition device 4, and the calibration point straight line is the straight line passing through the calibration point P and perpendicular to the x-axis, i.e. the viewing-angle central axis 41 of the side image acquisition device 4; in this case the ordinate L2 of the intersection of the right line 522 with the calibration point straight line is obtained, and the actual distance of the curb line segment relative to the vehicle in the space coordinate system is calculated using the calibration coefficient λ and the ordinate L2 as S2 = λ × L2.
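The distance computation S1 = λ × L1 can be sketched as follows; the coordinates and the calibration coefficient are hypothetical values.

```python
def curb_distance(x_edge, x_axis, lam):
    """Actual distance of a curb line from the vehicle: pixel offset of the
    curb line from the viewing-angle central axis, scaled by lam."""
    L1 = abs(x_edge - x_axis)   # pixel distance L1 = |x1 - x2|
    return lam * L1             # S1 = lambda * L1

# hypothetical: right side line at x1 = 420, central axis at x2 = 320,
# calibration coefficient 0.5 cm per pixel
print(curb_distance(420, 320, 0.5))   # -> 50.0 (cm)
```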
In other embodiments, the actual distance of the curb line segment from the vehicle in the space coordinate system may also be calculated by using the pixel coordinates of other curb line segments, such as the left line 531 of the curb 53 in the image coordinate system, which is not limited herein.
The actual distance between the vehicle and the curb line segment at the current moment is calculated from the image frames acquired by the side image acquisition device. This actual distance can then be evaluated: if it is smaller than a certain preset distance threshold, a warning sound, a text display or the like is used to prompt that the actual distance between the vehicle and the curb line segment at the current moment is outside the safe distance range, so that the driver or an automatic driving system can adjust the actual distance between the vehicle and the curb line segment according to the prompt information. The actual distance between the curb line segment and the vehicle obtained from the image frames acquired by the front image acquisition device is a prediction of the actual distance between the vehicle and the curb line segment at a certain future moment; it can be evaluated in the same way, and if it exceeds a certain preset distance threshold, prompt information is likewise issued to indicate that the actual distance between the vehicle and the curb line segment is about to exceed the safe distance range, so that the driver or the automatic driving system can adjust the actual distance between the vehicle and the curb line segment in advance.
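The threshold check described above might look like the sketch below; the 30 cm threshold and the message strings are illustrative assumptions.

```python
def check_safe_distance(s_cm, threshold_cm=30.0):
    """Warn when the measured curb distance falls below a preset
    distance threshold (threshold value assumed for illustration)."""
    if s_cm < threshold_cm:
        return "warning: curb distance outside the safe range"
    return "ok"

print(check_safe_distance(25.0))   # -> warning: curb distance outside the safe range
print(check_safe_distance(80.0))   # -> ok
```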
In addition, after the actual distance between the curb line segment and the vehicle in the space coordinate system is calculated, the actual distance is further displayed on the vehicle in real time, for example on a display screen installed on the vehicle. The above prompt information can likewise be displayed on the vehicle.
It is understood that the second embodiment of the road edge detection method of the present invention acquires a partial image in a predetermined area around a preset index point from an image frame by acquiring the image frame including road edge information of a current road on which a vehicle is traveling; performing edge detection on the local image to obtain a plurality of edge points; further extracting a plurality of straight line segments by using a plurality of edge points; deleting the straight line segments with the slopes not meeting the preset slope requirement from the plurality of straight line segments; extracting a road edge line segment from a plurality of straight line segments according to a comparison result of an actual distance between road edges of a current road and a pixel distance between straight line segments and/or according to a comparison result of actual color differences of two sides of the road edges of the current road and pixel color differences of two sides of the straight line segments; tracking a plurality of straight line segments of a subsequent image frame by using the obtained curb line segments so as to extract the curb line segments from the plurality of straight line segments of the subsequent image frame; and finally, calculating the actual distance of the road edge line segment relative to the vehicle in the space coordinate system according to the pixel coordinates of the road edge line segment in the image coordinate system.
By the aid of the method, the curb line segments of the current road on which the vehicle travels can be detected automatically, the operation burden on the driver is reduced, and the detection accuracy is high. In addition, performing edge detection in the local image and deleting the straight line segments whose slopes do not satisfy the predetermined slope requirement can improve the road edge detection efficiency; tracking the straight line segments in subsequent image frames enables fast extraction of the curb line segments; and finally, calculating the actual distance of the curb line segment relative to the vehicle in the space coordinate system from its pixel coordinates in the image coordinate system makes it possible to automatically measure the actual distance between the vehicle and the curb line segment at the current moment and to predict the actual distance at a future moment, which reduces the operation burden on the driver, yields distances with high accuracy, and contributes to safe driving of the vehicle.
Referring to fig. 6, an embodiment of a road edge detection apparatus according to the present invention includes:
an image frame acquisition module 71, an edge detection module 72, a straight line segment extraction module 73, and a curb line segment extraction module 74.
The image frame acquiring module 71 is configured to acquire an image frame including road edge information of a current road on which the vehicle is traveling. The image frame acquiring module 71 is specifically a front-end image capturing device or a side-end image capturing device described in each of the above embodiments.
The edge detection module 72 is configured to perform edge detection on the image frame to obtain a plurality of edge points.
The edge detection module 72 is further configured to obtain a local image in a predetermined area around the preset calibration point from the image frame, and perform edge detection in the local image.
The edge detection module 72 is further configured to calculate a gray average of pixels in the local image during the edge detection in the local image, set a low threshold parameter and a high threshold parameter of a canny edge detection algorithm according to the gray average of the pixels in the local image, and perform edge detection in the local image by using the canny edge detection algorithm.
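A sketch of deriving the Canny thresholds from the local-image gray mean; the 0.5/1.5 ratios are a common heuristic and an assumption here, since the text does not give the exact mapping from gray mean to thresholds.

```python
def canny_thresholds(pixels, low_ratio=0.5, high_ratio=1.5):
    """Set the Canny low/high hysteresis thresholds from the gray mean of
    the local image (ratio values are illustrative)."""
    mean = sum(pixels) / len(pixels)
    return low_ratio * mean, high_ratio * mean

# hypothetical local-image gray values with mean 100
lo, hi = canny_thresholds([100, 120, 80, 100])
print(lo, hi)   # -> 50.0 150.0
```

Scaling the thresholds with the local brightness keeps the detector stable across lighting changes along the road.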
The straight line segment extraction module 73 is configured to extract a plurality of straight line segments using a plurality of edge points.
The curb line segment extraction module 74 is configured to extract a curb line segment from a plurality of straight line segments according to the curb structure characteristics of the current road. The curb line segment extraction module 74 is further configured to extract a curb line segment from the plurality of straight line segments according to a comparison result of an actual distance between road edges of the current road and a pixel distance between straight line segments and/or according to a comparison result of an actual color difference between both sides of the road edges of the current road and a pixel color difference between both sides of the straight line segment.
The straight line segment extraction module 73 is further configured to delete a straight line segment having a slope that does not satisfy a predetermined slope requirement from the plurality of straight line segments before the curb segment extraction module 74 extracts a curb segment from the plurality of straight line segments according to a comparison result of an actual distance between road edges of the current road and a pixel distance between the straight line segments and/or according to a comparison result of an actual color difference between both sides of a road edge of the current road and a pixel color difference between both sides of a straight line segment.
When the curb line segment extraction module 74 is configured to extract the curb line segments from the plurality of straight line segments according to the comparison result of the actual distance between the road edges of the current road and the pixel distance between the straight line segments, it is further configured to:
converting the actual distance between the road edges of the current road acquired under a space coordinate system and the pixel distance between the straight line segments acquired under an image coordinate system into the same coordinate system by using a calibration coefficient; and performing difference operation on the actual distance between the road edges of the current road and the pixel distance between the straight line segments in the same coordinate system, and selecting the straight line segment with the difference smaller than the redundancy error. The calibration coefficient is obtained by calculating the actual coordinate of the preset calibration point in the space coordinate system and the image coordinate of the calibration point in the image coordinate system.
When the curb line segment extraction module 74 is configured to extract the curb line segment from the plurality of straight line segments according to the comparison result of the actual color difference of both sides of the curb of the current road and the pixel color difference of both sides of the straight line segment, it is further configured to:
calculating the gray average value of pixel points between each straight line segment and the adjacent straight line segment or in the preset lateral width range at the two sides of each straight line segment, and determining the pixel color difference at the two sides of each straight line segment according to the gray average value; and then extracting the straight line segments with the difference of the pixel colors at the two sides of the straight line segments consistent with the actual color difference at the two sides of the road edge of the current road or within the error allowable range from the plurality of straight line segments.
When the curb line segment extraction module 74 is configured to extract the curb line segments from the plurality of straight line segments according to the comparison result of the actual distance between the road edges of the current road and the pixel distance between the straight line segments and according to the comparison result of the actual color difference between both sides of the road edges of the current road and the pixel color difference between both sides of the straight line segments, it is further configured to:
converting the actual distance between the road edges of the current road acquired under a space coordinate system and the pixel distance between the straight line segments acquired under an image coordinate system into the same coordinate system by using a calibration coefficient; and performing difference operation on the actual distance between the road edges of the current road and the pixel distance between the straight line segments in the same coordinate system, and selecting a plurality of alternative straight line segments with the difference smaller than the redundancy error. The calibration coefficient is obtained by calculating the actual coordinate of the preset calibration point in the space coordinate system and the image coordinate of the calibration point in the image coordinate system.
The curb line segment extraction module 74 is further configured to calculate a gray average of pixels in a predetermined lateral width range between each candidate straight line segment and an adjacent candidate straight line segment or on both sides of each candidate straight line segment, and determine a pixel color difference on both sides of the candidate straight line segment according to the gray average; and then extracting the alternative straight line segments, the color difference of the pixels at the two sides of the alternative straight line segments is consistent with the actual color difference at the two sides of the road edge of the current road or is within the error allowable range, from the plurality of alternative straight line segments.
After the road edge line segment is extracted, the road edge line segment extracting module 74 is further configured to track a plurality of straight line segments of a subsequent image frame obtained subsequently by using the obtained road edge line segment, and further extract the road edge line segment from the plurality of straight line segments of the subsequent image frame.
Further, the road edge detection device further includes: and the actual distance calculation module is used for calculating the actual distance of the road edge line segment relative to the vehicle under the space coordinate system according to the pixel coordinates of the road edge line segment under the image coordinate system.
The invention also provides a vehicle which comprises the road edge detection device in the embodiment, and the road edge detection device can be used for automatically detecting the road edge of the current road in real time in the driving process of the vehicle.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.