CN116543365B - Lane line identification method and device, electronic equipment and storage medium

Info

Publication number
CN116543365B
CN116543365B (application CN202310822241.3A)
Authority
CN
China
Prior art keywords
lane line
predicted
points
lane
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310822241.3A
Other languages
Chinese (zh)
Other versions
CN116543365A (en)
Inventor
邓志巧
彭易锦
方志杰
何山波
陈春光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GAC Aion New Energy Automobile Co Ltd
Original Assignee
GAC Aion New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GAC Aion New Energy Automobile Co Ltd
Priority to CN202310822241.3A
Publication of CN116543365A
Application granted; publication of CN116543365B
Legal status: Active

Abstract

The embodiment of the application provides a lane line identification method and device, electronic equipment and a storage medium. The method comprises the following steps: obtaining picture data containing lane lines; carrying out lane line modeling according to the picture data to obtain lane line data; carrying out regression processing on the lane line data to obtain lane line regression coordinates; and carrying out fusion processing on the lane line regression coordinates and a foreground-background feature map to obtain a recognition result. By implementing the embodiment of the application, lane lines in pictures can be evaluated accurately, high-precision lane line identification is achieved, lane line conditions in complex scenes can be identified accurately, the real-time requirement can be met, and identification and evaluation can be performed more quickly and accurately.

Description

Lane line identification method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of data processing, in particular to a lane line identification method, a lane line identification device, electronic equipment and a storage medium.
Background
Lane line recognition is one of the key problems in the environment perception field of automatic driving; its purpose is to obtain the exact shape of each lane line on the road. However, the state of the lane lines is uncertain, as they may be obscured, worn or otherwise affected by weather, which poses a significant challenge for lane line identification. Meanwhile, since lane line recognition must be performed rapidly and in real time, the task places very high requirements on the real-time performance of the algorithm.
Prior-art methods for evaluating lane lines struggle to meet real-time requirements. Lane line identification based on the SCNN algorithm is very effective for recognizing slender lane lines, but its slow speed limits its applicability in practice. Lane line identification based on the LaneNet method is a multitask model combining semantic segmentation with a per-pixel vector representation and can evaluate lane lines effectively; however, post-processing of the evaluation result depends on a clustering algorithm, which is time-consuming and makes the real-time requirement difficult to meet. A pooling method based on a line anchor is combined with an attention mechanism to acquire more global information, but the requirement of a preset anchor shape affects the flexibility of the evaluation. CondLaneNet introduces a conditional lane detection strategy based on conditional convolution and a row-anchor algorithm; however, in some complex scenes the starting point is difficult to identify, resulting in poor performance.
Disclosure of Invention
The embodiments of the application aim to provide a lane line identification method and device, electronic equipment and a storage medium that can accurately evaluate lane lines in pictures, achieve high-precision lane line identification, accurately evaluate lane line conditions in complex scenes, meet the real-time requirement, and perform evaluation more quickly and accurately.
In a first aspect, an embodiment of the present application provides a lane line recognition method, where the method includes:
obtaining picture data containing lane lines;
carrying out lane line modeling according to the picture data to obtain lane line data;
carrying out regression processing on the lane line data to obtain lane line regression coordinates;
and carrying out fusion processing on the lane line regression coordinates and the foreground and background feature map to obtain a recognition result.
In the implementation process, regression processing is performed on the lane line data and the lane line regression coordinates are then fused, so that lane lines in pictures can be accurately identified and evaluated, high-precision lane line identification is achieved, lane line conditions in complex scenes can be accurately evaluated, the real-time requirement can be met, and identification and evaluation can be carried out more quickly and accurately.
Further, the step of performing regression processing on the lane line data to obtain lane line regression coordinates includes:
carrying out feature extraction on the lane line data containing a first existence mark in the lane line data to obtain a first feature map;
performing convolution operation on the first feature map to obtain a first lane line pixel feature;
inputting the pixel characteristics of the first lane line into a full-connection layer to obtain lane line characteristic data;
and carrying out regression processing on the lane line characteristic data to obtain the lane line regression coordinates.
In the implementation process, the lane line data containing the first existence mark is subjected to feature extraction and convolution, and then is input into the full-connection layer, so that the lane line regression coordinates can be accurately obtained, and the data precision is improved.
Further, the step of fusing the lane line regression coordinates and the foreground and background feature map to obtain an identification result includes:
acquiring a regression lane line starting point coordinate and a regression lane line ending point coordinate in the lane line regression coordinates;
extracting effective lane line coordinates in the initial point coordinates of the regression lane lines and the final point coordinates of the regression lane lines;
obtaining the foreground and background feature map;
fusing the effective lane line coordinates and foreground coordinate points in the foreground and background feature map to obtain predicted coordinates;
evaluating the predicted coordinates to obtain an evaluation result;
and taking the predicted coordinates and the evaluation result as the identification result.
In the implementation process, the effective lane line coordinates in the initial point coordinates and the final point coordinates of the regression lane line are extracted, so that the calculation efficiency can be improved, the time is shortened, and meanwhile, the accuracy of the predicted coordinates is not influenced.
Further, the step of obtaining the foreground-background feature map includes:
carrying out feature extraction on lane line data containing a second existence mark in the lane line data to obtain a second feature map;
performing convolution operation on the second feature map to obtain a second lane line pixel feature;
obtaining a truth map;
and performing supervised training on the second lane line pixel features against the truth map through a first loss function to obtain the foreground-background feature map.
In the implementation process, the lane line data containing the second existence mark undergoes feature extraction and convolution and is then supervised against the truth map, so that the foreground-background feature map can be obtained accurately and errors in the calculation process are reduced.
Further, the step of obtaining the truth map includes:
performing interpolation filling on the lane line coordinates containing existence marks in the lane line data to obtain a lane line curve;
and resetting the pixel coordinate points in the lane line curve to obtain the truth map.
In the implementation process, the lane line coordinates containing the existence marks in the lane line data are interpolated and filled, so that the obtained lane line curve can accurately reflect the change relation between the pixel coordinate points in the lane line, and the accuracy is improved.
Further, the step of fusing the effective lane line coordinates and the foreground coordinate points in the foreground and background feature map to obtain predicted coordinates includes:
mapping the effective lane line coordinates to the foreground and background feature map to obtain a coordinate point mapping distance difference value;
selecting, from the effective lane line coordinates, a plurality of coordinate points whose coordinate point mapping distance difference is smaller than the distance threshold;
obtaining an average value according to the coordinate points;
and obtaining the predicted coordinates according to the average value.
In the implementation process, the effective lane line coordinates are mapped onto the foreground and background feature images for fusion, so that the error of the obtained predicted coordinates can be reduced.
Further, the step of evaluating the predicted coordinates to obtain an evaluation result includes:
obtaining the number of correct predicted points and the number of non-zero predicted points according to the predicted coordinate points;
acquiring the number of effective points in the lane line data;
obtaining Iou values according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points;
and obtaining the evaluation result according to the Iou value.
In the implementation process, the Iou value is obtained according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points, and is then evaluated, so that the evaluation result fully reflects changes in the number of predicted points and the accuracy of the evaluation result is improved.
Further, the step of obtaining the number of correct predicted points and the number of non-zero predicted points according to the predicted coordinate points includes:
obtaining a truth lane line tangent angle;
obtaining the lane line normal direction distance according to the truth lane line tangent angle;
judging whether the lane line normal direction distance is larger than the preset distance of the predicted coordinate point;
if yes, the predicted coordinate point is an effective predicted coordinate point, and the effective predicted coordinate points are counted to obtain the number of correct predicted points and the number of non-zero predicted points.
In the implementation process, the number of correct predicted points and the number of non-zero predicted points are obtained according to the truth lane line tangent angle and the preset distance of the predicted coordinate points, so that the number of predicted points can be known more accurately and omissions are avoided.
Further, the step of obtaining the evaluation result according to the Iou value includes:
traversing the predicted lane lines;
if the existence mark of the labeled lane line corresponding to a predicted lane line is the first existence mark while the existence mark of the predicted lane line is the second existence mark, judging the predicted lane line to be a false detection, and obtaining the number of falsely detected lane lines;
if the existence mark of the labeled lane line corresponding to the predicted lane line is the second existence mark, obtaining the number of correctly predicted lane lines and the number of missed lane lines according to the Iou value;
obtaining the precision, recall and F1 score according to the number of missed lane lines, the number of correctly predicted lane lines and the number of falsely detected lane lines;
and generating the evaluation result according to the precision, recall and F1 score.
Further, a Iou value is obtained from the correct number of predicted points, the number of non-zero predicted points, and the number of valid points by the following formula:
Iou = N_pred_valid / (N_pred_nzer + N_gt_valid);
wherein N_pred_valid is the number of correct predicted points, N_pred_nzer is the number of non-zero predicted points, and N_gt_valid is the number of effective points.
Further, the step of obtaining the number of correctly predicted lane lines and the number of missed lane lines according to the Iou value includes:
judging whether the Iou value is larger than the Iou threshold;
if yes, the labeled lane line corresponding to the predicted lane line with that Iou value is correctly detected, and the number of correctly predicted lane lines is obtained;
if not, the labeled lane line corresponding to the predicted lane line with that Iou value is missed, and the number of missed lane lines is obtained.
In a second aspect, an embodiment of the present application further provides a lane line identifying apparatus, where the apparatus includes:
the data acquisition module is used for acquiring picture data containing lane lines;
the lane line modeling module is used for carrying out lane line modeling according to the picture data to obtain lane line data;
the regression processing module is used for carrying out regression processing on the lane line data to obtain lane line regression coordinates;
and the fusion processing module is used for carrying out fusion processing on the lane line regression coordinates and the foreground background feature map to obtain a recognition result.
In the implementation process, regression processing is performed on the lane line data and the lane line regression coordinates are then fused, so that lane lines in pictures can be accurately identified and evaluated, high-precision lane line identification is achieved, lane line conditions in complex scenes can be accurately evaluated, the real-time requirement can be met, and identification and evaluation can be carried out more quickly and accurately.
In a third aspect, an electronic device provided in an embodiment of the present application includes: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of the first aspects when the computer program is executed.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium in which instructions are stored; when the instructions are run on a computer, they cause the computer to perform the method according to any one of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product, which when run on a computer causes the computer to perform the method according to any of the first aspects.
Additional features and advantages of the disclosure will be set forth in the description that follows, will in part be obvious from the description, or may be learned by practice of the techniques of the disclosure, and can be realized by the preferred embodiments of the application described in detail below with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be construed as limiting the scope; a person skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flow chart of a lane line recognition method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a lane line recognition device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
The following describes in further detail the embodiments of the present application with reference to the drawings and examples. The following examples are illustrative of the application and are not intended to limit the scope of the application.
Example 1
Fig. 1 is a flow chart of a lane line recognition method according to an embodiment of the present application, as shown in fig. 1, the method includes:
s1, obtaining picture data containing lane lines;
s2, carrying out lane line modeling according to the picture data to obtain lane line data;
s3, carrying out regression processing on the lane line data to obtain lane line regression coordinates;
and S4, carrying out fusion processing on the lane line regression coordinates and the foreground and background feature map to obtain a recognition result.
In the implementation process, regression processing is performed on the lane line data and the lane line regression coordinates are then fused, so that lane lines in pictures can be accurately identified and evaluated, high-precision lane line identification is achieved, lane line conditions in complex scenes can be accurately evaluated, the real-time requirement can be met, and identification and evaluation can be carried out more quickly and accurately.
In S2, the embodiment of the application models the labeled image data. The labeling scheme collects the lane lines covering the lane to the left of the vehicle, the lane occupied by the vehicle, and the lane to the right of the vehicle, and describes the i-th lane line as a point set ordered from left to right: Line_i = {(W1, H1), (W2, H2), …, (Wm, Hm)}, where i ∈ {1, …, N}, N is the number of lane lines to be learned in one picture, and m is the number of lane line coordinate points of each lane line.
Lane line modeling samples at equal H intervals. When a certain lane line is missing from the picture, its existence is marked 0; in the embodiment of the application the first existence mark is 1 and the second existence mark is 0. When points of a certain lane line are missing, the corresponding lane line coordinate points are marked -2.
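As an illustration of this labeling scheme, the following Python sketch encodes the annotated point sets with equal-H sampling, a 0/1 existence mark per lane, and -2 padding for missing points. The function name build_label, the sampling rows and the array layout are assumptions for illustration, not taken from the patent.

import numpy as np

def build_label(lines, n_lanes, sample_rows):
    # Encode annotated lane lines as fixed-size arrays (illustrative sketch).
    # lines: one point set [(w1, h1), ..., (wm, hm)] per lane, left to right.
    # sample_rows: the equally spaced H coordinates used for sampling.
    exist = np.zeros(n_lanes, dtype=np.int64)            # 0 = lane missing
    coords = np.full((n_lanes, len(sample_rows)), -2.0)  # -2 = point missing
    for i, pts in enumerate(lines[:n_lanes]):
        if not pts:
            continue
        exist[i] = 1                                     # lane present
        pts = sorted(pts, key=lambda p: p[1])
        ws = [p[0] for p in pts]
        hs = [p[1] for p in pts]
        for j, h in enumerate(sample_rows):
            if hs[0] <= h <= hs[-1]:                     # only inside the annotated span
                coords[i, j] = np.interp(h, hs, ws)
    return exist, coords

exist, coords = build_label(
    [[(100, 200), (120, 300), (140, 400)], []],          # lane 2 is missing
    n_lanes=2, sample_rows=np.arange(200, 401, 50))
print(exist)    # [1 0]
print(coords)   # interpolated W per sampled row; -2.0 where no lane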
Further, S3 includes:
carrying out feature extraction on lane line data containing a first existence mark in the lane line data to obtain a first feature map;
performing convolution operation on the first feature map to obtain a first lane line pixel feature;
inputting the pixel characteristics of the first lane line into the full-connection layer to obtain lane line characteristic data;
and carrying out regression processing on the lane line characteristic data to obtain lane line regression coordinates.
In the implementation process, the lane line data containing the first existence mark is subjected to feature extraction and convolution, and then is input into the full-connection layer, so that the lane line regression coordinates can be accurately obtained, and the data precision is improved.
In the embodiment of the application, ResNet is adopted as the backbone of the network structure to extract features from the lane line data containing the first existence mark, and an FPN network is then used to carry out the regression processing and fusion processing.
The Feat[0] output of the FPN is convolved to collect features near each lane line pixel, and the lane line feature data are further extracted using the full-connection layer, comprising: lane line coordinate features, lane line existence features, lane line start point coordinate features, and lane line end point coordinate features. The lane line features extracted by the full-connection layer are arranged in a specific physical order to preserve the mapping between the output lane line feature data and the physical lane line IDs.
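The patent does not spell out layer sizes, so the following PyTorch sketch of such a regression head is an assumption for illustration: in_ch, feat_hw, n_lanes, n_rows and the class name LaneRegressionHead are invented. It shows how one fully connected output can be sliced in a fixed physical order into coordinate, existence, start point and end point features so that each slice keeps a stable mapping to a physical lane ID.

import torch
import torch.nn as nn

class LaneRegressionHead(nn.Module):
    # Illustrative regression branch: a convolution over the FPN's Feat[0],
    # then one fully connected layer whose output is sliced in a fixed order
    # (coords, existence, start, end) per lane. All sizes are assumptions.
    def __init__(self, in_ch=256, feat_hw=(20, 50), n_lanes=4, n_rows=40):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))
        flat = 64 * feat_hw[0] * feat_hw[1]
        # per lane: n_rows coords + 1 existence + 2 start (w, h) + 2 end (w, h)
        self.fc = nn.Linear(flat, n_lanes * (n_rows + 5))
        self.n_lanes, self.n_rows = n_lanes, n_rows

    def forward(self, feat0):
        x = self.conv(feat0).flatten(1)
        out = self.fc(x).view(-1, self.n_lanes, self.n_rows + 5)
        coords = out[..., :self.n_rows]                  # regressed W per sampled row
        exist = out[..., self.n_rows]                    # lane existence logit
        start = out[..., self.n_rows + 1:self.n_rows + 3]
        end = out[..., self.n_rows + 3:]
        return coords, exist, start, end

head = LaneRegressionHead()
coords, exist, start, end = head(torch.randn(1, 256, 20, 50))
print(coords.shape, exist.shape, start.shape, end.shape)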
Further, S4 includes:
acquiring a regression lane line starting point coordinate and a regression lane line ending point coordinate in lane line regression coordinates;
extracting effective lane line coordinates in the initial point coordinates of the regression lane lines and the final point coordinates of the regression lane lines;
obtaining a foreground background feature map;
fusing the effective lane line coordinates and foreground coordinate points in the foreground background feature map to obtain predicted coordinates;
evaluating the predicted coordinates to obtain an evaluation result;
and taking the predicted coordinates and the evaluation result as the recognition result.
In the implementation process, the effective lane line coordinates in the initial point coordinates and the final point coordinates of the regression lane line are extracted, so that the calculation efficiency can be improved, the time is shortened, and meanwhile, the accuracy of the predicted coordinates is not influenced.
Further, the step of obtaining the foreground-background feature map includes:
carrying out feature extraction on lane line data containing a second existence mark in the lane line data to obtain a second feature map;
performing convolution operation on the second feature map to obtain a second lane line pixel feature;
obtaining a truth map;
and performing supervised training on the second lane line pixel features against the truth map through the first loss function to obtain the foreground-background feature map.
In the implementation process, the lane line data containing the second existence mark undergoes feature extraction and convolution and is then supervised against the truth map, so that the foreground-background feature map can be obtained accurately and errors in the calculation process are reduced.
Extracting the second feature map Feat[1] can enhance the lane line feature representation. The Feat[1] feature map output by the FPN is taken as input, a convolution operation is performed, and supervised training against the truth map through a Focal Loss function (the first loss function) yields the foreground-background feature map.
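The patent names Focal Loss as the first loss function but gives no parameters; a standard binary focal loss over the predicted foreground-background map might look like the sketch below, where the alpha and gamma values are the usual defaults and are assumed here.

import torch
import torch.nn.functional as F

def focal_loss(fg_logits, truth_map, alpha=0.25, gamma=2.0):
    # Binary focal loss between the predicted foreground-background logits and
    # the 0/1 truth map, both of shape (B, H, W). alpha/gamma are assumptions.
    p = torch.sigmoid(fg_logits)
    ce = F.binary_cross_entropy_with_logits(fg_logits, truth_map, reduction="none")
    p_t = p * truth_map + (1 - p) * (1 - truth_map)      # prob of the true class
    alpha_t = alpha * truth_map + (1 - alpha) * (1 - truth_map)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

loss = focal_loss(torch.randn(2, 80, 200), torch.randint(0, 2, (2, 80, 200)).float())
print(loss.item())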
Further, the step of obtaining the truth map includes:
performing interpolation filling on the lane line coordinates containing existence marks in the lane line data to obtain a lane line curve;
and resetting the pixel coordinate points in the lane line curve to obtain the truth map.
In the implementation process, the lane line coordinates containing the existence marks in the lane line data are interpolated and filled, so that the obtained lane line curve can accurately reflect the change relation between the pixel coordinate points in the lane line, and the accuracy is improved.
Curve fitting is performed on the labeled lane line coordinates, and the fitted curve is filled into a line of width given by a radius e. The pixel coordinate points on the fitted lane line curve are set to 1 and all other points to 0, which gives the truth map.
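A minimal sketch of this truth map construction follows, assuming linear interpolation between the labeled points and a fill of radius `radius` in the image width direction; both choices are assumptions where the patent says only interpolation filling and a line of width at a radius e.

import numpy as np

def make_truth_map(lane_ws, lane_hs, height, width, radius=2):
    # Interpolate the annotated points into a dense curve, then set pixels
    # within `radius` of the curve (width direction) to 1, all others to 0.
    truth = np.zeros((height, width), dtype=np.uint8)
    hs = np.arange(int(min(lane_hs)), int(max(lane_hs)) + 1)
    ws = np.interp(hs, lane_hs, lane_ws)                 # interpolation filling
    for h, w in zip(hs, ws):
        lo = max(0, int(round(w)) - radius)
        hi = min(width, int(round(w)) + radius + 1)
        truth[h, lo:hi] = 1                              # line of width 2*radius+1
    return truth

truth = make_truth_map([100, 120, 140], [200, 300, 400], height=480, width=640)
print(truth.sum())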
Further, the step of fusing the effective lane line coordinates and the foreground coordinate points in the foreground and background feature map to obtain predicted coordinates includes:
mapping the coordinates of the effective lane lines to a foreground and background feature map to obtain a coordinate point mapping distance difference value;
selecting, from the effective lane line coordinates, a plurality of coordinate points whose coordinate point mapping distance difference is smaller than the distance threshold;
obtaining an average value according to the coordinate points;
and obtaining the predicted coordinates according to the average value.
In the implementation process, the effective lane line coordinates are mapped to the foreground and background feature images, so that the error of the obtained predicted coordinates can be reduced.
The effective lane line coordinates are extracted according to the regression lane line start point and end point coordinates produced by the full-connection layer, and ineffective lane line coordinates are removed.
Based on the regressed effective lane line coordinates, a distance threshold for lane line coordinate points in the image width direction is given, and the effective lane line coordinates are mapped onto the foreground-background feature map. For each effective lane line coordinate A of each lane line, according to the Hi coordinate corresponding to A, the points B on the foreground-background map whose distance to A in the image width direction (namely the coordinate point mapping distance difference) is smaller than the distance threshold are found in turn. Finally, the average of A and all points B is taken as the predicted coordinate corresponding to the Hi coordinate of that lane line.
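For a single row Hi, the fusion step could look like the following sketch; the threshold value and the helper name fuse_row are illustrative assumptions.

import numpy as np

def fuse_row(w_a, h_i, fg_map, dist_thresh=4):
    # Fuse one regressed coordinate A = (w_a, h_i) with the foreground map:
    # foreground points B on row h_i whose width-direction distance to A is
    # below the threshold are averaged together with A.
    row = fg_map[h_i]
    b_ws = np.flatnonzero(row)                           # foreground columns on row Hi
    b_ws = b_ws[np.abs(b_ws - w_a) < dist_thresh]        # mapping distance difference
    return float(np.mean(np.concatenate([[w_a], b_ws])))

fg = np.zeros((480, 640))
fg[300, 118:122] = 1
print(fuse_row(119.5, 300, fg))                          # average of A and nearby Bs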
Further, the step of evaluating the predicted coordinates to obtain an evaluation result includes:
obtaining the number of correct predicted points and the number of non-zero predicted points according to the predicted coordinate points;
acquiring the number of effective points in lane line data;
obtaining Iou values according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points;
and obtaining the evaluation result according to the Iou value.
In the implementation process, the Iou value is obtained according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points, and is then evaluated, so that the evaluation result fully reflects changes in the number of predicted points and the accuracy of the evaluation result is improved.
The Iou value is calculated from the counted number of correct predicted points, the number of non-zero predicted points and the number of effective points by the formula Iou = N_pred_valid / (N_pred_nzer + N_gt_valid), where N_pred_valid is the number of correct predicted points, N_pred_nzer is the number of non-zero predicted points, and N_gt_valid is the number of effective points.
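Written as code, the formula is a one-liner; the counts themselves come from the per-point validity test described next.

def lane_iou(n_pred_valid, n_pred_nzer, n_gt_valid):
    # Iou value as defined above: correct predicted points over the sum of
    # non-zero predicted points and effective labeled points.
    return n_pred_valid / (n_pred_nzer + n_gt_valid)

print(lane_iou(30, 35, 32))   # about 0.448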
Further, the step of obtaining the correct predicted point number and the non-zero predicted point number according to the predicted coordinate points includes:
obtaining a truth lane line tangent angle;
obtaining the lane line normal direction distance according to the truth lane line tangent angle;
judging whether the lane line normal direction distance is larger than the preset distance of the predicted coordinate point;
if yes, the predicted coordinate point is an effective predicted coordinate point, and the effective predicted coordinate points are counted to obtain the number of correct predicted points and the number of non-zero predicted points.
In the implementation process, the number of correct predicted points and the number of non-zero predicted points are obtained according to the truth lane line tangent angle and the preset distance of the predicted coordinate points, so that the number of predicted points can be known more accurately and omissions are avoided.
Specifically, curve fitting is performed on each labeled lane line to obtain the truth lane line tangent angle θ. A distance in the image width direction is given and is converted, according to the truth tangent angle, into the distance in the normal direction of the lane line.
The distance between each predicted point of a predicted lane line and its corresponding labeled point is then calculated; if this distance is smaller than the lane line normal direction distance, the current predicted point is judged to be valid, otherwise it is invalid.
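A sketch of this per-point test follows. The patent does not write out the conversion from the width-direction distance to the normal-direction distance, so the cos(theta) factor used below is an assumption, as are the function name and tolerance value.

import numpy as np

def point_is_valid(pred_w, gt_w, theta, width_allowance=6.0):
    # Tolerance given in the image width direction, converted to the
    # lane-normal direction with the truth tangent angle theta (radians,
    # measured from the H axis). The cos(theta) conversion is assumed.
    normal_allowance = width_allowance * np.cos(theta)
    return abs(pred_w - gt_w) < normal_allowance

print(point_is_valid(121.0, 119.0, np.deg2rad(20)))   # True: within tolerance
print(point_is_valid(130.0, 119.0, np.deg2rad(20)))   # False: too far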
Further, the step of obtaining the evaluation result according to the Iou value includes:
traversing the predicted lane lines;
if the existence mark of the labeled lane line corresponding to a predicted lane line is the first existence mark while the existence mark of the predicted lane line is the second existence mark, judging the predicted lane line to be a false detection, and obtaining the number of falsely detected lane lines;
if the existence mark of the labeled lane line corresponding to the predicted lane line is the second existence mark, obtaining the number of correctly predicted lane lines and the number of missed lane lines according to the Iou value;
obtaining the precision, recall and F1 score according to the number of missed lane lines, the number of correctly predicted lane lines and the number of falsely detected lane lines;
and generating the evaluation result according to the precision, recall and F1 score.
Further, the step of obtaining the number of correctly predicted lane lines and the number of missed lane lines according to the Iou value includes:
judging whether the Iou value is larger than the Iou threshold;
if yes, the labeled lane line corresponding to the predicted lane line with that Iou value is correctly detected, and the number of correctly predicted lane lines is obtained;
if not, the labeled lane line corresponding to the predicted lane line with that Iou value is missed, and the number of missed lane lines is obtained.
Each predicted lane line is traversed in turn. If the existence of the labeled lane line corresponding to the predicted lane line is 0 while the existence of the predicted lane line is 1, the current predicted lane line is judged to be a false detection; otherwise, if the existence of the corresponding labeled lane line is 1, the accuracy of the predicted lane line needs to be evaluated by calculating its Iou value.
If the Iou value is larger than the Iou threshold, the labeled lane line corresponding to the predicted lane line is correctly detected; otherwise the labeled lane line is missed. The numbers of correctly predicted, falsely detected and missed lane lines are counted, and the precision, recall and F1 score of the predicted lane lines are then calculated in turn.
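The final lane-level scores follow directly from the three counts; the sketch below uses the standard precision, recall and F1 definitions, which is how the counted quantities combine.

def line_level_metrics(n_correct, n_false, n_missed):
    # Lane-level scores from the counted correctly predicted, falsely
    # detected and missed lane lines.
    precision = n_correct / (n_correct + n_false)
    recall = n_correct / (n_correct + n_missed)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(line_level_metrics(n_correct=90, n_false=10, n_missed=5))
# (0.9, 0.947..., 0.923...)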
Example two
To carry out the method of the above embodiment and achieve the corresponding functions and technical effects, a lane line recognition apparatus is provided below. As shown in fig. 2, the apparatus includes:
a data obtaining module 1, configured to obtain picture data including lane lines;
the lane line modeling module 2 is used for carrying out lane line modeling according to the picture data to obtain lane line data;
the regression processing module 3 is used for carrying out regression processing on the lane line data to obtain lane line regression coordinates;
and the fusion processing module 4 is used for carrying out fusion processing on the lane line regression coordinates and the foreground background feature map to obtain a recognition result.
In the implementation process, regression processing is performed on the lane line data and the lane line regression coordinates are then fused, so that lane lines in pictures can be accurately identified and evaluated, high-precision lane line identification is achieved, lane line conditions in complex scenes can be accurately evaluated, the real-time requirement can be met, and identification and evaluation can be carried out more quickly and accurately.
Further, the regression processing module 3 is further configured to:
carrying out feature extraction on lane line data containing a first existence mark in the lane line data to obtain a first feature map;
performing convolution operation on the first feature map to obtain a first lane line pixel feature;
inputting the pixel characteristics of the first lane line into the full-connection layer to obtain lane line characteristic data;
and carrying out regression processing on the lane line characteristic data to obtain lane line regression coordinates.
In the implementation process, the lane line data containing the first existence mark is subjected to feature extraction and convolution, and then is input into the full-connection layer, so that the lane line regression coordinates can be accurately obtained, and the data precision is improved.
Further, the fusion processing module 4 is further configured to:
acquiring a regression lane line starting point coordinate and a regression lane line ending point coordinate in lane line regression coordinates;
extracting effective lane line coordinates in the initial point coordinates of the regression lane lines and the final point coordinates of the regression lane lines;
obtaining a foreground background feature map;
fusing the effective lane line coordinates and foreground coordinate points in the foreground background feature map to obtain predicted coordinates;
evaluating the predicted coordinates to obtain an evaluation result;
and taking the predicted coordinates and the evaluation result as the recognition result.
In the implementation process, the effective lane line coordinates in the initial point coordinates and the final point coordinates of the regression lane line are extracted, so that the calculation efficiency can be improved, the time is shortened, and meanwhile, the accuracy of the predicted coordinates is not influenced.
Further, the fusion processing module 4 is further configured to:
carrying out feature extraction on lane line data containing a second existence mark in the lane line data to obtain a second feature map;
performing convolution operation on the second feature map to obtain a second lane line pixel feature;
obtaining a truth map;
and performing supervised training on the second lane line pixel features against the truth map through the first loss function to obtain the foreground-background feature map.
In the implementation process, the lane line data containing the second existence mark undergoes feature extraction and convolution and is then supervised against the truth map, so that the foreground-background feature map can be obtained accurately and errors in the calculation process are reduced.
Further, the fusion processing module 4 is further configured to:
performing interpolation filling on the lane line coordinates containing existence marks in the lane line data to obtain a lane line curve;
and resetting the pixel coordinate points in the lane line curve to obtain the truth map.
In the implementation process, the lane line coordinates containing the existence marks in the lane line data are interpolated and filled, so that the obtained lane line curve can accurately reflect the change relation between the pixel coordinate points in the lane line, and the accuracy is improved.
Further, the fusion processing module 4 is further configured to:
mapping the coordinates of the effective lane lines to the foreground and background feature map to obtain a coordinate point mapping distance difference value;
selecting, from the effective lane line coordinates, a plurality of coordinate points whose coordinate point mapping distance difference is smaller than the distance threshold;
obtaining an average value according to the coordinate points;
and obtaining the predicted coordinates according to the average value.
In the implementation process, the effective lane line coordinates are mapped onto the foreground and background feature images for fusion, so that the error of the obtained predicted coordinates can be reduced.
Further, the apparatus also includes an evaluation module for:
obtaining the number of correct predicted points and the number of non-zero predicted points according to the predicted coordinate points;
acquiring the number of effective points in lane line data;
obtaining Iou values according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points;
and obtaining the evaluation result according to the Iou value.
In the implementation process, the Iou value is obtained according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points, and evaluation is then carried out according to the Iou value, so that the evaluation result fully reflects changes in the number of predicted points and the accuracy of the evaluation result is improved.
Further, the evaluation module is further configured to:
obtaining a truth lane line tangent angle;
obtaining the lane line normal direction distance according to the truth lane line tangent angle;
judging whether the lane line normal direction distance is larger than the preset distance of the predicted coordinate point;
if yes, the predicted coordinate point is an effective predicted coordinate point, and the effective predicted coordinate points are counted to obtain the number of correct predicted points and the number of non-zero predicted points.
In the implementation process, the number of correct predicted points and the number of non-zero predicted points are obtained according to the truth lane line tangent angle and the preset distance of the predicted coordinate points, so that the number of predicted points can be known more accurately and omissions are avoided.
Further, the evaluation module is further configured to:
traversing the predicted lane lines;
if the existence mark of the labeled lane line corresponding to a predicted lane line is the first existence mark while the existence mark of the predicted lane line is the second existence mark, judging the predicted lane line to be a false detection, and obtaining the number of falsely detected lane lines;
if the existence mark of the labeled lane line corresponding to the predicted lane line is the second existence mark, obtaining the number of correctly predicted lane lines and the number of missed lane lines according to the Iou value;
obtaining the precision, recall and F1 score according to the number of missed lane lines, the number of correctly predicted lane lines and the number of falsely detected lane lines;
and generating the evaluation result according to the precision, recall and F1 score.
The evaluation module is further configured to obtain Iou values according to the number of correct predicted points, the number of non-zero predicted points, and the number of valid points by the following formula:
Iou = N_pred_valid / (N_pred_nzer + N_gt_valid);
wherein, N_pred_valid is the number of correct predicted points, N_pred_nzer is the number of non-zero predicted points, and N_gt_valid is the number of valid points.
Further, the evaluation module is further configured to:
judging whether the Iou value is larger than the Iou threshold;
if yes, the labeled lane line corresponding to the predicted lane line with that Iou value is correctly detected, and the number of correctly predicted lane lines is obtained;
if not, the labeled lane line corresponding to the predicted lane line with that Iou value is missed, and the number of missed lane lines is obtained.
The lane line recognition device described above may implement the method of the first embodiment described above. The options in the first embodiment described above also apply to this embodiment, and are not described in detail here.
The rest of the embodiments of the present application may refer to the content of the first embodiment, and in this embodiment, no further description is given.
Example III
An embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is configured to store a computer program, and the processor is configured to execute the computer program to cause the electronic device to execute the lane line identification method of the first embodiment.
Alternatively, the electronic device may be a server.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the application. The electronic device may include a processor 31, a communication interface 32, a memory 33, and at least one communication bus 34. Wherein the communication bus 34 is used to enable direct connection communication of these components. The communication interface 32 of the device in the embodiment of the present application is used for performing signaling or data communication with other node devices. The processor 31 may be an integrated circuit chip with signal processing capabilities.
In addition, the embodiment of the application also provides a computer readable storage medium, which stores a computer program, and the computer program realizes the lane line identification method of the first embodiment when being executed by a processor.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the method described in the method embodiments.

Claims (6)

CN202310822241.3A, filed 2023-07-06 (priority date 2023-07-06): Lane line identification method and device, electronic equipment and storage medium. Status: Active. Granted as CN116543365B (en).

Publications (2)
CN116543365A, published 2023-08-04
CN116543365B, granted 2023-10-10

Family
ID: 87451063, country CN





Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
