Disclosure of Invention
The embodiments of the present application aim to provide a lane line recognition method, a lane line recognition apparatus, an electronic device, and a storage medium, which can accurately identify and evaluate lane lines in pictures, achieve high-precision lane line recognition, accurately evaluate lane line conditions in complex scenes, and meet the real-time requirement, so that lane lines are evaluated more quickly and accurately.
In a first aspect, an embodiment of the present application provides a lane line recognition method, where the method includes:
obtaining picture data containing lane lines;
carrying out lane line modeling according to the picture data to obtain lane line data;
carrying out regression processing on the lane line data to obtain lane line regression coordinates;
and carrying out fusion processing on the lane line regression coordinates and a foreground-background feature map to obtain a recognition result.
In the implementation process, regression processing is carried out on the lane line data and the resulting lane line regression coordinates are then fused, so that lane lines in pictures can be accurately identified and evaluated: high-precision lane line recognition is achieved, lane line conditions in complex scenes are accurately evaluated, and the real-time requirement is met, so that lane line recognition and evaluation are carried out more quickly and accurately.
Further, the step of performing regression processing on the lane line data to obtain lane line regression coordinates includes:
carrying out feature extraction on lane line data containing a first existence mark in the lane line data to obtain a first feature map;
performing convolution operation on the first feature map to obtain a first lane line pixel feature;
inputting the first lane line pixel features into a fully connected layer to obtain lane line feature data;
and carrying out regression processing on the lane line feature data to obtain the lane line regression coordinates.
In the implementation process, the lane line data containing the first existence mark is subjected to feature extraction and convolution and then input into the fully connected layer, so that the lane line regression coordinates can be accurately obtained and the data precision is improved.
Further, the step of fusing the lane line regression coordinates and the foreground-background feature map to obtain a recognition result includes:
acquiring regression lane line start point coordinates and regression lane line end point coordinates from the lane line regression coordinates;
extracting effective lane line coordinates from the regression lane line start point coordinates and the regression lane line end point coordinates;
obtaining the foreground-background feature map;
fusing the effective lane line coordinates with foreground coordinate points in the foreground-background feature map to obtain predicted coordinates;
evaluating the predicted coordinates to obtain an evaluation result;
and taking the predicted coordinates and the evaluation result as the recognition result.
In the implementation process, the effective lane line coordinates are extracted from the regression lane line start point coordinates and end point coordinates, which improves calculation efficiency and shortens time without affecting the accuracy of the predicted coordinates.
Further, the step of obtaining the foreground-background feature map includes:
carrying out feature extraction on lane line data containing a second existence mark in the lane line data to obtain a second feature map;
performing convolution operation on the second feature map to obtain a second lane line pixel feature;
obtaining a truth map;
and performing supervised training on the second lane line pixel features against the truth map through a first loss function to obtain the foreground-background feature map.
In the implementation process, the lane line data containing the second existence mark is subjected to feature extraction and convolution and then supervised against the truth map, so that the foreground-background feature map can be accurately obtained and errors in the calculation process are reduced.
Further, the step of obtaining the truth map includes:
interpolation filling is carried out on lane line coordinates containing the existence marks in the lane line data, so that a lane line curve is obtained;
resetting the pixel coordinate points in the lane line curve to obtain the truth map.
In the implementation process, interpolation filling is performed on the lane line coordinates containing the existence marks in the lane line data, so that the obtained lane line curve accurately reflects the variation between pixel coordinate points on the lane line, and accuracy is improved.
Further, the step of fusing the effective lane line coordinates with the foreground coordinate points in the foreground-background feature map to obtain predicted coordinates includes:
mapping the effective lane line coordinates onto the foreground-background feature map to obtain coordinate point mapping distance differences;
selecting a plurality of coordinate points whose coordinate point mapping distance difference from the effective lane line coordinates is smaller than a distance threshold;
obtaining an average value according to the coordinate points;
and obtaining the predicted coordinates according to the average value.
In the implementation process, the effective lane line coordinates are mapped onto the foreground-background feature map for fusion, which reduces the error of the obtained predicted coordinates.
Further, the step of evaluating the predicted coordinates to obtain an evaluation result includes:
obtaining the number of correct predicted points and the number of non-zero predicted points according to the predicted coordinate points;
acquiring the number of effective points in the lane line data;
obtaining an IoU value according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points;
and obtaining the evaluation result according to the IoU value.
In the implementation process, the IoU value is obtained according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points, and is then evaluated, so that the evaluation result fully reflects changes in the number of predicted points and the accuracy of the evaluation result is improved.
Further, the step of obtaining the number of correct predicted points and the number of non-zero predicted points according to the predicted coordinate points includes:
obtaining a truth lane line tangent angle;
obtaining the lane line normal-direction distance according to the truth lane line tangent angle;
judging whether the lane line normal-direction distance is greater than the preset distance of the predicted coordinate point;
if so, the predicted coordinate point is an effective predicted coordinate point, and the effective predicted coordinate points are counted to obtain the number of correct predicted points and the number of non-zero predicted points.
In the implementation process, the number of correct predicted points and the number of non-zero predicted points are obtained according to the truth lane line tangent angle and the preset distance of the predicted coordinate points, so that the number of predicted points is known more accurately and omissions are avoided.
Further, the step of obtaining the evaluation result according to the IoU value includes:
traversing a predicted lane line;
if the existence mark of the marked lane line corresponding to the predicted lane line is the first existence mark and the existence mark of the predicted lane line is the second existence mark, judging the predicted lane line to be a false detection, and obtaining the number of false detection lane lines;
if the existence mark of the marked lane line corresponding to the predicted lane line is the second existence mark, obtaining the number of correctly predicted lane lines and the number of missed detection lane lines according to the IoU value;
obtaining the accuracy rate, recall rate and F1 score according to the number of missed detection lane lines, the number of correctly predicted lane lines and the number of false detection lane lines;
and generating the evaluation result according to the accuracy rate, the recall rate and the F1 score.
Further, the IoU value is obtained from the number of correct predicted points, the number of non-zero predicted points, and the number of valid points by the following formula:
IoU = N_pred_valid / (N_pred_nzer + N_gt_valid);
where N_pred_valid is the number of correct predicted points, N_pred_nzer is the number of non-zero predicted points, and N_gt_valid is the number of valid points.
Further, the step of obtaining the number of correctly predicted lane lines and the number of missed detection lane lines according to the IoU value includes:
judging whether the IoU value is greater than an IoU threshold;
if so, the marked lane line corresponding to the predicted lane line of that IoU value is correctly detected, and the number of correctly predicted lane lines is obtained;
if not, the marked lane line corresponding to the predicted lane line of that IoU value is missed, and the number of missed detection lane lines is obtained.
In a second aspect, an embodiment of the present application further provides a lane line recognition apparatus, where the apparatus includes:
the data acquisition module is used for acquiring picture data containing lane lines;
the lane line modeling module is used for carrying out lane line modeling according to the picture data to obtain lane line data;
the regression processing module is used for carrying out regression processing on the lane line data to obtain lane line regression coordinates;
and the fusion processing module is used for carrying out fusion processing on the lane line regression coordinates and the foreground-background feature map to obtain a recognition result.
In the implementation process, regression processing is carried out on the lane line data and the resulting lane line regression coordinates are then fused, so that lane lines in pictures can be accurately identified and evaluated: high-precision lane line recognition is achieved, lane line conditions in complex scenes are accurately evaluated, and the real-time requirement is met, so that lane line recognition and evaluation are carried out more quickly and accurately.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to any one of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform the method according to any one of the first aspect.
Additional features and advantages of the disclosure will be set forth in the description that follows, will in part be obvious from the description, or may be learned by practice of the techniques of the disclosure; they may be realized and obtained by means of the embodiments particularly pointed out in the following detailed description of the preferred embodiments of the application, taken in conjunction with the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
The following describes in further detail the embodiments of the present application with reference to the drawings and examples. The following examples are illustrative of the application and are not intended to limit the scope of the application.
Example 1
Fig. 1 is a flow chart of a lane line recognition method according to an embodiment of the present application, as shown in fig. 1, the method includes:
S1, obtaining picture data containing lane lines;
S2, carrying out lane line modeling according to the picture data to obtain lane line data;
S3, carrying out regression processing on the lane line data to obtain lane line regression coordinates;
S4, carrying out fusion processing on the lane line regression coordinates and the foreground-background feature map to obtain a recognition result.
In the implementation process, regression processing is carried out on the lane line data and the resulting lane line regression coordinates are then fused, so that lane lines in pictures can be accurately identified and evaluated: high-precision lane line recognition is achieved, lane line conditions in complex scenes are accurately evaluated, and the real-time requirement is met, so that lane line recognition and evaluation are carried out more quickly and accurately.
In S2, the embodiment of the present application models the labeled picture data. The labeling mode collects the lane lines covered by the lane to the left of the vehicle, the lane occupied by the vehicle, and the lane to the right of the vehicle, and describes the i-th lane line as a point set ordered from left to right, specifically: Line_i = {(W1, H1), (W2, H2), …, (Wm, Hm)}, where i ∈ {1, …, N}, N is the number of lane lines to be learned in one picture, and m is the number of lane line coordinate points of each lane line.
The lane line modeling adopts sampling at equal intervals along H. When a lane line is missing from the picture, its existence is marked 0; in this embodiment of the application, the first existence mark is 1 and the second existence mark is 0. When points of a lane line are missing, the missing lane line coordinate points are marked -2.
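As a concrete illustration, the following minimal Python sketch shows one way the modeling described above could be implemented. It is a sketch under stated assumptions: the function and constant names (model_lanes, N_LANES, N_POINTS) are hypothetical, and the lane and point counts are illustrative rather than values taken from the application.

```python
import numpy as np

N_LANES = 4           # N: lane lines learned per picture (illustrative)
N_POINTS = 72         # m: coordinate points per lane line (illustrative)
MISSING_POINT = -2.0  # mark for a missing lane line coordinate point, per the text

def model_lanes(annotations, img_h, n_points=N_POINTS):
    """Resample annotated lane lines at equal intervals along H.

    `annotations` is a list of point sets Line_i = [(W1, H1), ..., (Wm, Hm)],
    ordered left to right; a lane line missing from the picture is None.
    Returns per-lane W coordinates at fixed H rows and an existence mark
    (1 = present, 0 = missing, per the embodiment above).
    """
    sample_h = np.linspace(0, img_h - 1, n_points)   # equal H-interval sampling
    lanes = np.full((N_LANES, n_points), MISSING_POINT)
    exist = np.zeros(N_LANES, dtype=np.int64)
    for i, line in enumerate(annotations[:N_LANES]):
        if line is None:                             # lane absent: existence mark 0
            continue
        pts = np.asarray(line, dtype=np.float64)
        order = np.argsort(pts[:, 1])                # sort by H, interpolate W over H
        lanes[i] = np.interp(sample_h, pts[order, 1], pts[order, 0],
                             left=MISSING_POINT, right=MISSING_POINT)
        exist[i] = 1                                 # lane present: existence mark 1
    return lanes, exist
```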
Further, S3 includes:
carrying out feature extraction on lane line data containing a first existence mark in the lane line data to obtain a first feature map;
performing convolution operation on the first feature map to obtain a first lane line pixel feature;
inputting the first lane line pixel features into the fully connected layer to obtain lane line feature data;
and carrying out regression processing on the lane line feature data to obtain lane line regression coordinates.
In the implementation process, the lane line data containing the first existence mark is subjected to feature extraction and convolution and then input into the fully connected layer, so that the lane line regression coordinates can be accurately obtained and the data precision is improved.
In the embodiment of the application, ResNet is adopted as the backbone of the network structure to extract features from the lane line data containing the first existence mark, and an FPN is then used to carry out the regression processing and fusion processing.
The Feat[0] output of the FPN is convolved to collect features near each lane line pixel, and a fully connected layer then extracts the lane line feature data, including: lane line coordinate features, lane line existence features, lane line start point coordinate features, and lane line end point coordinate features. The lane line features extracted by the fully connected layer are arranged in a fixed physical order to preserve the mapping between the output lane line feature data and the physical lane line IDs.
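A minimal PyTorch sketch of this regression branch is given below. It assumes a recent torchvision and uses the stock ResNet-18/FPN builder in place of the unspecified backbone configuration; the channel widths, head layout, and input size are illustrative assumptions, not the application's exact network.

```python
import torch
import torch.nn as nn
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

class LaneRegressionBranch(nn.Module):
    """ResNet backbone -> FPN -> convolution on Feat[0] -> fully connected head."""

    def __init__(self, n_lanes=4, n_points=72):
        super().__init__()
        # ResNet-18 with FPN; torchvision's FPN emits 256-channel maps keyed "0".."3"
        self.backbone = resnet_fpn_backbone(backbone_name="resnet18", weights=None)
        self.conv = nn.Conv2d(256, 8, kernel_size=3, padding=1)  # lane pixel features
        # per lane: n_points W coordinates + existence + start point + end point
        self.fc = nn.LazyLinear(n_lanes * (n_points + 3))

    def forward(self, x):
        feats = self.backbone(x)                  # OrderedDict of FPN levels
        pix = torch.relu(self.conv(feats["0"]))   # Feat[0]: finest FPN level
        return self.fc(torch.flatten(pix, 1))     # fixed layout keeps the lane-ID mapping

# usage: a (1, 4 * 75) tensor of lane line feature data
out = LaneRegressionBranch()(torch.randn(1, 3, 288, 800))
```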
Further, S4 includes:
acquiring regression lane line start point coordinates and regression lane line end point coordinates from the lane line regression coordinates;
extracting effective lane line coordinates from the regression lane line start point coordinates and the regression lane line end point coordinates;
obtaining a foreground-background feature map;
fusing the effective lane line coordinates with foreground coordinate points in the foreground-background feature map to obtain predicted coordinates;
evaluating the predicted coordinates to obtain an evaluation result;
and taking the predicted coordinates and the evaluation result as the recognition result.
In the implementation process, the effective lane line coordinates are extracted from the regression lane line start point coordinates and end point coordinates, which improves calculation efficiency and shortens time without affecting the accuracy of the predicted coordinates.
Further, the step of obtaining the foreground-background feature map includes:
carrying out feature extraction on lane line data containing a second existence mark in the lane line data to obtain a second feature map;
performing convolution operation on the second feature map to obtain a second lane line pixel feature;
obtaining a truth map;
and performing supervised training on the second lane line pixel features against the truth map through the first loss function to obtain the foreground-background feature map.
In the implementation process, the lane line data containing the second existence mark is subjected to feature extraction and convolution and then supervised against the truth map, so that the foreground-background feature map can be accurately obtained and errors in the calculation process are reduced.
Extracting the Feat[1] second feature map can enhance the lane line feature representation. The Feat[1] second feature map output by the FPN is taken as input, a convolution operation is performed, and supervised training is carried out against the truth map through a Focal Loss function (the first loss function) to obtain the foreground-background feature map.
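A hedged sketch of this supervision step follows; the 1×1 convolution width and the focal loss hyperparameters alpha and gamma are common defaults assumed for illustration, not values given in the application.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, truth_map, alpha=0.25, gamma=2.0):
    """Binary focal loss between predicted logits and the 0/1 truth map."""
    ce = F.binary_cross_entropy_with_logits(logits, truth_map, reduction="none")
    prob = torch.sigmoid(logits)
    p_t = prob * truth_map + (1 - prob) * (1 - truth_map)    # prob of the true class
    a_t = alpha * truth_map + (1 - alpha) * (1 - truth_map)  # class balancing
    return (a_t * (1 - p_t) ** gamma * ce).mean()

seg_conv = torch.nn.Conv2d(256, 1, kernel_size=1)   # convolution over Feat[1]
feat1 = torch.randn(2, 256, 64, 128)                # stand-in for the FPN's Feat[1]
truth = (torch.rand(2, 1, 64, 128) > 0.95).float()  # stand-in 0/1 truth map
loss = focal_loss(seg_conv(feat1), truth)           # first-loss-function supervision
fg_bg_map = torch.sigmoid(seg_conv(feat1))          # foreground-background feature map
```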
Further, the step of obtaining the truth map includes:
interpolation filling is carried out on lane line coordinates containing the existence marks in the lane line data, so that a lane line curve is obtained;
and resetting the pixel coordinate points in the lane line curve to obtain the truth map.
In the implementation process, interpolation filling is performed on the lane line coordinates containing the existence marks in the lane line data, so that the obtained lane line curve accurately reflects the variation between pixel coordinate points on the lane line, and accuracy is improved.
Curve fitting is performed on the labeled lane line coordinates, and the curve is filled as a line whose width is given by a radius ε. The pixel coordinate points on the fitted lane line curve are set to 1 and all other points are set to 0, so as to obtain the truth map.
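Under these steps, truth-map generation might look like the following sketch, which assumes OpenCV for the line-filling step; the default radius value is illustrative.

```python
import cv2
import numpy as np

def make_truth_map(lines, img_h, img_w, epsilon=2):
    """Rasterize labeled lane lines Line_i = [(W1, H1), ...] into a 0/1 truth map."""
    truth = np.zeros((img_h, img_w), dtype=np.uint8)
    for line in lines:
        if line is None or len(line) < 2:         # skip lane lines marked absent
            continue
        pts = np.asarray(line, dtype=np.float64)
        pts = pts[np.argsort(pts[:, 1])]          # sort by H for interpolation
        hs = np.arange(pts[0, 1], pts[-1, 1] + 1)
        ws = np.interp(hs, pts[:, 1], pts[:, 0])  # interpolation filling of the curve
        poly = np.stack([ws, hs], axis=1).astype(np.int32)
        cv2.polylines(truth, [poly], isClosed=False, color=1,
                      thickness=2 * epsilon + 1)  # line width from the radius epsilon
    return truth
```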
Further, the step of fusing the effective lane line coordinates with the foreground coordinate points in the foreground-background feature map to obtain predicted coordinates includes:
mapping the effective lane line coordinates onto the foreground-background feature map to obtain coordinate point mapping distance differences;
selecting a plurality of coordinate points whose coordinate point mapping distance difference from the effective lane line coordinates is smaller than a distance threshold;
obtaining an average value according to the coordinate points;
and obtaining the predicted coordinates according to the average value.
In the implementation process, the effective lane line coordinates are mapped onto the foreground-background feature map, which reduces the error of the obtained predicted coordinates.
The effective lane line coordinates are extracted according to the regression lane line start point and end point coordinates produced by the fully connected layer, and ineffective lane line coordinates are removed.
Based on the regressed effective lane line coordinates, a distance threshold for lane line coordinate points in the image width direction is given, and the effective lane line coordinates are mapped onto the foreground-background feature map. For each effective lane line coordinate A of each lane line, according to its corresponding Hi coordinate, the points B on the foreground-background map at the same Hi whose distance from A in the image width direction (namely, the coordinate point mapping distance difference) is smaller than the distance threshold are found in sequence. Finally, the average of A and all points B is taken as the predicted coordinate corresponding to the Hi coordinate under that lane line.
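In a minimal sketch, the per-row fusion rule reads as follows; the function name and the default width-direction threshold are illustrative assumptions.

```python
import numpy as np

def fuse_lane_point(w_a, h_i, fg_map, dist_thresh=8):
    """Predicted W coordinate for row Hi of one lane line.

    `w_a` is the regressed effective coordinate A at row `h_i`, and `fg_map`
    is the binary foreground-background map (H x W).
    """
    fg_cols = np.flatnonzero(fg_map[int(h_i)] > 0)  # foreground W coordinates at Hi
    diffs = np.abs(fg_cols - w_a)                   # coordinate point mapping distance
    near = fg_cols[diffs < dist_thresh]             # the points B under the threshold
    if near.size == 0:
        return float(w_a)                           # no foreground support: keep A
    return float(np.mean(np.concatenate(([w_a], near))))  # average of A and all B
```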
Further, the step of evaluating the predicted coordinates to obtain an evaluation result includes:
obtaining the number of correct predicted points and the number of non-zero predicted points according to the predicted coordinate points;
acquiring the number of effective points in lane line data;
obtaining an IoU value according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points;
and obtaining the evaluation result according to the IoU value.
In the implementation process, the IoU value is obtained according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points, and is then evaluated, so that the evaluation result fully reflects changes in the number of predicted points and the accuracy of the evaluation result is improved.
The IoU value is calculated from the counted number of correct predicted points, number of non-zero predicted points, and number of valid points by the formula IoU = N_pred_valid / (N_pred_nzer + N_gt_valid),
where N_pred_valid represents the number of correct predicted points, N_pred_nzer represents the number of non-zero predicted points, and N_gt_valid represents the number of valid points.
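This point-level ratio transcribes directly into code; the zero-denominator guard is an added assumption.

```python
def lane_iou(n_pred_valid: int, n_pred_nzer: int, n_gt_valid: int) -> float:
    """IoU = N_pred_valid / (N_pred_nzer + N_gt_valid), as defined above."""
    denom = n_pred_nzer + n_gt_valid
    return n_pred_valid / denom if denom > 0 else 0.0
```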
Further, the step of obtaining the number of correct predicted points and the number of non-zero predicted points according to the predicted coordinate points includes:
obtaining a truth lane line tangent angle;
obtaining the lane line normal-direction distance according to the truth lane line tangent angle;
judging whether the lane line normal-direction distance is greater than the preset distance of the predicted coordinate point;
if so, the predicted coordinate point is an effective predicted coordinate point, and the effective predicted coordinate points are counted to obtain the number of correct predicted points and the number of non-zero predicted points.
In the implementation process, the number of correct predicted points and the number of non-zero predicted points are obtained according to the truth lane line tangent angle and the preset distance of the predicted coordinate points, so that the number of predicted points is known more accurately and omissions are avoided.
Specifically, curve fitting is performed on each labeled lane line to obtain the truth lane line tangent angle θ, and a distance in the lane line normal direction is given; according to the truth lane line tangent angle, a distance in the image width direction is converted into a distance in the lane line normal direction.
A preset distance between each predicted point of the predicted lane line and its corresponding labeled point is then calculated; if the preset distance is smaller than the lane line normal-direction distance, the current predicted point is judged to be valid, otherwise it is judged to be invalid.
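A minimal sketch of this validity test is shown below. Projecting the width-direction error onto the lane normal with cos(theta), with theta measured from the image H axis, is an assumption of the sketch; the application states only that the conversion follows the truth lane line tangent angle.

```python
import numpy as np

def point_is_valid(w_pred, w_gt, theta, normal_dist=4.0):
    """True if a predicted point lies within the normal-direction tolerance.

    `w_pred` and `w_gt` are W coordinates at the same H row; `theta` is the
    truth lane line tangent angle at that row (assumed measured from the H axis).
    """
    width_err = abs(w_pred - w_gt)           # error along the image width direction
    normal_err = width_err * np.cos(theta)   # projected onto the lane normal (assumed)
    return normal_err < normal_dist
```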
Further, the step of obtaining the evaluation result according to the IoU value includes:
traversing a predicted lane line;
if the existence mark of the marked lane line corresponding to the predicted lane line is the first existence mark and the existence mark of the predicted lane line is the second existence mark, judging the predicted lane line to be a false detection, and obtaining the number of false detection lane lines;
if the existence mark of the marked lane line corresponding to the predicted lane line is the second existence mark, obtaining the number of correctly predicted lane lines and the number of missed detection lane lines according to the IoU value;
obtaining the accuracy rate, recall rate and F1 score according to the number of missed detection lane lines, the number of correctly predicted lane lines and the number of false detection lane lines;
and generating the evaluation result according to the accuracy rate, the recall rate and the F1 score.
Further, the step of obtaining the number of correctly predicted lane lines and the number of missed detection lane lines according to the IoU value includes:
judging whether the IoU value is greater than an IoU threshold;
if so, the marked lane line corresponding to the predicted lane line of that IoU value is correctly detected, and the number of correctly predicted lane lines is obtained;
if not, the marked lane line corresponding to the predicted lane line of that IoU value is missed, and the number of missed detection lane lines is obtained.
Each predicted lane line is traversed in sequence. If the existence mark of the marked lane line corresponding to the predicted lane line is 0 and the existence mark of the predicted lane line is 1, the current predicted lane line is judged to be a false detection; otherwise, if the existence mark of the corresponding marked lane line is 1, the accuracy of the predicted lane line is evaluated by calculating its IoU value.
If the IoU value is greater than the IoU threshold, the marked lane line corresponding to the predicted lane line is correctly detected; otherwise, the marked lane line is missed. The numbers of correctly predicted, falsely detected, and missed lane lines are counted, and the accuracy rate, recall rate, and F1 score of the predicted lane lines are calculated in sequence.
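The traversal above can be sketched as follows, assuming existence marks of 0 (absent) and 1 (present) as in this embodiment; the variable names and the default IoU threshold are illustrative.

```python
def evaluate_lanes(pred_exists, gt_exists, ious, iou_thresh=0.5):
    """Lane-level evaluation: returns (accuracy rate, recall rate, F1 score)."""
    tp = fp = fn = 0
    for pred_ex, gt_ex, iou in zip(pred_exists, gt_exists, ious):
        if gt_ex == 0 and pred_ex == 1:
            fp += 1                      # false detection
        elif gt_ex == 1:
            if iou > iou_thresh:
                tp += 1                  # marked lane line correctly detected
            else:
                fn += 1                  # marked lane line missed
    accuracy = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * accuracy * recall / (accuracy + recall)
          if accuracy + recall else 0.0)
    return accuracy, recall, f1
```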
Example 2
In order to execute the method of the above embodiment and achieve the corresponding functions and technical effects, a lane line recognition apparatus is provided below; as shown in fig. 2, the apparatus includes:
the data acquisition module 1, configured to acquire picture data containing lane lines;
the lane line modeling module 2 is used for carrying out lane line modeling according to the picture data to obtain lane line data;
the regression processing module 3 is used for carrying out regression processing on the lane line data to obtain lane line regression coordinates;
and the fusion processing module 4 is used for carrying out fusion processing on the lane line regression coordinates and the foreground-background feature map to obtain a recognition result.
In the implementation process, regression processing is carried out on the lane line data and the resulting lane line regression coordinates are then fused, so that lane lines in pictures can be accurately identified and evaluated: high-precision lane line recognition is achieved, lane line conditions in complex scenes are accurately evaluated, and the real-time requirement is met, so that lane line recognition and evaluation are carried out more quickly and accurately.
Further, the regression processing module 3 is further configured to:
carrying out feature extraction on lane line data containing a first existence mark in the lane line data to obtain a first feature map;
performing convolution operation on the first feature map to obtain a first lane line pixel feature;
inputting the first lane line pixel features into the fully connected layer to obtain lane line feature data;
and carrying out regression processing on the lane line feature data to obtain lane line regression coordinates.
In the implementation process, the lane line data containing the first existence mark is subjected to feature extraction and convolution and then input into the fully connected layer, so that the lane line regression coordinates can be accurately obtained and the data precision is improved.
Further, the fusion processing module 4 is further configured to:
acquiring regression lane line start point coordinates and regression lane line end point coordinates from the lane line regression coordinates;
extracting effective lane line coordinates from the regression lane line start point coordinates and the regression lane line end point coordinates;
obtaining a foreground-background feature map;
fusing the effective lane line coordinates with foreground coordinate points in the foreground-background feature map to obtain predicted coordinates;
evaluating the predicted coordinates to obtain an evaluation result;
and taking the predicted coordinates and the evaluation result as the recognition result.
In the implementation process, the effective lane line coordinates are extracted from the regression lane line start point coordinates and end point coordinates, which improves calculation efficiency and shortens time without affecting the accuracy of the predicted coordinates.
Further, the fusion processing module 4 is further configured to:
carrying out feature extraction on lane line data containing a second existence mark in the lane line data to obtain a second feature map;
performing convolution operation on the second feature map to obtain a second lane line pixel feature;
obtaining a truth map;
and performing supervised training on the second lane line pixel features against the truth map through the first loss function to obtain the foreground-background feature map.
In the implementation process, the lane line data containing the second existence mark is subjected to feature extraction and convolution and then supervised against the truth map, so that the foreground-background feature map can be accurately obtained and errors in the calculation process are reduced.
Further, the fusion processing module 4 is further configured to:
interpolation filling is carried out on lane line coordinates containing the existence marks in the lane line data, so that a lane line curve is obtained;
and resetting the pixel coordinate points in the lane line curve to obtain the truth map.
In the implementation process, interpolation filling is performed on the lane line coordinates containing the existence marks in the lane line data, so that the obtained lane line curve accurately reflects the variation between pixel coordinate points on the lane line, and accuracy is improved.
Further, the fusion processing module 4 is further configured to:
mapping the effective lane line coordinates onto the foreground-background feature map to obtain coordinate point mapping distance differences;
selecting a plurality of coordinate points whose coordinate point mapping distance difference from the effective lane line coordinates is smaller than a distance threshold;
obtaining an average value according to the coordinate points;
and obtaining the predicted coordinates according to the average value.
In the implementation process, the effective lane line coordinates are mapped onto the foreground-background feature map for fusion, which reduces the error of the obtained predicted coordinates.
Further, the apparatus also includes an evaluation module for:
obtaining the number of correct predicted points and the number of non-zero predicted points according to the predicted coordinate points;
acquiring the number of effective points in lane line data;
obtaining an IoU value according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points;
and obtaining the evaluation result according to the IoU value.
In the implementation process, the IoU value is obtained according to the number of correct predicted points, the number of non-zero predicted points and the number of effective points, and evaluation is then carried out according to the IoU value, so that the evaluation result fully reflects changes in the number of predicted points and the accuracy of the evaluation result is improved.
Further, the evaluation module is further configured to:
obtaining a truth lane line tangent angle;
obtaining the lane line normal-direction distance according to the truth lane line tangent angle;
judging whether the lane line normal-direction distance is greater than the preset distance of the predicted coordinate point;
if so, the predicted coordinate point is an effective predicted coordinate point, and the effective predicted coordinate points are counted to obtain the number of correct predicted points and the number of non-zero predicted points.
In the implementation process, the number of correct predicted points and the number of non-zero predicted points are obtained according to the truth lane line tangent angle and the preset distance of the predicted coordinate points, so that the number of predicted points is known more accurately and omissions are avoided.
Further, the evaluation module is further configured to:
traversing a predicted lane line;
if the existence mark of the marked lane line corresponding to the predicted lane line is the first existence mark and the existence mark of the predicted lane line is the second existence mark, judging the predicted lane line to be a false detection, and obtaining the number of false detection lane lines;
if the existence mark of the marked lane line corresponding to the predicted lane line is the second existence mark, obtaining the number of correctly predicted lane lines and the number of missed detection lane lines according to the IoU value;
obtaining the accuracy rate, recall rate and F1 score according to the number of missed detection lane lines, the number of correctly predicted lane lines and the number of false detection lane lines;
and generating the evaluation result according to the accuracy rate, the recall rate and the F1 score.
The evaluation module is further configured to obtain the IoU value according to the number of correct predicted points, the number of non-zero predicted points, and the number of valid points by the following formula:
IoU = N_pred_valid / (N_pred_nzer + N_gt_valid);
where N_pred_valid is the number of correct predicted points, N_pred_nzer is the number of non-zero predicted points, and N_gt_valid is the number of valid points.
Further, the evaluation module is further configured to:
judging whether the IoU value is greater than an IoU threshold;
if so, the marked lane line corresponding to the predicted lane line of that IoU value is correctly detected, and the number of correctly predicted lane lines is obtained;
if not, the marked lane line corresponding to the predicted lane line of that IoU value is missed, and the number of missed detection lane lines is obtained.
The lane line recognition apparatus described above can implement the method of the first embodiment. The options in the first embodiment also apply to this embodiment and are not described in detail here.
For the remaining content of this embodiment, reference may be made to the first embodiment, and no further description is given here.
Example 3
An embodiment of the present application provides an electronic device including a memory and a processor, where the memory is configured to store a computer program and the processor is configured to run the computer program so as to cause the electronic device to execute the lane line recognition method of the first embodiment.
Alternatively, the electronic device may be a server.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the application. The electronic device may include a processor 31, a communication interface 32, a memory 33, and at least one communication bus 34. Wherein the communication bus 34 is used to enable direct connection communication of these components. The communication interface 32 of the device in the embodiment of the present application is used for performing signaling or data communication with other node devices. The processor 31 may be an integrated circuit chip with signal processing capabilities.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing a computer program, and the computer program, when executed by a processor, implements the lane line recognition method of the first embodiment.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the method described in the method embodiments.