1Henan Univ. of Technology (China)
*Address all correspondence to Yingying Qiu, sept_30@stu.haut.edu.cn
- 1 Introduction
- 2 Related Work
- 2.1 ConvGRU
- 2.2 Attention Mechanism
- 3 Method
- 3.1 Overall Structure of the Model
- 3.1.1 Encoder–decoder network
- 3.1.2 ConvGRU
- 3.1.3 PSA
- 3.2 Loss Function
- 4 Experimental Results and Analysis
- 4.1 Dataset
- 4.2 Implementation Details
- 4.3 Evaluation Metrics
- 4.4 Experimental Results and Analysis
- 4.4.1 Parameter analysis of ConvGRU
- 4.4.2 Analysis of α and β values in loss function
- 4.4.3 Ablation study
- Quantitative analysis
- Visualized comparison
- 4.4.4 Comparison with U-Net Deformation Algorithms
- Quantitative results comparison
- Visualized comparison
- 4.4.5 Comparison with lane line detection algorithms
- Quantitative results comparison
- Visualized comparison with the single-frame approaches
- Visualization comparison with the multiframe approaches
- 5 Discussion
- 6 Conclusion
- 7 Data Availability Statement
To address lane line detection in difficult traffic conditions, such as shadow occlusion, signpost degradation, curves, and tunnels, numerous models have been proposed. However, most existing models detect on independent single-frame images, which makes it difficult to exploit the continuity of driving images and is ineffective in challenging scenes. To this end, we propose a spatiotemporal information processing model for lane line recognition that enhances critical features. First, to properly learn the correlation between consecutive images, we employ a convolutional gated recurrent unit (ConvGRU) to process spatiotemporal driving information on the basis of U-Net. Second, a pyramid split attention (PSA) module is used to enhance or suppress the obtained feature expressions. Finally, skip connections fuse the multi-scale features encoded at each stage with the features processed by PSA and gradually restore the original image size. Experiments on the TuSimple dataset demonstrate that our model outperforms representative lane line detection networks in challenging driving scenes, with an F1-measure of up to 94.302%.
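The ConvGRU mentioned in the abstract carries the hidden state across consecutive frames using the standard GRU gating equations, with the fully connected products replaced by convolutions so the hidden state keeps its spatial layout. The following is a minimal NumPy sketch of one such recurrence, not the paper's implementation: it uses a single channel, a naive hand-rolled 3x3 "same"-padding convolution, no biases, and illustrative weight names (`xz`, `hz`, etc.).

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 2-D convolution with 'same' zero padding (illustrative only)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def convgru_step(x, h, W):
    """One ConvGRU step on a single-channel H x W feature map:
    z  = sigmoid(Wxz * x + Whz * h)        # update gate
    r  = sigmoid(Wxr * x + Whr * h)        # reset gate
    h~ = tanh(Wxh * x + Whh * (r . h))     # candidate state
    h' = (1 - z) . h + z . h~              # gated blend of old and new state
    (* is convolution, . is element-wise product)
    """
    z = sigmoid(conv2d_same(x, W["xz"]) + conv2d_same(h, W["hz"]))
    r = sigmoid(conv2d_same(x, W["xr"]) + conv2d_same(h, W["hr"]))
    h_tilde = np.tanh(conv2d_same(x, W["xh"]) + conv2d_same(r * h, W["hh"]))
    return (1.0 - z) * h + z * h_tilde

# Run the recurrence over a short sequence of frames, as one would over
# consecutive driving images feeding a decoder.
rng = np.random.default_rng(0)
W = {name: rng.normal(scale=0.1, size=(3, 3))
     for name in ("xz", "hz", "xr", "hr", "xh", "hh")}
frames = [rng.normal(size=(8, 8)) for _ in range(5)]
h = np.zeros((8, 8))
for x in frames:
    h = convgru_step(x, h, W)
print(h.shape)  # hidden state keeps the spatial resolution: (8, 8)
```

Because the candidate state is a tanh and the update is a convex combination, the hidden state stays bounded in [-1, 1] while still preserving the feature map's spatial resolution, which is what lets it be fused with the U-Net skip connections downstream.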