
Method for real-time lane line detection based on vision under complex lighting conditions

Info

Publication number
CN106682586A
Authority
CN
China
Prior art keywords
image
illumination
lane line
value
lane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611098387.4A
Other languages
Chinese (zh)
Inventor
刘宏哲
袁家政
唐正
李超
赵小艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University
Priority to CN201611098387.4A
Publication of CN106682586A
Legal status: Pending

Abstract

The invention provides a method for real-time, vision-based lane line detection under complex lighting conditions, belonging to the fields of computer vision and unmanned intelligent driving. During image preprocessing, illumination estimation and illumination color correction are applied to images captured under different lighting so that they are restored to standard white light. Gaussian filtering removes noise introduced during image acquisition, after which the image is binarized and its edges are extracted; the original image is partitioned into regions during extraction. An improved Hough transform yields candidate lane lines, and a dynamic region of interest (ROI) is built. Hough transform within the dynamic ROI, combined with Kalman filtering, tracks the lane line in real time, constraining and updating the lane line model. A lane-detection failure-judgment module is added to the algorithm to improve reliability. The method is fast and robust, achieves good lane detection under complex lighting conditions, improves a vehicle's ability to recognize lane lines dynamically, and increases the safety of unmanned driving.

Description

A vision-based real-time lane detection method under complex lighting conditions

Technical Field

The invention relates to a vision-based real-time lane detection method under complex lighting conditions, and belongs to the technical fields of autonomous vehicle driving and computer-assisted driving.

Background Art

In recent years, with the continuous growth of highway mileage and the automobile industry, traffic safety has become an increasingly serious problem: there are more and more vehicles on the road, accidents increase year by year, and the casualties and property losses caused by traffic accidents are shocking. To reduce accidents, it has become a trend to use computer-assisted driving systems and similar technical means to ensure driving safety. The first key problem such systems face is detecting lane lines quickly and accurately from on-board video images, which lets vehicles drive correctly according to real-time road conditions and protects both vehicles and pedestrians.

Current lane recognition methods fall into two main categories: image feature methods and model matching methods.

1. The basic idea of image feature methods is to detect lane boundaries or marking lines by the differences in image features, including shape, texture, continuity, grayscale and contrast, between them and the surrounding environment. Donald et al. used the geometric information of lane lines to constrain the Hough transform parameters for lane detection at high speed; Lee proposed a departure warning system that predicts the shift of the lane line direction from an edge distribution function and changes in the vehicle's direction of motion; Mastorakis used the straight-line features of lane lines to screen out the most likely marking lines; Wang and Hu proposed, respectively, exploiting the opposite gradient directions across a lane line and the color features of the lane line region for recognition. These methods borrow techniques such as image segmentation and thresholding and are relatively simple, but shadow occlusion, lighting changes, noise, and discontinuities in lane boundaries or marking lines can all leave the lane unrecognizable.

2. Model matching methods target the strong geometric regularity of structured roads and model lane lines with two- or three-dimensional curves; common two-dimensional lane models are the straight-line model and the parabolic model. After the B-Snake lane model provides an initial localization, the lane detection problem is converted through the road model into determining the control points of a spline curve. Another approach combines the Hough transform with a parabolic model: a straight-line model first yields preliminary parameters of the road marking, and a hyperbolic model then detects the lane line on that basis, with good results. Mechat modeled lane lines with an SVM-based method and used a standard Kalman filter for estimation and tracking. These methods analyze target information in the image to determine the parameters of a road model and are insensitive to road surface conditions, but their high computational complexity makes them costly in time.

Therefore, practical work should combine the image feature method with road model matching to regularize the lane recognition problem.

Summary of the Invention

Existing lane detection techniques have a low recognition rate under complex lighting and do not preprocess the image well enough to correct a distorted image back to standard white light; in addition, existing algorithms are complex, inefficient, and poor in real-time performance. Against these shortcomings, the invention proposes a vision-based real-time lane detection method under complex lighting conditions: the image is illumination-corrected to standard white light, and lane line pixel information is used for lane detection and trend judgment. The algorithm has good real-time performance and detects lane lines efficiently.

To achieve the above goal, the inventors provide an illumination preprocessing method and a lane detection method with the following steps. During image preprocessing, illumination estimation and illumination color correction are performed on images captured under different lighting to restore them to standard white light. Gaussian filtering removes noise introduced during image acquisition; the image is then binarized and its edges are extracted, with the original image partitioned into regions during extraction. An improved Hough transform yields candidate lane lines and a dynamic region of interest (ROI) is built; Hough transform based on the dynamic ROI constrains and updates the lane line model, and a Kalman filter tracks the lane lines in real time. A lane-detection failure-judgment module is added to the algorithm to improve reliability.

On structured highways, lane line information is concentrated in the middle and lower parts of the image; depending on how the camera is mounted, the front of the vehicle may also appear in the image.

The method proceeds as follows. The image is downsampled and a region of interest (ROI) is set. Because adjacent frames of a video are highly correlated, most image information is useless for lane detection; restricting processing to a region of interest useful for lane detection both reduces the computational load of the algorithm and simplifies lane recognition. On structured highways the useful lane information is concentrated in the middle and lower parts of the image, which form the region of interest; depending on the camera mounting, the front of the vehicle may occupy the bottom of the image (0 to 0.1H). Wimage denotes the width of the image and Himage its height. In this way the effective detection area of the image is narrowed.

For lane detection, the region-of-interest image is preprocessed by color correction. The steps are as follows: first the region-of-interest image ψ is obtained from an image acquisition device such as a surveillance camera, and color correction is applied to ψ, giving the corrected image ψ1.

The specific steps are as follows:

The purpose of image illumination estimation is to correct an image taken under unknown lighting conditions to an image under standard white light. Briefly, the illumination color at imaging time is estimated first, and the Von Kries model then maps the image to standard white light, which also yields a better white balance. The procedure divides into the following steps:

(1) Sample block extraction. Sample blocks are first extracted from the image; for each block, the effective illumination falling on it is estimated.

(2) Illumination estimation with existing single-illuminant algorithms. Based on the Grey-Edge color constancy framework, several different color constancy feature extraction methods are generated systematically by varying the parameters.

(3) Clustering of the per-block illumination estimates. Blocks lit by the same illuminant are clustered together to form one large image block so as to produce a more accurate illumination estimate; blocks under the same illuminant cluster more readily into the same group. All illumination estimates are therefore clustered into M classes, where M is the number of illuminants in the scene.

(4) Backward mapping of the clustering result. After the block-based illumination estimates are clustered into M classes (M being the number of illuminants in the scene), the clustering result is mapped back to the original image block by block: pixels belonging to the same sample block belong to the same cluster, which gives the coverage of each illuminant. This yields an illumination map in which each pixel belongs to one of the M illuminants. Through the backward mapping, the illumination estimate of each pixel and the cluster center of its illumination class are obtained.

(5) For regions of overlapping illumination, a Gaussian filter is applied to the classification result of the back-mapped illumination estimates.

(6) Color correction. Using the per-pixel illumination estimates, the input image is corrected to standard illumination, giving the output image under standard light and eliminating the influence of scene illumination. The diagonal model, currently the most common choice, is used to correct the image.

In the image color correction method, step (1) assumes that each image sample block is 5×5 pixels and that the illumination falling on the block is uniformly distributed (only one color of light reaches the block).

In the method of correcting image color by illumination estimation, the selected sample blocks all have the same size and satisfy the following conditions: each block is 5×5 pixels and contains enough illumination color information to estimate accurately the nature of the light falling on it.

The Grey-Edge color constancy framework systematically generates different color constancy feature extraction methods by varying the parameters n, q and σ, where n is the order of the image derivative, q is the Minkowski norm, and σ is the scale of the Gaussian smoothing kernel; ε is a constant in the range [0,1]; f(x) denotes the light value at point x in space, with 0 meaning no reflection and 1 total reflection. In the standard form of this framework, the illuminant estimate e^(n,q,σ) satisfies, up to a scale factor k,

(∫ |∂^n f_σ(x) / ∂x^n|^q dx)^(1/q) = k · e^(n,q,σ).

Within this framework, the image is segmented into many sample blocks. Each block is assumed to be 5×5 pixels with uniformly distributed illumination. On each block, a common single-illuminant color constancy algorithm estimates the illumination value.

For correcting image color by illumination estimation, the following five representative methods are considered.

The five candidate color constancy instantiations form the set Γ = {e^(0,1,0), e^(0,∞,0), e^(0,∞,1), e^(1,1,1), e^(2,1,1)}. The feature of each sample block consists of the illumination estimates of the selected color constancy algorithms.

The feature vector of a sample block can be written F' = [R, G, B], where R, G and B are the color channels of the image. Using the normalized illumination estimate r = R/(R+G+B), g = G/(R+G+B), the feature vector of the block becomes F = [r, g], a 1×2 vector.

In the chromaticity space of the illumination estimates, after the per-block estimates are clustered, the distance from the illumination estimate of the j-th sample block to the i-th cluster center is the Euclidean distance, denoted d_i; d_k denotes the distance to the k-th cluster center, k ∈ [0, M]; Z is the total number of sample blocks. The probability p_(j,i) that the block lies in the i-th illumination region is then computed from these distances.

The coverage probability of the i-th illuminant is accumulated from the p_(j,i), where p_(j,i) denotes the probability that the j-th block is lit by the i-th illuminant and p is the total number of sample blocks in the input image.

To obtain a smooth, continuous illumination distribution, the coverage probability map is filtered. Two filters are used, a Gaussian and a median filter: the Gaussian filter uses spatial position information to compute a per-pixel probability for the extent of each estimated illuminant, while the median filter preserves edge information well, which suits scenes with sharp illumination changes.

The illumination estimate of each image pixel is computed as

I_e(x) = Σ_(i=1..M) m_i(x) · I_(e,i),

where I_e is the illumination estimate over the scene, I_(e,i) is the estimate of the i-th illuminant, m_i(x) is the contribution of the i-th illuminant to the pixel at x, and Z denotes the total number of sample blocks. A large m_i means the i-th illuminant strongly affects the pixel; in particular, m_i(x) = 1 means the pixel lies entirely under the i-th illuminant. The coverage probability map of the illumination is the same size as the input image.

After the illumination estimate of each pixel is obtained, correction proceeds pixel by pixel according to the diagonal model, where f_u(x) is the pixel value at x under the unknown illumination and f_c(x) is the corrected pixel value as it would appear under standard illumination. Λ_(u,c)(x) is the mapping matrix from the unknown illumination to the standard illumination at x: f_c(x) = Λ_(u,c)(x) f_u(x).

The diagonal correction model is

Λ_(u,c)(x) = diag(R_c/R_u(x), G_c/G_u(x), B_c/B_u(x)),

where R_u(x), G_u(x) and B_u(x) are the illumination values estimated for the R, G and B channels at point x in image space, and R_c, G_c and B_c are the corresponding measured values under the standard illumination; each diagonal entry is the ratio of a channel's measured illumination value to its estimated illumination value at that point. Λ_(u,c)(x) is the mapping matrix from the unknown illumination to the standard illumination at x.

Region-of-interest image preprocessing: grayscale conversion after color correction. The standard conversion is Gray = R*0.299 + G*0.587 + B*0.114, where R, G and B are the red, green and blue channel components and Gray is the gray value of the converted pixel. Since the white and yellow information on lane lines is what should be preserved most, the proportion of the B channel component is weakened within the acceptable error range for lane extraction, giving the conversion formula Gray = R*0.5 + G*0.5.

Lane line model selection. The vast majority of road sections are straight, and the error introduced by taking a straight-line model as the lane model is only about 3 mm. This method therefore adopts the straight-line model for lane lines.

Lane edge extraction from the grayscale image. In a real road environment, lane lines are usually brighter than the surrounding road surface, so after grayscale conversion their gray values are higher. In a row scan of the grayscale image, the lane line segment has higher values than the pixels on either side, forming a peak that rises and then falls from left to right. These properties are exploited by computing the change between adjacent image pixels to locate the lane line edges.

Lane detection based on an improved Hough transform. The Hough transform is robust to noise when detecting straight lines and can connect broken edges, which makes it well suited to detecting discontinuous lane markings. Based on the duality between image space and Hough parameter space, each feature point in the image is mapped to multiple cells of an accumulator array in parameter space; the counts in the cells are tallied to find extrema, determining whether a line exists and yielding its parameters.

The classical Hough transform maps every point of image space to polar coordinates and accumulates votes. The finer ρ and θp are quantized, the higher the detection precision; quantization that is too coarse makes the result inaccurate. To avoid the infinite slope of vertical lines, the Hough transform generally uses the line-polar equation ρ = x cos θp + y sin θp. To reduce computational complexity and improve efficiency, corresponding constraints are added here to the classical Hough transform so that it better suits lane detection.

The detected lane lines must be constrained by an inter-frame association constraint. In practical acquisition systems and most intelligent vehicle systems, the on-board camera delivers a video stream, and adjacent frames in the stream are highly redundant. Vehicle motion is continuous in both time and space; because the on-board camera samples quickly (around 100 fps), the vehicle advances only a short distance within one frame period, the road scene changes very little, and the lane line position changes slowly between consecutive frames. The previous frame therefore provides very strong lane position information for the next frame. To improve the stability and accuracy of the lane recognition algorithm, an inter-frame association constraint is introduced.

The steps are as follows. Let the number of lane lines detected in the current frame be m_l, written as the set L_l = {L_1, L_2, …, L_m}; let the number of lane lines detected in the saved history frames be n_l, written as E_l = {E_1, E_2, …, E_n}; and let the inter-frame association constraint filter be K_l = {K_1, K_2, …, K_n}.

First a matrix C_l of size m_l × n_l is built, whose element c_ij is the distance Δd_ij between the i-th line L_i of the current frame and the j-th line E_j of the history frames; in the formula for Δd_ij, T_l denotes the matrix transpose, and A and B denote the two endpoints of the lines L_i and E_j.

Then, in matrix C_l, the number e_i of entries in row i with Δd_ij < T is counted. If e_i < 1, the current lane line has no associated lane line in the previous frames, so it is treated as a brand-new lane line and the history frame information used by the inter-frame constraint is updated for the next frame.

If e_i = 1, the current-frame lane line L_i and the history-frame lane line E_j are considered the same lane line across frames. When e_i > 1, a vector V_i records the positions in row i of the current frame that satisfy the condition; among the entries V_j of V_i in the columns j holding nonzero elements, the smallest one is selected: (Δd_ij)_min = min{V_j} (V_j ≠ 0).

The current-frame lane line L_i and the history-frame lane line E_j are then identified as the same lane line across frames. If a lane line detected in the current frame satisfies the inter-frame association constraint, it is considered the same lane line in consecutive frames and its current position is displayed; otherwise the current detection is discarded. If the accumulated number of inter-frame associations exceeds Tα (Tα = 3), the parameters of the history-frame lane line are updated. A sketch of this association step follows.
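As an illustrative sketch only (not part of the claimed method), the association step can be organized around the cost matrix C_l described above. The endpoint-based distance used here is an assumption, since the exact formula for Δd_ij is not reproduced in the text, and all names are ours.

```python
import numpy as np

# Hypothetical sketch of the inter-frame association: a cost matrix C
# (m_l x n_l) of distances between current-frame lines and history lines
# decides whether each detection is new or matched. Lines are given by
# their two endpoints A, B; the mean endpoint distance stands in for the
# patent's exact distance formula.
def associate(current, history, T=10.0):
    """current, history: lists of 2x2 arrays [[xA, yA], [xB, yB]]."""
    C = np.array([[np.linalg.norm(L - E, axis=1).mean() for E in history]
                  for L in current])
    matches, new_lines = [], []
    for i, row in enumerate(C):
        hits = np.where(row < T)[0]
        if hits.size == 0:
            new_lines.append(i)      # e_i < 1: treat as a brand-new lane line
        else:                        # e_i >= 1: keep the closest history line
            matches.append((i, hits[np.argmin(row[hits])]))
    return matches, new_lines
```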

Once lane lines are detected, they are tracked with a Kalman filter. For structured roads, the lane line positions in two consecutive frames differ little, so the correlation of lane positions between adjacent frames can be exploited: information obtained from the previous frame guides the detection of lane lines in the next frame, achieving real-time lane tracking.

Failure judgment. Under severe interference, such as vehicles or other objects on the road occluding the lane markings, turning, or lane changes, the algorithm can produce large errors or even fail. A failure-judgment mechanism is therefore added to the detection so that, once the constrained algorithm fails, correct recognition of the road markings can be restored promptly.

Description of the Drawings

Figure 1 is a flowchart of the lane detection method according to an embodiment of the invention;

Figure 2 is a flowchart of the method for correcting image color by illumination estimation according to an embodiment of the invention;

Figure 3 shows the lane line model according to an embodiment of the invention;

Figure 4 shows the region of interest according to an embodiment of the invention;

Figure 5 is the edge detection diagram according to an embodiment of the invention;

Figure 6 shows the lane line filtering effect according to an embodiment of the invention;

Figure 7 shows experimental lane detection results: road surface with stains;

Figure 8 shows experimental lane detection results: oncoming vehicle with headlights on in fog;

Figure 9 shows experimental lane detection results: interference from common road markings;

Figure 10 shows experimental lane detection results: driving at dusk.

Detailed Description

To explain in detail the objectives and effects achieved by the technical content and structural features of the technical solution, a detailed description follows with reference to specific examples and the accompanying drawings.

1. Overall Approach

To improve the real-time performance and reliability of lane recognition, a vision-based real-time lane detection algorithm for complex lighting conditions is proposed. During extraction, the original image is partitioned into regions; the image is then preprocessed by performing illumination estimation and illumination color correction on images under different lighting, restoring them to standard white light. Gaussian filtering removes the noise introduced during image acquisition, the image is binarized and its edges extracted, an improved Hough transform yields candidate lane lines, a dynamic ROI is built, and Hough transform based on the dynamic ROI constrains and updates the lane line model. A lane-detection failure-judgment module is added to the algorithm to improve the reliability of detection. See Figure 1.

2. Determining the Region of Interest

Because adjacent frames of a video are highly correlated, most image information is useless for lane detection. Finding a region of interest useful for lane detection both reduces the computational load of the algorithm and simplifies lane recognition, as shown in Figure 3.

On structured highways, useful lane information is concentrated in the middle and lower parts of the image, which form the region of interest; depending on the camera mounting, the front of the vehicle may appear in the image. Wimage denotes the image width and Himage the image height. In this way the effective detection area of the image is narrowed, as in the sketch below.
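As an illustrative sketch only, the downsampling and ROI step might look as follows in Python with OpenCV; the crop fractions and file name are assumptions, not values fixed by the patent.

```python
import cv2

def extract_roi(frame, top_frac=0.4, bottom_frac=0.9):
    """Keep the middle and lower part of the frame where lane lines live."""
    h_image, w_image = frame.shape[:2]      # Himage, Wimage
    top = int(top_frac * h_image)           # drop sky / far background
    bottom = int(bottom_frac * h_image)     # drop the hood region (about 0.1H)
    return frame[top:bottom, 0:w_image]

frame = cv2.imread("road.jpg")              # hypothetical input frame
roi = extract_roi(cv2.pyrDown(frame))       # downsample, then crop the ROI
```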

3. Preprocessing the Region-of-Interest Image: Color Correction. The steps are as follows: first the region-of-interest image ψ is obtained from an image acquisition device such as a surveillance camera, and color correction is applied to ψ, giving the corrected image ψ1; as shown in Figure 2, the specific steps are as follows:

The purpose of image illumination estimation is to correct an image taken under unknown lighting conditions to an image under standard white light. Briefly, the illumination color at imaging time is estimated first, and the Von Kries model then maps the image to standard white light, which also yields a better white balance. The procedure divides into the following steps:

(1) Sample block extraction. Sample blocks are first extracted from the image; for each block, the effective illumination falling on it is estimated.

(2) Illumination estimation with existing single-illuminant algorithms. Based on the Grey-Edge color constancy framework, several different color constancy feature extraction methods are generated systematically by varying the parameters.

(3) Clustering of the per-block illumination estimates. Blocks lit by the same illuminant are clustered together to form one large image block so as to produce a more accurate illumination estimate; blocks under the same illuminant cluster more readily into the same group. All illumination estimates are therefore clustered into M classes, where M is the number of illuminants in the scene.

(4) Backward mapping of the clustering result. After the block-based illumination estimates are clustered into M classes (M being the number of illuminants in the scene), the clustering result is mapped back to the original image block by block: pixels belonging to the same sample block belong to the same cluster, which gives the coverage of each illuminant. This yields an illumination map in which each pixel belongs to one of the M illuminants. Through the backward mapping, the illumination estimate of each pixel and the cluster center of its illumination class are obtained.

(5) For regions of overlapping illumination, a Gaussian filter is applied to the classification result of the back-mapped illumination estimates.

(6) Color correction. Using the per-pixel illumination estimates, the input image is corrected to standard illumination, giving the output image under standard light and eliminating the influence of scene illumination. The diagonal model, currently the most common choice, is used to correct the image.

In the image color correction method, step (1) assumes that each image sample block is 5×5 pixels and that the illumination falling on the block is uniformly distributed (only one color of light reaches the block).

In the method of correcting image color by illumination estimation, the selected sample blocks all have the same size and satisfy the following conditions: each block is 5×5 pixels and contains enough illumination color information to estimate accurately the nature of the light falling on it.

The Grey-Edge color constancy framework systematically generates different color constancy feature extraction methods by varying the parameters n, q and σ, where n is the order of the image derivative, q is the Minkowski norm, and σ is the scale of the Gaussian smoothing kernel; f(x) denotes the light value at point x in space; ε is a constant in the range [0,1], with 0 meaning no reflection and 1 total reflection. In the standard form of this framework, the illuminant estimate e^(n,q,σ) satisfies, up to a scale factor k,

(∫ |∂^n f_σ(x) / ∂x^n|^q dx)^(1/q) = k · e^(n,q,σ).

Within this framework, the image is segmented into many sample blocks. Each block is assumed to be 5×5 pixels with uniformly distributed illumination. On each block, a common single-illuminant color constancy algorithm estimates the illumination value; a sketch of such a per-block estimate follows.
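The sketch below is an illustration rather than the patent's implementation: it computes a Grey-Edge style estimate e^(n,q,σ) on one patch. The function name and the handling of q = ∞ are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def grey_edge_estimate(patch, n=1, q=1.0, sigma=1.0):
    """patch: float RGB array (HxWx3). Returns a unit-norm RGB illuminant."""
    e = np.zeros(3)
    for c in range(3):
        chan = patch[..., c]
        smoothed = gaussian_filter(chan, sigma) if sigma > 0 else chan
        deriv = smoothed                  # n = 0: Grey-World / White-Patch family
        for _ in range(n):                # n-th order derivative magnitude
            gy, gx = np.gradient(deriv)
            deriv = np.hypot(gx, gy)
        if np.isinf(q):                   # q = inf: max norm (White-Patch style)
            e[c] = np.abs(deriv).max()
        else:                             # Minkowski q-norm over the patch
            e[c] = np.mean(np.abs(deriv) ** q) ** (1.0 / q)
    norm = np.linalg.norm(e)
    return e / norm if norm > 0 else e
```

Running this with (n, q, σ) drawn from the set Γ below on every 5×5 block would yield the per-block features that are clustered in the next step.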

For correcting image color by illumination estimation, the following five representative methods are considered.

The five candidate color constancy instantiations form the set Γ = {e^(0,1,0), e^(0,∞,0), e^(0,∞,1), e^(1,1,1), e^(2,1,1)}. The feature of each sample block consists of the illumination estimates of the selected color constancy algorithms.

The feature vector of a sample block can be written F' = [R, G, B], where R, G and B are the color channels of the image. Using the normalized illumination estimate r = R/(R+G+B), g = G/(R+G+B), the feature vector of the block becomes F = [r, g], a 1×2 vector.

In the chromaticity space of the illumination estimates, after the per-block estimates are clustered, the distance from the illumination estimate of the j-th sample block to the i-th cluster center is the Euclidean distance, denoted d_i; d_k denotes the distance to the k-th cluster center, k ∈ [0, M]; Z is the total number of sample blocks. The probability p_(j,i) that the block lies in the i-th illumination region is then computed from these distances.

The coverage probability of the i-th illuminant is accumulated from the p_(j,i), where p_(j,i) denotes the probability that the j-th block is lit by the i-th illuminant and p is the total number of sample blocks in the input image.

To obtain a smooth, continuous illumination distribution, the coverage probability map is filtered. Two filters are used, a Gaussian and a median filter: the Gaussian filter uses spatial position information to compute a per-pixel probability for the extent of each estimated illuminant, while the median filter preserves edge information well, which suits scenes with sharp illumination changes.

The illumination estimate of each image pixel is computed as

I_e(x) = Σ_(i=1..M) m_i(x) · I_(e,i),

where I_e is the illumination estimate over the scene, I_(e,i) is the estimate of the i-th illuminant, m_i(x) is the contribution of the i-th illuminant to the pixel at x, and Z denotes the total number of sample blocks.

A large m_i means the i-th illuminant strongly affects the pixel; in particular, m_i(x) = 1 means the pixel lies entirely under the i-th illuminant. The coverage probability map of the illumination is the same size as the input image.

After the illumination estimate of each pixel is obtained, correction proceeds pixel by pixel according to the diagonal model, where f_u(x) is the pixel value at x under the unknown illumination and f_c(x) is the corrected pixel value as it would appear under standard illumination.

Λ_(u,c)(x) is the mapping matrix from the unknown illumination to the standard illumination at x: f_c(x) = Λ_(u,c)(x) f_u(x).

The diagonal correction model is

Λ_(u,c)(x) = diag(R_c/R_u(x), G_c/G_u(x), B_c/B_u(x)),

where R_u(x), G_u(x) and B_u(x) are the illumination values estimated for the R, G and B channels at point x in image space, and R_c, G_c and B_c are the corresponding measured values under the standard illumination; each diagonal entry is the ratio of a channel's measured illumination value to its estimated illumination value at that point. Λ_(u,c)(x) is the mapping matrix from the unknown illumination to the standard illumination at x. A sketch of this per-pixel correction follows.
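A minimal sketch of the per-pixel diagonal (Von Kries) correction, assuming the per-pixel illuminant map produced by the back-mapping and filtering steps is available and that the canonical light is neutral white; the names and the clipping are ours.

```python
import numpy as np

def correct_to_canonical(image, illum_map, eps=1e-6):
    """image: float RGB in [0,1], HxWx3; illum_map: per-pixel RGB illuminant."""
    canonical = np.ones(3) / np.sqrt(3.0)           # assumed standard white
    gains = canonical / np.maximum(illum_map, eps)  # per-pixel diagonal entries
    return np.clip(image * gains, 0.0, 1.0)         # f_c(x) = Lambda_{u,c}(x) f_u(x)
```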

4. Region-of-Interest Image Preprocessing: Grayscale Conversion after Color Correction

The standard conversion is Gray = R*0.299 + G*0.587 + B*0.114, where R, G and B are the red, green and blue channel components and Gray is the gray value of the converted pixel. Since the white and yellow information on lane lines is what should be preserved most, the proportion of the B channel component is weakened within the acceptable error range for lane extraction, giving the conversion formula Gray = R*0.5 + G*0.5, as in the snippet below.
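A one-line sketch of the modified conversion, assuming a float image in RGB channel order (OpenCV's default BGR order would need the channels swapped):

```python
# Gray = 0.5*R + 0.5*G: the blue channel is dropped so that white and
# yellow lane markings stay bright against the road surface.
gray = 0.5 * image[..., 0] + 0.5 * image[..., 1]
```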

5. Lane Line Model

The lane line model, shown in Figure 3, rests on the observation that the vast majority of road sections are straight, and the error introduced by taking a straight-line model as the lane model is only about 3 mm. This method therefore adopts the straight-line model for lane lines.

Here (x1, y1), (x2, y2), (x3, y3), (x4, y4) are coordinates on the lane lines, p is the lateral offset of the line from the central vertical, and d is the distance from the line's vanishing point to the bottom edge. The slope and angle of the lane line follow from these coordinates, and the intercept is bτ = y - kx.

Lane edge extraction from the grayscale image. In a real road environment, lane lines are usually brighter than the surrounding road surface, so after grayscale conversion their gray values are higher. In a row scan of the grayscale image, the lane line segment has higher values than the pixels on either side, forming a peak that rises and then falls from left to right. These properties are exploited by computing the change between adjacent image pixels to locate the lane line edges.

The specific steps are as follows:

Let a point be (x, y), with y ∈ [0, Himage) and x ∈ [2, Wimage - 2), where x and y are the pixel column and row, Wimage is the image width, and Himage the image height.

Step 1: Compute the mean near the horizontal line through (x, y), with window size t ∈ [1, 3, 5, 7, …]; t = 5 gives good results.

Step 2: Compute the edge extraction threshold T.

Step 3: Compute the rising point e_p and falling point e_v of the edge.

Step 4: The rising and falling points of a lane line appear in pairs in the image, separated by a bounded distance. Compare the width between the rising and falling points and eliminate pairs that do not satisfy it: Δw = e_p(x) - e_v(x).

If Δw > Wmax, the pair cannot belong to a lane line and is discarded. Here e_p(x) and e_v(x) are the column pixel coordinates of the rising and falling points, and Wmax is the largest number of pixels a lane line can occupy in the image. A sketch of this row scan follows.
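The following sketch scans one image row for rising/falling pairs. The window t = 5 follows the text, but the threshold rule (local mean plus a fixed offset) and all names are assumptions, since the patent's exact formulas for T, e_p and e_v are not reproduced here.

```python
import numpy as np

def scan_row(row, t=5, offset=20, w_max=40):
    """row: 1-D array of gray values. Returns (e_p, e_v) column pairs."""
    half = t // 2
    pairs, ep = [], None
    for x in range(half, len(row) - half):
        T = row[x - half:x + half + 1].mean() + offset  # assumed threshold rule
        if ep is None and row[x] > T:       # rising point e_p: peak begins
            ep = x
        elif ep is not None and row[x] < T: # falling point e_v: peak ends
            if 0 < x - ep <= w_max:         # reject pairs wider than W_max
                pairs.append((ep, x))
            ep = None
    return pairs
```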

6. Edge Extraction

Lane detection based on the improved Hough transform. The Hough transform is robust to noise when detecting straight lines and can connect broken edges, which makes it well suited to detecting discontinuous lane markings. Based on the duality between image space and Hough parameter space, each feature point in the image is mapped to multiple cells of an accumulator array in parameter space; the counts in the cells are tallied to find extrema, determining whether a line exists and yielding its parameters.

The classical Hough transform maps every point of image space to polar coordinates and accumulates votes. The finer ρ and θp are quantized, the higher the detection precision; quantization that is too coarse makes the result inaccurate. To avoid the infinite slope of vertical lines, the Hough transform generally uses the line-polar equation ρ = x cos θp + y sin θp. To reduce computational complexity and improve efficiency, corresponding constraints are added to the classical Hough transform so that it better suits lane detection, as shown in Figure 5.

Given the distance error limit d_h of the approximate region containing the line, a set of Hough transform parameters, and the mean error threshold ε_h, the improved Hough transform proceeds as follows:

Step 1. Under the given parameters, perform a probability-based Hough transform on the lane line features to obtain lines;

Step 2. For each line detected by the Hough transform, find among all feature points S those whose distance to the line is at most d_h, forming the set E_h;

Step 3. Use the least-squares method to determine the regression line parameters k_h and b_h of the set E_h, and the mean squared error e_h;

Step 4. For any feature point (x_i, y_i) in E_h, all points satisfying k_h·x_i + b_h > y_i form the subset E_pos, and all points satisfying k_h·x_i + b_h < y_i form the subset E_neg;

Step 5. In the sets E_pos and E_neg, find the points P_p and P_n with the largest error, where d_h(P) denotes the distance from point P to the regression line;

Step 6. Remove the points P_p and P_n, update the sets E_pos, E_neg and E_h, and repeat from Step 3 until the error e_h is smaller than ε_h. A sketch of this refinement loop follows.
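An illustrative sketch of Steps 2 through 6, assuming the lane feature points come as (x, y) pairs and an initial line (k0, b0) comes from the probabilistic Hough transform; the function and parameter names are ours.

```python
import numpy as np

def refine_line(points, k0, b0, d_h=5.0, eps_h=1.0):
    """points: Nx2 array of (x, y) lane feature points."""
    d = np.abs(k0 * points[:, 0] - points[:, 1] + b0) / np.hypot(k0, 1.0)
    E = points[d <= d_h]                        # Step 2: candidate set E_h
    while len(E) > 2:
        k, b = np.polyfit(E[:, 0], E[:, 1], 1)  # Step 3: least-squares fit
        resid = k * E[:, 0] + b - E[:, 1]
        if np.mean(resid ** 2) < eps_h:         # stop once e_h < eps_h
            return k, b
        keep = np.ones(len(E), dtype=bool)
        for side in (resid > 0, resid < 0):     # Steps 4-5: worst point in
            if side.any():                      # E_pos and in E_neg
                idx = np.where(side)[0]
                keep[idx[np.argmax(np.abs(resid[idx]))]] = False
        E = E[keep]                             # Step 6: remove P_p, P_n
    return k0, b0                               # fall back to the Hough line
```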

7. Lane Line Constraints: Inter-Frame Association

In practical acquisition systems and most intelligent vehicle systems, the on-board camera delivers a video stream, and adjacent frames in the stream are highly redundant. Vehicle motion is continuous in both time and space; because the on-board camera samples quickly (around 100 fps), the vehicle advances only a short distance within one frame period, the road scene changes very little, and the lane line position changes slowly between consecutive frames. The previous frame therefore provides very strong lane position information for the next frame. To improve the stability and accuracy of the lane recognition algorithm, an inter-frame association constraint is introduced.

The inter-frame smoothing model is Line = Σ_(i=-z+1..0) ω_i·l_i, where Line is the accepted detection result of the current frame, ω_i are weights with values in (0, 1), l_i is the intra-frame detection result of the i-th frame, and z is the number of associated frames. The accepted result of the current frame is obtained by weighting the intra-frame detection results of the current frame and the previous z frames; from this model an inter-frame detection algorithm follows.

An inter-frame buffer is set up: with a buffer of size z, the buffer stores the intra-frame detection results of the current frame and the previous z - 1 frames. As z grows, the detection accuracy of the current frame rises and the false and missed detection rates fall; but if z is too large, the accepted detection can no longer represent the true content of the current frame, causing detection failure, algorithm failure, program interruption, and re-execution. The size of z therefore directly affects the accuracy of lane detection in the current frame.

When z = 1, the detection is identical to intra-frame detection and inter-frame smoothing is meaningless. When z = 15, road conditions from 14 frames earlier influence the current result, and the larger buffer slows the algorithm and degrades the performance of the inter-frame smoothing and clustering. Experimental analysis shows the CPU takes 40 ms to process one image, i.e. 25 frames per second, and some value z ∈ [1, 25] makes the detection effect optimal. This parameter is set adaptively and is related to the weights ω_i of the smoothing model and to the noise threshold R_th; the weights satisfy ω_(-z+1) ≤ ω_(-z+2) ≤ … ≤ ω_(-1) ≤ ω_0.

The noise threshold R_th is judged as follows: the ratio of the total weighted sum of the t-th lane line's appearances within the z frames to the total number of frames must exceed R_th; otherwise the line is treated as a noise lane line.

In the formula for R_th, c is a correction factor with 0.2 < c < 0.3, chosen to preserve sharp edges and image detail, N_c is the number of pixels in the image, and η is the noise variance. A sketch of the smoothing buffer follows.
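A minimal sketch of the inter-frame smoothing buffer: the accepted result is the weighted sum of the last z intra-frame detections, with weights growing toward the current frame. The linear weight choice and the (k, b) line parameterization are our assumptions.

```python
from collections import deque
import numpy as np

class FrameSmoother:
    def __init__(self, z=5):
        self.buf = deque(maxlen=z)            # current frame + z-1 history frames
        w = np.arange(1, z + 1, dtype=float)  # w_{-z+1} <= ... <= w_0
        self.weights = w / w.sum()

    def update(self, line_kb):
        """line_kb: (slope, intercept) detected in the current frame."""
        self.buf.append(np.asarray(line_kb, dtype=float))
        frames = np.stack(self.buf)           # oldest first, newest last
        w = self.weights[-len(self.buf):]
        return (w[:, None] * frames).sum(axis=0) / w.sum()  # accepted Line
```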

7. Lane-line tracking based on Kalman filtering

Characterized in that: on structured roads, the lane-line positions in two consecutive frames differ little, so the correlation of lane-line positions across adjacent frames can be exploited; the information obtained from the previous frame guides the detection of the lane lines in the next frame, achieving real-time tracking of the lane lines, as shown in Figure 6.
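A minimal tracking sketch using OpenCV's cv2.KalmanFilter, assuming a constant-velocity model over the line's slope and intercept; the measurement values are illustrative, and the predicted parameters would seed the dynamic region of interest for the next frame's Hough search:

```python
import cv2
import numpy as np

def make_lane_kalman():
    """Constant-velocity Kalman filter over lane parameters (slope k, intercept b).

    State is [k, b, dk, db]; measurements are the (k, b) returned by the
    Hough stage of the current frame.
    """
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-4
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-2
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf

kf = make_lane_kalman()
prediction = kf.predict()                  # predicted state before the new frame
pred_k, pred_b = float(prediction[0]), float(prediction[1])
measured = np.array([[0.52], [118.0]], np.float32)  # (k, b) from Hough, illustrative
kf.correct(measured)                       # update with the frame's detection
```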

Failure discrimination is characterized in that: under severe interference, for example when vehicles or other objects on the road occlude the lane markings, or during turns and lane changes, the algorithm can incur large errors or fail outright. A failure-discrimination mechanism is therefore added to the detection, so that once the constrained algorithm fails, correct recognition of the road markings can be restored promptly. If the detected lane-line parameters satisfy any one of the following conditions (see the sketch after the list), the algorithm is judged to have failed; the program is interrupted and re-executed.

(1) Within the dynamic region of interest, the number of straight lines detected by the Hough transform is zero.

(2) The number of frames that fail the lane-line constraints exceeds T_β (T_β = 5).

(3) The lane-line parameters detected in the current frame jump abruptly relative to the previous frame, i.e. the line's slope changes by more than 10 degrees or its intercept by more than 15 pixels.
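A sketch of the failure test combining the three conditions, assuming slopes are compared through their arctangent in degrees; the thresholds follow the values above and all names are illustrative:

```python
import math

def detection_failed(num_hough_lines, bad_frame_count,
                     k_prev, b_prev, k_cur, b_cur,
                     t_beta=5, max_angle_deg=10.0, max_db_px=15):
    """Returns True if any of the three failure conditions holds."""
    if num_hough_lines == 0:                          # (1) empty dynamic ROI
        return True
    if bad_frame_count > t_beta:                      # (2) constraints violated too long
        return True
    angle_jump = abs(math.degrees(math.atan(k_cur))   # (3) abrupt parameter change
                     - math.degrees(math.atan(k_prev)))
    return angle_jump > max_angle_deg or abs(b_cur - b_prev) > max_db_px
```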

Figures 6 to 10 show the lane-line detection results.

Claims (10)

1. A vision-based real-time lane-line detection method under complex lighting conditions, characterized in that the method comprises the following steps: determining the region to be detected from the camera image and detecting the lane markings in that region, namely: image downsampling and setting of the region of interest; image preprocessing; building the lane-line model; Hough-transform extraction of candidate lane lines; Kalman filtering; and a failure-discrimination module;
(1) Preprocessing of the region-of-interest image, i.e. color correction:
Step 1. Sample-block extraction: first extract ψ sample blocks from the image; for each sample block, estimate the effective illumination falling on that block;
Step 2. Illumination estimation with an existing single-illuminant estimation algorithm: within the Grey-Edge color-constancy framework, vary the parameters to generate several different color-constancy feature-extraction methods;
Step 3. Clustering of the sample-block illumination estimates: blocks lit by the same illuminant are clustered together to form one large image block so as to produce a more accurate illumination estimate, blocks under the same illuminant being easier to cluster into the same group; all illumination estimates are clustered into M classes, where M is the number of illuminants in the scene;
Step 4. Backward mapping of the clustering result: after the block-based illumination estimates are clustered into M classes, the clustering result is mapped back onto the original image block by block, that is, pixels belonging to the same sample block belong to the same cluster; this yields the coverage of each illuminant and hence an illumination map in which every pixel belongs to one of the M illuminants; backward mapping gives each pixel's illumination estimate and the cluster-center value of its illumination class;
Step 5. For regions of overlapping illumination, a Gaussian filter is applied to the classification result of the back-mapped illumination estimates;
Step 6. Color correction: each pixel's illumination estimate is used to correct the input image to standard illumination, giving the output image under standard illumination;
(2) Grayscale conversion of the color-corrected image according to the formula below, where R, G and B are the red, green and blue channel components and Gray is the gray value of the converted pixel: Gray = R*0.5 + G*0.5;
(3) Improved Hough transform after lane-line edge extraction from the grayscale image, with the following steps:
Step 1. Under the given parameters, apply the probability-based Hough transform to the lane-line features to obtain straight lines;
Step 2. For each line detected by the Hough transform, find in the full feature-point set S the feature points whose distance to the line is at most d_h, forming the set E_h;
Step 3. Determine by least squares the regression-line parameters k_h and b_h of the set E_h, where k_h is the slope and b_h the intercept, together with the mean-square error e_h;
Step 4. For any feature point (x_i, y_i) in E_h, all points satisfying k_h·x_i + b_h > y_i form the subset E_pos and all points satisfying k_h·x_i + b_h < y_i form the subset E_neg;
Step 5. In the sets E_pos and E_neg, find the points P_p and P_n with the largest error;
Step 6. Remove P_p and P_n, update the sets E_pos, E_neg and E_h, and repeat Step 3 until the error e_h is smaller than ε_h;
(4) Detecting the lane lines and tracking them with Kalman filtering;
(5) Inter-frame association of the lane lines;
(6) If the detected lane-line parameters satisfy any one of the following conditions, the algorithm is judged to have failed; the program is interrupted and executed again from the start:
1) within the dynamic region of interest, the number of straight lines detected by the Hough transform is zero;
2) the number of frames violating the lane-line constraints exceeds T_β, T_β = 5;
3) the lane-line parameters detected in the current frame jump abruptly relative to the previous frame, i.e. the line's slope changes by more than 10 degrees or its intercept by more than 15 pixels.

2. The method of claim 1 for correcting image color by illumination estimation, wherein the selected sample blocks are of equal size and satisfy the following condition: each sample block is 5×5 pixels and contains illumination-color information sufficient to estimate accurately the nature of the light falling on it.

3. The method of claim 1, characterized in that five candidate color-constancy instantiations are used, Γ = {e^{0,1,0}, e^{0,∞,0}, e^{0,∞,1}, e^{1,1,1}, e^{2,1,1}}; the feature of each sample block consists of the illumination estimates of the selected color-constancy algorithms.

4. The method of claim 1, characterized in that the feature vector of a sample block is written F′ = [R, G, B], where R, G and B are the color channels of the image; using the normalized illumination estimates r = R/(R+G+B) and g = G/(R+G+B), the feature vector of the sample block becomes F = [r, g], a 1×2 vector.

5. The method of claim 1, characterized in that, in the chromaticity space formed by the illumination estimates, after the per-block estimates have been clustered, the distance from the j-th sample block's illumination estimate to the i-th cluster center is the Euclidean distance, denoted d_i, with d_k the distance to the k-th cluster center, k ∈ [0, M], and Z the total number of sample blocks; from these distances the probability p_{j,i} that the sample block lies in the i-th illumination region is computed, and the coverage-area probability of the i-th illuminant is obtained from the p_{j,i}, where p_{j,i} denotes the probability that the j-th block is lit by the i-th illuminant and M is the total number of sample blocks in the input image.

6. The method of claim 1, characterized in that the illumination estimate of each image pixel is computed as I_e(x) = Σ_i m_i(x)·I_{e,i}, where I_e is the illumination estimate over the scene, I_{e,i} is the estimate of the i-th illuminant, m_i(x) is the contribution of the i-th illuminant to the pixel at x, and Z is the total number of sample blocks.

7. The method of claim 1, characterized in that, after the illumination estimate of each pixel is obtained, correction proceeds pixel by pixel according to the diagonal model, where f_u(x) is the pixel value at x under the unknown illumination, f_c(x) is the corrected pixel value as it appears under standard illumination, and Λ_{u,c}(x) is the mapping matrix from the unknown illumination to the standard illumination at x, as in: f_c(x) = Λ_{u,c}(x)·f_u(x).

8. The method of claim 1, characterized in that the diagonal correction model, expressed at each imaging position x, is the diagonal matrix whose entries are the per-channel ratios of measured to estimated illumination: for a point x in image space, the ratio of the measured to the estimated illumination value of the R channel, of the G channel, and of the B channel; Λ_{u,c}(x) is the mapping matrix from the unknown illumination to the standard illumination at x.

9. The method of claim 1, characterized in that, for lane-line edge extraction from the grayscale image, let a point be (x, y) with y ∈ [0, h_image) and x ∈ [2, w_image − 2), where x and y are the pixel's column and row and w_image and h_image are the image width and height;
Step 1: compute the mean value in the neighborhood of the horizontal line through (x, y), with t = 5;
Step 2: compute the edge-extraction threshold T;
Step 3: compute the rising point e_p and the falling point e_v of the edge:
e_p ∈ {f(x+2, y) − f(x, y) > T}
e_v ∈ {f(x+2, y) − f(x, y) < −T}
Step 4: the rising and falling points of a lane line occur in pairs in the image and at a bounded distance from each other; compare the width between the rising and falling points and discard the points that do not qualify: Δw = e_p(x) − e_v(x);
if Δw > W_max, the candidate cannot be a lane line and is discarded; here e_p(x) and e_v(x) are the column pixel coordinates of the rising and falling points and W_max is the maximum number of pixels a lane line can occupy in the image.

10. The method of claim 1, characterized in that the designed inter-frame smoothing model is Line = ω_{−z+1}·l_{−z+1} + … + ω_{−1}·l_{−1} + ω_0·l_0, where Line is the accepted detection result of the current frame, ω_i is a weight with value range (0, 1), l_i is the intra-frame detection result of frame i, and z is the number of associated frames; the accepted result of the current frame is obtained by weighting the intra-frame results of the current frame and the preceding z − 1 frames; the inter-frame detection algorithm follows from this model; an inter-frame buffer is set up, and with buffer size z it holds the intra-frame detection results of the current frame and the preceding z − 1 frames; by the model's properties, increasing z raises the detection accuracy of the current frame and lowers the false-detection and miss rates; z ∈ [1, 25], and the weights satisfy ω_{−z+1} ≤ ω_{−z+2} ≤ … ≤ ω_{−1} ≤ ω_0; the noise threshold R_th supplies the decision criterion: the ratio of the weighted sum of the t-th lane-line feature over the z frames to the total number of frames must exceed R_th, otherwise the candidate is treated as a noise lane line; R_th is computed from a correction factor c with 0.2 < c < 0.3 (preserving sharp edges and image detail), the number of image pixels N_c, and the noise variance η.
CN201611098387.4A | 2016-12-03 | 2016-12-03 | Method for real-time lane line detection based on vision under complex lighting conditions | Pending

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201611098387.4A | 2016-12-03 | 2016-12-03 | Method for real-time lane line detection based on vision under complex lighting conditions


Publications (1)

Publication Number | Publication Date
CN106682586A | 2017-05-17

Family

ID=58867368

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201611098387.4A (Pending, published as CN106682586A) | Method for real-time lane line detection based on vision under complex lighting conditions | 2016-12-03 | 2016-12-03

Country Status (1)

Country | Link
CN (1) | CN106682586A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103839264A (en)* | 2014-02-25 | 2014-06-04 | 中国科学院自动化研究所 | Detection method of lane line
CN103940434A (en)* | 2014-04-01 | 2014-07-23 | 西安交通大学 | Real-time lane line detecting system based on monocular vision and inertial navigation unit
CN104866823A (en)* | 2015-05-11 | 2015-08-26 | 重庆邮电大学 | Vehicle detection and tracking method based on monocular vision
CN105260713A (en)* | 2015-10-09 | 2016-01-20 | 东方网力科技股份有限公司 | Method and device for detecting lane line
CN105678791A (en)* | 2016-02-24 | 2016-06-15 | 西安交通大学 | Lane line detection and tracking method based on parameter non-uniqueness property
CN105893949A (en)* | 2016-03-29 | 2016-08-24 | 西南交通大学 | Lane line detection method under complex road condition scene
CN105966314A (en)* | 2016-06-15 | 2016-09-28 | 北京联合大学 | Lane departure pre-warning method based on double low-cost cameras

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ARJAN GIJSENIJ et al.: "Color Constancy for Multiple Light Sources", IEEE Transactions on Image Processing *
杨喜宁 et al.: "基于改进Hough变换的车道线检测技术" (Lane line detection based on improved Hough transform), 《计算机测量与控制》 (Computer Measurement & Control) *
董俊鹏: "基于光照分析的颜色恒常性算法研究" (Research on color-constancy algorithms based on illumination analysis), 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *
郭斯羽 et al.: "结合Hough变换与改进最小二乘法的直线检测" (Line detection combining the Hough transform with improved least squares), 《计算机科学》 (Computer Science) *
陆子辉: "基于视觉的全天时车外安全检测算法研究" (Research on vision-based all-day exterior-vehicle safety detection algorithms), 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109002745A (en)* | 2017-06-06 | 2018-12-14 | 武汉小狮科技有限公司 | A kind of lane line real-time detection method based on deep learning and tracking technique
CN107451585B (en)* | 2017-06-21 | 2023-04-18 | 浙江大学 | Potato image recognition device and method based on laser imaging
CN107451585A (en)* | 2017-06-21 | 2017-12-08 | 浙江大学 | Potato pattern recognition device and method based on laser imaging
CN107578012A (en)* | 2017-09-05 | 2018-01-12 | 大连海事大学 | A driver assistance system based on clustering algorithm to select sensitive areas
CN107578012B (en)* | 2017-09-05 | 2020-10-27 | 大连海事大学 | A driving assistance system based on clustering algorithm to select sensitive areas
CN107909007B (en)* | 2017-10-27 | 2019-12-13 | 上海识加电子科技有限公司 | Lane line detection method and device
CN107909007A (en)* | 2017-10-27 | 2018-04-13 | 上海识加电子科技有限公司 | Method for detecting lane lines and device
CN108734105A (en)* | 2018-04-20 | 2018-11-02 | 东软集团股份有限公司 | Method for detecting lane lines, device, storage medium and electronic equipment
CN108537224A (en)* | 2018-04-23 | 2018-09-14 | 北京小米移动软件有限公司 | Image detecting method and device
CN109272536A (en)* | 2018-09-21 | 2019-01-25 | 浙江工商大学 | A lane line vanishing point tracking method based on Kalman filtering
CN109272536B (en)* | 2018-09-21 | 2021-11-09 | 浙江工商大学 | Lane line vanishing point tracking method based on Kalman filtering
CN111126109B (en)* | 2018-10-31 | 2023-09-05 | 沈阳美行科技股份有限公司 | Lane line identification method and device and electronic equipment
CN111126109A (en)* | 2018-10-31 | 2020-05-08 | 沈阳美行科技有限公司 | Lane line identification method and device and electronic equipment
CN109740550A (en)* | 2019-01-08 | 2019-05-10 | 哈尔滨理工大学 | A method of lane line detection and tracking based on monocular vision
CN109858438A (en)* | 2019-01-30 | 2019-06-07 | 泉州装备制造研究所 | A kind of method for detecting lane lines based on models fitting
CN109858438B (en)* | 2019-01-30 | 2022-09-30 | 泉州装备制造研究所 | Lane line detection method based on model fitting
CN110084190A (en)* | 2019-04-25 | 2019-08-02 | 南开大学 | Unstructured road detection method in real time under a kind of violent light environment based on ANN
CN110084190B (en)* | 2019-04-25 | 2024-02-06 | 南开大学 | Real-time unstructured road detection method under severe illumination environment based on ANN
CN110765890A (en)* | 2019-09-30 | 2020-02-07 | 河海大学常州校区 | Lane and lane marking detection method based on capsule network deep learning architecture
CN110765890B (en)* | 2019-09-30 | 2022-09-02 | 河海大学常州校区 | Lane and lane mark detection method based on capsule network deep learning architecture
CN111580500A (en)* | 2020-05-11 | 2020-08-25 | 吉林大学 | Evaluation method for safety of automatic driving automobile
CN111580500B (en)* | 2020-05-11 | 2022-04-12 | 吉林大学 | Evaluation method for safety of automatic driving automobile
CN111753749A (en)* | 2020-06-28 | 2020-10-09 | 华东师范大学 | A lane line detection method based on feature matching
CN112115784A (en)* | 2020-08-13 | 2020-12-22 | 北京嘀嘀无限科技发展有限公司 | Lane line identification method and device, readable storage medium and electronic equipment
CN112115784B (en)* | 2020-08-13 | 2021-09-28 | 北京嘀嘀无限科技发展有限公司 | Lane line identification method and device, readable storage medium and electronic equipment
CN112101163A (en)* | 2020-09-04 | 2020-12-18 | 淮阴工学院 | Lane line detection method
CN112767359A (en)* | 2021-01-21 | 2021-05-07 | 中南大学 | Steel plate corner detection method and system under complex background
CN112767359B (en)* | 2021-01-21 | 2023-10-24 | 中南大学 | Method and system for detecting corner points of steel plate under complex background
CN113200052B (en)* | 2021-05-06 | 2021-11-16 | 上海伯镭智能科技有限公司 | Intelligent road condition identification method for unmanned driving
EP4047317A3 (en)* | 2021-07-13 | 2023-05-31 | Beijing Baidu Netcom Science Technology Co., Ltd. | Map updating method and apparatus, device, server, and storage medium
US12049172B2 (en) | 2021-10-19 | 2024-07-30 | Stoneridge, Inc. | Camera mirror system display for commercial vehicles including system for identifying road markings
CN114266882A (en)* | 2021-11-02 | 2022-04-01 | 随机数(浙江)智能科技有限公司 | Lane line automatic extraction method and system based on region of interest
CN114266882B (en)* | 2021-11-02 | 2025-04-04 | 随机数(浙江)智能科技有限公司 | A method and system for automatically extracting lane lines based on regions of interest
CN115471802A (en)* | 2022-08-31 | 2022-12-13 | 南通大学 | Vehicle lane line detection method in weak light environment based on improved Canny algorithm
CN115806202A (en)* | 2023-02-02 | 2023-03-17 | 山东新普锐智能科技有限公司 | Self-adaptive learning-based weighing hydraulic unloading device and turnover control system thereof
CN115806202B (en)* | 2023-02-02 | 2023-08-25 | 山东新普锐智能科技有限公司 | Hydraulic unloading device capable of weighing based on self-adaptive learning and overturning control system thereof
CN116029947A (en)* | 2023-03-30 | 2023-04-28 | 之江实验室 | A complex optical image enhancement method, device and medium for harsh environments

Similar Documents

Publication | Publication Date | Title
CN106682586A (en)Method for real-time lane line detection based on vision under complex lighting conditions
Kong et al.General road detection from a single image
CN110178167B (en) Video Recognition Method of Intersection Violation Based on Camera Cooperative Relay
CN106778593B (en)Lane level positioning method based on multi-ground sign fusion
CN110619750B (en)Intelligent aerial photography identification method and system for illegal parking vehicle
Huang et al.Lane Detection Based on Inverse Perspective Transformation and Kalman Filter.
CN106951879B (en) Multi-feature fusion vehicle detection method based on camera and millimeter wave radar
CN104778444B (en)The appearance features analysis method of vehicle image under road scene
Jung et al.A robust linear-parabolic model for lane following
CN103366154B (en)Reconfigurable clear path detection system
WO2019196130A1 (en)Classifier training method and device for vehicle-mounted thermal imaging pedestrian detection
KR101191308B1 (en)Road and lane detection system for intelligent transportation system and method therefor
WO2019196131A1 (en)Method and apparatus for filtering regions of interest for vehicle-mounted thermal imaging pedestrian detection
CN107066986A (en)A kind of lane line based on monocular vision and preceding object object detecting method
CN107679520A (en)A kind of lane line visible detection method suitable for complex condition
CN106529493A (en)Robust multi-lane line detection method based on perspective drawing
CN107895151A (en)Method for detecting lane lines based on machine vision under a kind of high light conditions
TWI401473B (en)Night time pedestrian detection system and method
CN105981042A (en) Vehicle detection system and method
CN106407951B (en)A kind of night front vehicles detection method based on monocular vision
CN102915433A (en)Character combination-based license plate positioning and identifying method
Cai et al.Real-time arrow traffic light recognition system for intelligent vehicle
CN111753749A (en) A Lane Line Detection Method Based on Feature Matching
CN110414425A (en) A width adaptive lane line detection method and system based on vanishing point detection
CN106503748A (en)A kind of based on S SIFT features and the vehicle targets of SVM training aids

Legal Events

Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication | Application publication date: 2017-05-17
