Technical Field
The present invention relates to the technical field of expressway speed measurement, and more particularly to an expressway video speed measurement method and system based on virtual coils.
Background Art
Traditional expressway speed enforcement relies on inductive loop, laser, or radar speed measurement, each of which has its own shortcomings. Inductive loop measurement buries a pair of vehicle-sensing loops in the road surface in advance, records the times at which the target vehicle successively passes over the two loops, and computes the speed from the time difference and the loop spacing. However, installing the loops damages the expressway pavement, and the accuracy is poor for vehicles travelling along non-straight trajectories. Laser speed measurement ranges an oncoming vehicle repeatedly with laser pulses and derives the speed from the range changes, but the instrument must face the oncoming vehicle with a deviation of no more than 10 degrees from the direction of travel, which makes the installation conditions demanding; large measurement errors likewise arise for vehicles whose trajectory changes within the measurement zone. Radar speed measurement exploits the Doppler effect but, like laser measurement, also requires the deviation angle from the vehicle's direction of travel to stay within 10 degrees, which greatly limits its applicability.
With advances in high-speed imaging and in image processing and recognition, a target vehicle can be located in each frame captured by an expressway surveillance camera, and its speed can be computed from its trajectory and the capture times of the frames. Expressway speed measurement based on this principle is called video speed measurement. Compared with the traditional techniques described above, video speed measurement offers clear advantages in several respects. First, only a high-resolution, high-frame-rate camera is needed to capture the expressway and the vehicles on it; the signal acquisition process and equipment are simplified, the pavement and other expressway facilities are unaffected, and retrofitting is easy. Second, in contrast to the sensitivity of radar and laser measurement to the deviation angle, video speed measurement imposes very loose requirements on the shooting angle. Third, with improved algorithms, video speed measurement adapts far better to varied vehicle trajectories and can measure speed accurately even when a vehicle changes lanes or turns. Fourth, inductive loop, laser, and radar systems all require an auxiliary camera for license plate recognition and evidence capture, whereas video speed measurement performs speed measurement, license plate recognition, and evidence recording in one integrated process, greatly improving system integration and response speed.
The core of video speed measurement lies in the processing, recognition, and analysis of the video frames, so the algorithm determines the accuracy and real-time performance of the whole system. The foundation of any video speed measurement algorithm is identifying and extracting the specific target vehicle from the frame, that is, separating the image region representing that vehicle from the rest of the frame. Extraction methods include the optical flow method, inter-frame differencing, background differencing, headlight region localization, and license plate localization. The extracted vehicle region must then be post-processed to remove artifacts such as shadows.
On this basis, the algorithms that measure the speed of the target vehicle fall into two types: the localization-and-tracking method and the virtual coil method.
The localization-and-tracking method locates the target vehicle in several consecutive frames, computes its velocity within the image, and converts that to the vehicle's actual road speed. It offers higher accuracy and better adaptability to complex trajectories, but the algorithm is very complex and its real-time performance is poor, so in practice it becomes a bottleneck when many vehicles must be measured simultaneously under heavy traffic.
The virtual coil method designates predetermined regions of the video frame as virtual coils; each such region corresponds to a specific position on the expressway. When a vehicle passes that position, the image signal within the corresponding virtual coil region changes; this is called triggering the virtual coil. The real distance L between the road positions corresponding to two virtual coils can be obtained in advance by field measurement or calculation. The speed of the target vehicle is then v = L/Δt, where Δt is the time difference between the vehicle triggering the two coils; Δt is usually expressed as the frame difference between the frames in which the vehicle triggers each coil. The virtual coil method has a simpler algorithm, runs faster, and covers a larger effective monitoring area than localization and tracking, so it is more common in practice.
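The v = L/Δt computation above can be sketched in a few lines. This is an illustrative sketch only; the function name, the frame-rate-based expression of Δt, and the km/h conversion are assumptions, not part of the patent text.

```python
def speed_from_triggers(frame_a: int, frame_b: int, fps: float, gap_m: float) -> float:
    """Speed (km/h) of a vehicle that triggers virtual coil A at frame index
    frame_a and coil B at frame_b, for coils whose road positions are gap_m
    metres apart, in video captured at fps frames per second."""
    dt = (frame_b - frame_a) / fps      # Δt expressed via the frame difference
    if dt <= 0:
        raise ValueError("coil B must be triggered after coil A")
    return gap_m / dt * 3.6             # m/s -> km/h

# e.g. 25 fps video, coils 10 m apart, triggers 9 frames apart -> 100 km/h
```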
The key practical problem for the virtual coil method is establishing a reliable trigger detection mechanism. The hallmark of a trigger is a step change, exceeding a threshold, in the image signal of the pixels within the virtual coil region of the frame. The signal used for trigger decisions is generally the pixel grey-level difference: the grey levels of the coil region in the current frame are subtracted from those of the same region in a preset background frame, and the coil is considered triggered when the difference exceeds the threshold.
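The baseline grey-level-difference trigger described above can be sketched as follows. The mean-absolute-difference statistic and the rectangular coil layout are illustrative assumptions, not specified by the text.

```python
import numpy as np

def coil_triggered(frame: np.ndarray, background: np.ndarray,
                   coil: tuple, threshold: float) -> bool:
    """Prior-art style trigger test: mean absolute grey-level difference
    between the coil region of the current frame and the same region of a
    preset background frame, compared against a threshold.
    `coil` is (row0, row1, col0, col1) in pixel coordinates (hypothetical)."""
    r0, r1, c0, c1 = coil
    cur = frame[r0:r1, c0:c1].astype(np.float64)
    bg = background[r0:r1, c0:c1].astype(np.float64)
    return float(np.abs(cur - bg).mean()) > threshold
```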
However, this trigger detection mechanism is not very reliable and is prone to false triggers. Besides the noise inherent in video capture, detection reliability is affected by many factors in the real road environment: daytime changes in sunlight and shadow interfere with trigger detection; at night, headlight beams striking the road cause abrupt grey-level changes that trigger the coil early, before the vehicle actually passes; and under certain lighting conditions, some light-yellow or light-green vehicles change the grey levels of the coil region too little to cause a trigger at all.
The main prior-art remedy for these adverse effects is to build a Gaussian background model that adapts to environmental changes. The model assumes that, in the majority of frames containing no moving object, each pixel's grey level follows a Gaussian distribution N(μ, σ²), where μ is the mean grey level and σ² the variance. The model estimates μ and σ² statistically from a number of previously accumulated background frames and keeps updating them from the live video; by comparing each pixel of the current frame against the model, the foreground pixels that do not belong to the background are extracted, and the trigger decision is made from them.
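A minimal sketch of such a per-pixel Gaussian background model follows. The learning rate `alpha`, the k-sigma foreground test, and the initial variance are illustrative choices of this sketch, not values from the text.

```python
import numpy as np

class GaussianBackground:
    """Per-pixel single-Gaussian background model N(mu, sigma^2), updated
    with a learning rate; a pixel is foreground when it deviates from the
    mean by more than k standard deviations (parameter names hypothetical)."""

    def __init__(self, first_frame: np.ndarray, alpha: float = 0.05, k: float = 2.5):
        self.mu = first_frame.astype(np.float64)
        self.var = np.full_like(self.mu, 15.0 ** 2)  # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame: np.ndarray) -> np.ndarray:
        f = frame.astype(np.float64)
        d2 = (f - self.mu) ** 2
        fg = d2 > (self.k ** 2) * self.var           # foreground mask
        a = self.alpha
        # running update of mu and sigma^2 on background pixels only
        self.mu = np.where(fg, self.mu, (1 - a) * self.mu + a * f)
        self.var = np.where(fg, self.var, (1 - a) * self.var + a * d2)
        return fg
```

A steady scene yields an empty foreground mask; a sudden bright patch (such as a headlight beam) is immediately flagged as foreground, which is exactly the behaviour the following paragraph identifies as the source of false triggers.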
Trigger decisions based on a Gaussian background model do adapt to slow background changes, for example overcoming the interference of daytime sunlight and shadow variation. They still cannot, however, solve the false-trigger problems caused by sudden changes such as night-time headlight illumination, nor the failure to respond to vehicles of certain colors.
Summary of the Invention
In view of the above defects in the prior art, the present invention provides an expressway video speed measurement method and system based on virtual coils. The core of the invention is an improved virtual coil trigger detection mechanism: stable edge features of several key regions within the virtual coil region are identified adaptively, and the coil is then triggered by detecting the changes in those edge features caused by the specific target vehicle.
The expressway video speed measurement method based on virtual coils provided by the present invention comprises: a virtual coil trigger detection step of designating at least two predetermined regions of the video frames captured by the speed measurement camera as virtual coils and detecting whether the image formed by a specific target vehicle passes through the virtual coils; and a speed measurement step of measuring the vehicle's speed from the time differences with which the target vehicle successively triggers the virtual coils. The virtual coil trigger detection step specifically comprises:
a key-region stable edge feature extraction step of extracting, from the virtual coil region of vehicle-free frames, stable edge information whose spatial distribution and temporal stability satisfy predetermined conditions, and forming a virtual coil region template containing the stable edges;
an edge feature change detection step of extracting real-time edge information from the virtual coil region of the live video frames, matching the stable edge information against the real-time edge information, and judging whether their similarity falls below a minimum similarity threshold, the virtual coil being determined to be triggered when the similarity is below that threshold.
Preferably, the key-region stable edge feature extraction step comprises the following sub-steps:
an edge extraction step of applying Gaussian smoothing to the virtual coil region and then running a Canny-based edge detection algorithm to obtain a binarized image representation of the edges present in the virtual coil region;
a key-region identification step of computing, for each edge present in the virtual coil region, its run length along a specific axis of the image, comparing the run length against a run-length threshold, and taking edges whose run length exceeds the threshold as candidate edges; then performing a difference operation on the candidate edges of at least two frames separated by a specific time interval to obtain an edge difference image, and excluding the non-zero regions of the edge difference image from the candidate edges to obtain the stable edge features;
an edge feature generation step of obtaining and saving the stable edge features, thereby forming a virtual coil region template bearing the stable edge features.
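The temporal-stability filter of the key-region identification step above can be sketched as follows: edge pixels where two edge maps taken a set interval apart disagree are non-zero in the difference image and are dropped as unstable. The XOR formulation of the difference is an illustrative choice for binary edge maps.

```python
import numpy as np

def stable_edges(edges_t0: np.ndarray, edges_t1: np.ndarray) -> np.ndarray:
    """Keep only candidate-edge pixels that agree between two binary edge
    maps taken a specific time interval apart; disagreeing (non-zero
    difference) pixels are excluded as unstable."""
    diff = edges_t0 ^ edges_t1        # non-zero exactly where the maps disagree
    return edges_t0 & ~diff           # i.e. pixels present in both maps
```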
Further preferably, the edge extraction step comprises the following sub-steps: step 1, performing a first edge extraction on the filtered image with a Canny edge detector; step 2, applying a top-hat transform to the filtered image and performing a second edge extraction with the Canny edge detector; step 3, applying a logarithmic transform to the top-hat-transformed image and performing a third edge extraction with the Canny edge detector; step 4, superimposing the result images of the three edge extractions; step 5, skeletonizing the superimposed image to obtain the edge information.
Further preferably, in the key-region identification step, for each edge present in the virtual coil region, the numbers Nx and Ny of its uninterrupted pixels along the X-axis and Y-axis directions of the image are computed, and different weights are assigned to Nx and Ny, so that the run length of each edge along the specific axis is computed by the weighted sum H = α1·Nx + α2·Ny, where α1 and α2 are weight coefficients.
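The weighted run length H = α1·Nx + α2·Ny can be sketched directly. Taking Nx and Ny as the longest runs of consecutive edge pixels along each axis, and the equal default weights, are illustrative assumptions of this sketch.

```python
import numpy as np

def weighted_run_length(edge: np.ndarray, a1: float = 1.0, a2: float = 1.0) -> float:
    """H = a1*Nx + a2*Ny for one edge given as a binary image, where Nx and
    Ny are the longest runs of uninterrupted edge pixels along the X (row)
    and Y (column) directions; a1, a2 play the role of α1, α2."""
    def longest_run(lines):
        best = 0
        for line in lines:
            run = 0
            for v in line:
                run = run + 1 if v else 0
                best = max(best, run)
        return best
    nx = longest_run(edge)      # iterate rows: horizontal (X-axis) runs
    ny = longest_run(edge.T)    # iterate columns: vertical (Y-axis) runs
    return a1 * nx + a2 * ny
```

A long horizontal lane marking thus scores a large Nx, so a weighting that favours the axis transverse to traffic can be chosen to prefer edges that vehicles must cross.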
Further preferably, in the key-region identification step, a dilation algorithm is first applied to the candidate edges to expand each edge into an edge zone of greater line width, and the difference operation is performed afterwards.
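A minimal binary dilation, sketched here with a square structuring element, shows the expansion step: widening each candidate edge before the frame difference means a jitter of a pixel or two between frames no longer counts as instability. The square element and radius are illustrative choices.

```python
import numpy as np

def dilate(mask: np.ndarray, r: int = 1) -> np.ndarray:
    """Binary dilation with a (2r+1)x(2r+1) square structuring element:
    every pixel within r of an edge pixel becomes part of the edge zone."""
    h, w = mask.shape
    p = np.pad(mask, r)                      # zero (False) border padding
    out = np.zeros_like(mask)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= p[dy:dy + h, dx:dx + w]   # OR over all shifts of the element
    return out
```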
Preferably, in the edge feature change detection step, a Hausdorff distance is computed between the stable edge information and the real-time edge information, and it is judged whether the Hausdorff distance falls below the minimum similarity threshold.
Further preferably, the Hausdorff distance computation comprises: taking the pixels forming the stable edge information in the virtual coil region template as one point set P1 and the pixels forming the real-time edge information as another point set P2, and computing the Hausdorff distance between them as H(P1, P2) = max{h(P1, P2), h(P2, P1)}, where h(P1, P2) = max over a∈P1 of min over b∈P2 of D(a, b) is the maximum, over the pixels of point set P1, of the Euclidean distance from each pixel to point set P2, a and b being pixels belonging to P1 and P2 respectively and D(a, b) denoting the Euclidean distance between a and b; and h(P2, P1) = max over b∈P2 of min over a∈P1 of D(a, b) is, symmetrically, the maximum over the pixels of P2 of the Euclidean distance from each pixel to point set P1.
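The symmetric Hausdorff distance just defined can be computed as follows; the brute-force pairwise-distance formulation is a sketch suitable for the small point sets of a coil region (in practice `scipy.spatial.distance.directed_hausdorff` offers the directed variant).

```python
import numpy as np

def hausdorff(P1: np.ndarray, P2: np.ndarray) -> float:
    """H(P1, P2) = max{h(P1, P2), h(P2, P1)} for point sets of shape (n, 2),
    with h(A, B) = max over a in A of min over b in B of ||a - b||."""
    def h(A, B):
        # pairwise Euclidean distances, then min over B, then max over A
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        return d.min(axis=1).max()
    return max(h(P1, P2), h(P2, P1))
```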
The present invention further provides an expressway video speed measurement system based on virtual coils, comprising: a virtual coil trigger detection module for designating at least two predetermined regions of the video frames captured by the speed measurement camera as virtual coils and detecting whether the image formed by a specific target vehicle passes through the virtual coils; and a speed measurement module for measuring the vehicle's speed from the time differences with which the target vehicle successively triggers the virtual coils. The virtual coil trigger detection module specifically comprises:
a key-region stable edge feature extraction module for extracting, from the virtual coil region of vehicle-free frames, stable edge information whose spatial distribution and temporal stability satisfy predetermined conditions, and forming a virtual coil region template containing the stable edges;
an edge feature change detection module for extracting real-time edge information from the virtual coil region of the live video frames, matching the stable edge information against the real-time edge information, and judging whether their similarity falls below a minimum similarity threshold, the virtual coil being determined to be triggered when the similarity is below that threshold.
Preferably, the key-region stable edge feature extraction module comprises:
an edge extraction module for applying Gaussian smoothing to the virtual coil region and then running a Canny-based edge detection algorithm to obtain a binarized image representation of the edges present in the virtual coil region;
a key-region identification module for computing, for each edge present in the virtual coil region, its run length along a specific axis of the image, comparing the run length against a run-length threshold, and taking edges whose run length exceeds the threshold as candidate edges; then performing a difference operation on the candidate edges of at least two frames separated by a specific time interval to obtain an edge difference image, and excluding the non-zero regions of the edge difference image from the candidate edges to obtain the stable edge features;
an edge feature generation module for obtaining and saving the stable edge features, thereby forming a virtual coil region template bearing the stable edge features.
Preferably, the edge feature change detection module comprises:
a real-time edge information extraction module for segmenting the virtual coil region out of the live video frames according to preset boundary-line coordinates and then extracting the real-time edge information present in it with the edge extraction algorithm;
an edge matching module for computing the Hausdorff distance between the stable edge information and the real-time edge information and judging whether the Hausdorff distance falls below the minimum similarity threshold.
It can be seen that, in the expressway video speed measurement method and system of the present invention, the virtual coil trigger detection mechanism is improved: the stable edge features of the vehicle-free virtual coil image are extracted, and the changes in edge features within the virtual coil region caused by vehicle occlusion and the like serve as the basis for the trigger decision. Because stable edge features are highly robust to both gradual and abrupt changes in environmental factors such as sunlight, shadow, and headlight illumination, the false triggers these disturbances cause are effectively avoided, and reliable trigger detection is maintained by day and by night. Moreover, the change that a vehicle's occlusion produces in the edge features of the coil region is itself stable, so, unlike grey-level detection, triggering does not fail under particular lighting conditions or for particular vehicle colors. By constructing this reliable, accurate, and highly real-time virtual coil trigger mechanism, the present invention fully guarantees the high-quality operation of an expressway video speed measurement method and system based on the virtual coil principle.
Brief Description of the Drawings
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments:
Fig. 1 is a schematic diagram of the virtual coil region in the vehicle-free state in an embodiment of the present invention;
Fig. 2 is a flowchart of the method for extracting stable edge features of key regions in an embodiment of the present invention;
Fig. 3 is a schematic diagram of computing run lengths for edges in the virtual coil region in an embodiment of the present invention;
Fig. 4 is a schematic diagram of applying the dilation algorithm to candidate edges of the virtual coil region in an embodiment of the present invention;
Fig. 5 is a schematic diagram of the virtual coil region template storing the stable edge features in an embodiment of the present invention;
Fig. 6 is a schematic diagram of the virtual coil region while a vehicle is passing in an embodiment of the present invention;
Fig. 7 is a flowchart of the method for triggering the virtual coil by detecting edge feature changes.
Detailed Description of the Embodiments
To enable those skilled in the art to better understand the technical solution of the present invention, and to make its above objects, features, and advantages more apparent and comprehensible, the present invention is described in further detail below in conjunction with the embodiments and their accompanying drawings.
The expressway video speed measurement method and system of the present invention use a speed measurement algorithm based on virtual coils. Predetermined regions of the video frames captured by the speed measurement camera are designated as virtual coils, generally at least two, and the speed is computed from the time differences with which a specific target vehicle successively triggers the coils.
The core of the present invention is an improved virtual coil trigger detection mechanism. Instead of deciding the trigger by detecting whether the current frame shows a step change in pixel grey level relative to a background frame or background model, several key regions within the virtual coil region are identified adaptively, the stable edge features of those regions are determined, and the coil is triggered by detecting the changes in those edge features caused by the specific target vehicle.
Fig. 1 is a schematic diagram of the virtual coil region in an embodiment of the present invention. It shows a frame of the speed measurement zone captured by the camera, with no vehicle present. According to the video speed measurement algorithm of the present invention, the dashed boxes A and B and the frame regions inside them are preset as virtual coils A and B. When a vehicle passes the corresponding positions on the expressway, the image signal inside the dashed boxes A and B changes, triggering virtual coils A and B. In the trigger detection mechanism of the present invention, that image signal change manifests as a change in the edge features inside the dashed boxes.
As shown in Fig. 1, the frame region corresponding to virtual coil A contains several key regions formed by subsets of pixels whose grey-level and/or chromaticity information differs markedly from that of the pixels in the background region, so that the boundaries between the key regions and the background exhibit fairly strong edge features. In terms of the pavement itself, the background region mainly reflects the ordinary surface layer of the expressway, while the traffic marking C, the boundary D between different surface layers, road facilities E (such as manhole covers), and even irregular bumps or damage F on the pavement shown in Fig. 1 all form key regions in the frame, presenting the edge features where they meet the ordinary surface layer.
The edge features formed by the key regions are easy to detect with image analysis algorithms and are highly robust to both gradual and abrupt changes in environmental factors such as sunlight, shadow, and headlight illumination. When a vehicle passes, by contrast, the vehicle's occlusion of the pavement and the introduction of the vehicle's own edges cause the edge features of the virtual coil region to change markedly. Compared with the grey-level-difference mechanism of the prior art, edge-feature-based detection therefore substantially improves the reliability of virtual coil triggering and effectively prevents false triggers.
Fig. 2 shows a flowchart of the method for extracting stable edge features of key regions in an embodiment of the present invention. For the vehicle-free virtual coil region image of Fig. 1, the invention performs in sequence the edge extraction step, key-region identification step, and edge feature generation step of Fig. 2, thereby defining the stable edge feature parameters of the vehicle-free virtual coil. Note first that all three steps operate only on the parts of the frame inside the dashed boxes A and B of Fig. 1, i.e., the virtual coil regions, which are segmented out of the frame according to the preset coordinates of the dashed-box boundary lines.
在边缘提取步骤中,首先执行信号预处理,信号预处理的作用是消除白噪声以及增强图像对比度。信号预处理的方式是执行高斯平滑滤波,根据高斯函数的形状来选择模板进行滤波,模板公式为通过该公式选取像素点(i,j)周围(2K+1)×(2L+1)大小的相邻像素区域进行加权平均,W(m,n)表示该相邻像素区域中点(m,n)处的权重系数。通过信号预处理可消除正态分布的噪声,并且对其中的边缘具有比较好的保护作用,减少图像因滤除高频分量而产生的模糊。针对信号预处理之后的图像,执行边缘检测算法。边缘是图像当中像素灰度存在明显的不连续的区域,可以利用求一阶和二阶导数的方法来检测边缘,求导数的过程可以通过空域微分算子通过卷积完成。常用的空域微分算子包括Roberts算子、Prewitt算子、Sobel算子、拉普拉斯算子。其中,Roberts算子定位的边缘精度高,但是没有去噪功能;Sobel和Prewitt算子能够对图像进行平滑处理,但容易制造虚假的边缘;拉普拉斯算子对噪声十分敏感,但抗噪能力较弱,易造成边缘不连贯。因此,本发明在边缘检测过程中采用了Canny算子。Canny算子寻找图像梯度的局部最大值,由于不同图像受到的噪声影响不同,Canny算子遵循最优边缘检测,是一种抗噪和定位精确的折衷选择。Canny算子采用一阶偏导数的有限差分来计算梯度的幅值和方向,并且对梯度幅值进行非极大值抑制,用双阈值算法检测和连接边缘。但Canny算子对一些灰度差很小的弱边缘检测依然存在一定的局限性,在抑制噪声的同时容易丢失小目标细节。本发明中所述的边缘检测算法,基于Canny算子,将原始图像、顶帽变换和对数变换后的图像边缘检测结果叠加并进行骨架化处理,实现了对图像边缘尤其是弱边缘的提取。基于Canny算子的边缘检测具体包括:步骤1,利用Canny边缘检测器对滤波图像进行第一次边缘提取。步骤2,对滤波图像进行顶帽变换后,用Canny边缘检测器进行第二次边缘提取。顶帽变换是基于数学形态学的一种图像处理方式,其是从原图中减去开运算后的图像,其中,开运算可以用于补偿不均匀的背景亮度。步骤3,对顶帽变换后的图像进行对数变换后,用Canny边缘检测器进行第三次边缘提取。步骤4,将三次边缘提取的结果图像叠加。步骤5,对叠加后的图像进行骨架化处理,得到边缘图像。骨架化是将二值图像中的对象约简为一组细骨架,这些细骨架仍保留原始对象形状的重要信息。骨架化能从图像中抽取出模式的特征信息,大量消除冗余数据。通过边缘提取,获得在虚拟线圈区域中所存在边缘信息的二值化图像表示。In the edge extraction step, signal preprocessing is performed first, and the function of signal preprocessing is to eliminate white noise and enhance image contrast. The signal preprocessing method is to perform Gaussian smoothing filtering, and select the template for filtering according to the shape of the Gaussian function. The template formula is Through this formula, the adjacent pixel area of size (2K+1)×(2L+1) around the pixel point (i,j) is selected for weighted average, and W(m,n) represents the midpoint of the adjacent pixel area (m, The weight coefficient at n). The noise of normal distribution can be eliminated through signal preprocessing, and it has a relatively good protection effect on the edges, and reduces the blurring of the image caused by filtering out high-frequency components. For the image after signal preprocessing, an edge detection algorithm is performed. 
An edge is a region of the image where the pixel gray levels are markedly discontinuous. Edges can be detected from first- and second-order derivatives, which are computed by convolving the image with spatial differential operators. Commonly used operators include the Roberts, Prewitt, Sobel, and Laplacian operators. The Roberts operator locates edges precisely but performs no denoising; the Sobel and Prewitt operators smooth the image but tend to create false edges; the Laplacian operator is highly sensitive to noise and therefore prone to producing disconnected edges. The present invention therefore adopts the Canny operator for edge detection. The Canny operator searches for local maxima of the image gradient and, since different images are affected by noise to different degrees, follows an optimal edge detection criterion that trades off noise immunity against localization accuracy. It computes the magnitude and direction of the gradient with finite differences of the first-order partial derivatives, applies non-maximum suppression to the gradient magnitude, and detects and links edges with a double-threshold algorithm. Nevertheless, the Canny operator remains limited for weak edges with small gray-level differences, and tends to lose the details of small targets while suppressing noise. The edge detection algorithm of the present invention therefore builds on the Canny operator: it superimposes the edge detection results of the original image, the top-hat transformed image, and the log-transformed image, and then skeletonizes the result, thereby extracting image edges, and in particular weak edges.
The Canny-based edge detection specifically includes: Step 1, applying the Canny edge detector to the filtered image for a first edge extraction. Step 2, applying the top-hat transform to the filtered image and then performing a second edge extraction with the Canny detector; the top-hat transform, an image processing operation based on mathematical morphology, subtracts the opening of the image from the original image, where the opening compensates for uneven background brightness. Step 3, applying a logarithmic transform to the top-hat image and performing a third edge extraction. Step 4, superimposing the three resulting edge images. Step 5, skeletonizing the superimposed image to obtain the final edge image. Skeletonization reduces the objects of a binary image to a set of thin skeletons that retain the essential shape information of the original objects; it extracts the characteristic information of a pattern and eliminates a large amount of redundant data. Edge extraction thus yields a binarized image representation of the edge information present in the virtual-coil region.
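The five-step fusion pipeline can be sketched as below. This is a simplified illustration, not the patented implementation: a plain gradient-magnitude threshold stands in for the Canny detector, the top-hat transform uses an assumed 3×3 square structuring element, and the final skeletonization step is omitted:

```python
import numpy as np

def _neigh_stack(img, size=1):
    """Stack the (2*size+1)^2 shifted copies of img for min/max filtering."""
    p = np.pad(img, size, mode="edge")
    H, W = img.shape
    return np.stack([p[dy:dy + H, dx:dx + W]
                     for dy in range(2 * size + 1) for dx in range(2 * size + 1)])

def erode(img):  return _neigh_stack(img).min(axis=0)
def dilate(img): return _neigh_stack(img).max(axis=0)

def top_hat(img):
    """Top-hat = original minus its morphological opening (erosion then dilation)."""
    return img - dilate(erode(img))

def simple_edges(img, thresh=0.2):
    """Gradient-magnitude stand-in for the Canny detector (illustrative only)."""
    gy, gx = np.gradient(img.astype(float))
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

def fused_edges(filtered):
    e1 = simple_edges(filtered)              # pass 1: filtered image
    th = top_hat(filtered)
    e2 = simple_edges(th)                    # pass 2: top-hat image
    e3 = simple_edges(np.log1p(th))          # pass 3: log transform of top-hat image
    return ((e1 | e2 | e3) > 0).astype(np.uint8)  # superimpose; skeletonization omitted
```

The superimposition keeps any pixel marked as an edge in at least one of the three passes, which is how the weak edges recovered by the top-hat and log transforms survive into the final result.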
Based on the edge information obtained in the edge extraction step, the key-region identification step selects the edge information associated with the key regions of the virtual coil, that is, the regions whose shape and texture are characteristic of the virtual-coil area when no vehicle is passing. In a video picture of the expressway surface, the extracted edges mark the boundaries between the non-background and background areas of the road. Not every non-background area, however, qualifies as such a key region. First, the edges a key region forms must be sufficiently stable under varying environmental conditions; ideally they change noticeably only when a vehicle passes over the road area corresponding to the virtual coil, and remain unchanged under other environmental variation. Second, a key region and its edges must occupy a sufficiently large area, so that a passing vehicle, by occluding all or part of the region, causes an adequate change in the edges; conversely, if the edges span too small an area, a passing vehicle may fail to affect them at all.
Therefore, the key-region identification step must screen the edge information obtained in the edge extraction step against these two criteria.
In the key-region identification step, a spatial-morphology analysis is first performed on the edge information to extract the edges that span a sufficiently large area. This analysis computes the run length of each edge along a specific axis of the image and checks whether that run length exceeds a predetermined run threshold; an edge that exceeds the threshold is selected as a candidate edge. As shown in FIG. 3, let L be a closed edge extracted from the virtual-coil region by the edge extraction step. The key-region identification step first counts the numbers of uninterrupted pixels of edge L along the X and Y axes of the image, Nx and Ny, then assigns Nx and Ny different weights according to the angle of the specific axis O relative to the X and Y axes, and computes the run length of edge L projected onto axis O as the weighted sum H = α1·Nx + α2·Ny, where α1 and α2 are the weight coefficients. The specific axis O should normally be set approximately perpendicular to the driving direction of the expressway, and the weight coefficients set according to the angle of axis O relative to the X and Y axes.
After the run length of edge L along the specific axis O is obtained, H is compared against the run threshold Hthreshold; if H exceeds the threshold, edge L is taken as a candidate edge.
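A minimal sketch of this candidate screening follows, under the simplifying assumption that the "uninterrupted pixel counts" Nx and Ny are taken as the edge's bounding extents along X and Y, and that the weights α1 and α2 are supplied by the caller based on the angle of axis O:

```python
def weighted_run_length(edge_pixels, alpha1, alpha2):
    """H = alpha1*Nx + alpha2*Ny, with Nx and Ny taken here as the pixel
    extents of the edge along X and Y (a simplification of the patent's
    'uninterrupted pixel counts').  edge_pixels is a list of (y, x) tuples."""
    ys = [y for y, x in edge_pixels]
    xs = [x for y, x in edge_pixels]
    Nx = max(xs) - min(xs) + 1
    Ny = max(ys) - min(ys) + 1
    return alpha1 * Nx + alpha2 * Ny

def is_candidate_edge(edge_pixels, alpha1, alpha2, run_threshold):
    """Keep the edge only if its projected run length exceeds the threshold."""
    return weighted_run_length(edge_pixels, alpha1, alpha2) > run_threshold
```

With axis O set perpendicular to the driving direction, an edge running across the lane scores a long projected run length and survives, while a short fragment is discarded.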
For each candidate edge, the key-region identification step then examines its variation in the time domain to judge whether it is sufficiently stable. This stability check rules out edges that are only briefly present in the virtual-coil region, such as the edges of debris left on the road. First, candidate edges are obtained by the method described above from video frames extracted at a specific time interval, and an edge difference image is computed from them. Because of registration errors and errors in the edge extraction computation, the candidate edges generated by targets that remain unchanged across the two frames may still differ slightly. Therefore, a dilation operation is first applied to the candidate edges of the two frames; as shown in FIG. 4, dilation expands edge L from its original line width W to an edge band of width W′. The difference between the dilated candidate edges of the two frames is then computed.
The difference operation may be repeated for frames extracted at several different times, after which the non-zero regions common to the resulting edge difference images are determined. These common non-zero regions reflect the edges of the virtual-coil area that vary over time. By excluding these regions from the candidate edges, one obtains the stable edge features of the virtual-coil region of the video picture, that is, the set of pixels that constitute the stable edges. Stable edge features are highly robust to both gradual and abrupt changes in external conditions such as sunlight, shadow, and headlight illumination, and can therefore be used to detect triggering of the virtual coil. The edge-feature generation step obtains and stores these stable edge features, forming a virtual-coil region template with stable edge features that serves as the basis for subsequent trigger decisions. FIG. 5 shows the virtual-coil region template formed from virtual coils A and B after the above processing, that is, the binarized image representation of the pixel set, containing the stable edge features of the key regions. As can be seen in FIG. 5, the edges formed by key regions C, D, and E have been extracted as stable edge features and stored in the template, whereas the edges formed by region G in FIG. 1 have been filtered out because they fail the spatial-distribution requirement or lack stability.
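The dilation-tolerant differencing and the intersection of the difference images can be sketched as follows (pure NumPy; the dilation radius r is an assumed parameter):

```python
import numpy as np

def dilate_bin(mask, r=1):
    """Binary dilation with a (2r+1) x (2r+1) square structuring element,
    expanding each edge line into a wider edge band."""
    p = np.pad(mask.astype(np.uint8), r)
    H, W = mask.shape
    out = np.zeros((H, W), np.uint8)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= p[dy:dy + H, dx:dx + W]
    return out

def stable_edge_mask(frames, r=1):
    """Candidate edges of the first frame, minus the regions marked as changing.
    Each difference is taken between dilated edge masks so that small
    registration errors are tolerated; the changing region is the non-zero
    area common to all the difference images."""
    base = frames[0].astype(np.uint8)
    if len(frames) < 2:
        return base
    changing = np.ones_like(base)
    for f in frames[1:]:
        diff = dilate_bin(base, r) ^ dilate_bin(f.astype(np.uint8), r)
        changing &= diff          # keep only regions that differ in every pair
    return base & (1 - changing)  # exclude changing regions from the candidates
```

Edges that persist across all sampled frames (key-region edges) survive into the template, while a transient edge such as debris is marked as changing in every difference image and removed.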
On this basis, the present invention triggers the virtual coil by detecting the edge-feature changes caused by a specific target vehicle within the virtual-coil region of the video picture. When the image of a vehicle enters the virtual-coil region, it inevitably causes a marked change in the edge information within the coil. A passing vehicle has two effects: the vehicle itself, its headlight beams, and its shadow introduce edges that were not previously present in the region; and the vehicle wholly or partially occludes the key regions, altering their stable edge information. In FIG. 6, for example, the image of car M passing over virtual coil A fully occludes key region E and partially occludes key region C. Extracting and detecting the edges produced by the vehicle itself, its headlight beams, and its shadow can trigger the virtual coil to some extent, but such detection is not reliable; for instance, the headlight beams of car M may already have altered the edge information within virtual coil A before the car itself has entered the road area corresponding to coil A.
In that case, however, since the key regions are not occluded, the stable edge information of virtual coil A does not change substantially; a substantial change occurs only when the vehicle actually passes over the area of the virtual coil and occludes the key regions. Matching the real-time video picture against the virtual-coil region template can therefore serve as the basis for deciding whether the virtual coil is triggered.
FIG. 7 shows a flowchart of the method for triggering the virtual coil by detecting edge-feature changes. The method includes the following steps. Step 1: for the video picture captured in real time, segment the virtual-coil region according to the preset boundary-line coordinates. Step 2: for the virtual-coil region of the real-time picture, extract the real-time edge information present in it with the edge extraction algorithm, obtaining a binarized image representation of the real-time edge information; the algorithm comprises the signal preprocessing and the Canny-based edge extraction described above. Step 3: perform a matching operation between the stable edge information of the virtual-coil region template and the real-time edge information obtained in Step 2, and judge whether their similarity falls below a minimum similarity threshold; if it does, the edges of the virtual-coil region are deemed to have changed significantly and the virtual coil is triggered. To improve the accuracy of the matching in Step 3, the stable edge information and the real-time edge information are matched by computing the Hausdorff distance.
The Hausdorff distance measures how well two point sets match. In the present invention, the pixels constituting the stable edges of the virtual-coil region template form one point set P1, and the pixels constituting the real-time edges form another point set P2. The Hausdorff distance between them is H(P1, P2) = max{h(P1, P2), h(P2, P1)}, where the directed distance h(P1, P2) = max_{a∈P1} min_{b∈P2} D(a, b) is the maximum, over the pixels of P1, of the Euclidean distance to the nearest pixel of P2; here a and b are pixels belonging to P1 and P2 respectively, and D(a, b) is the Euclidean distance between them. Analogously, h(P2, P1) is the maximum Euclidean distance from the pixels of P2 to point set P1. The Hausdorff distance H(P1, P2), the larger of h(P1, P2) and h(P2, P1), thus reflects the degree of match between the stable-edge pixel set of the template and the real-time edge pixel set. When H(P1, P2) exceeds the distance threshold, that is, when the similarity of the two sets falls below the minimum similarity threshold, the edges of the virtual-coil region are deemed to have changed significantly and the virtual coil is triggered.
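The directed and symmetric Hausdorff distances defined above translate directly to code; the trigger decision shown here follows the reading that a large distance means low similarity:

```python
import numpy as np

def directed_h(P, Q):
    """h(P, Q) = max over a in P of min over b in Q of the Euclidean distance D(a, b)."""
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # all pairwise D(a, b)
    return d.min(axis=1).max()

def hausdorff(P1, P2):
    """H(P1, P2) = max{ h(P1, P2), h(P2, P1) }."""
    return max(directed_h(P1, P2), directed_h(P2, P1))

def coil_triggered(template_pts, live_pts, dist_threshold):
    """Trigger when the match is poor, i.e. the distance exceeds the threshold."""
    return hausdorff(template_pts, live_pts) > dist_threshold
```

For edge sets of realistic size the full pairwise distance matrix can be large; a production implementation would typically use a distance transform of one edge image instead, but the brute-force form above matches the definition in the text exactly.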
Once a virtual coil is determined to have been triggered, the expressway video speed measurement method and system of the present invention compute the speed of the specific target vehicle from the time difference between the successive triggering of the two virtual coils. Since the speed computation itself is essentially the same as in the prior art, it is not repeated here.
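Although the patent defers the speed computation to the prior art, the time-difference calculation it refers to amounts to the following sketch (the coil gap in metres is an assumed calibration input, not a value from the patent):

```python
def speed_kmh(trigger_t1, trigger_t2, coil_gap_m):
    """v = gap / delta_t, converted from m/s to km/h.
    trigger_t1, trigger_t2: trigger timestamps of the two coils, in seconds.
    coil_gap_m: calibrated road-surface distance between the two virtual coils."""
    dt = trigger_t2 - trigger_t1
    if dt <= 0:
        raise ValueError("second coil must trigger after the first")
    return coil_gap_m / dt * 3.6
```

For example, a vehicle covering a 50 m coil gap in 2 s is travelling at 90 km/h.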
The present invention further provides an expressway video speed measurement system based on virtual coils, comprising: a virtual-coil trigger detection module, configured to set at least two predetermined regions of the video picture captured by the speed measurement camera as virtual coils and to detect whether the image formed by a specific target vehicle passes over the virtual coils; and a speed measurement module, configured to measure the vehicle speed from the time differences with which the specific target vehicle successively triggers the virtual coils. The virtual-coil trigger detection module specifically comprises a key-region stable-edge feature extraction module, configured to extract, from the virtual-coil region of the video picture in the no-vehicle state, the stable edge information whose spatial distribution and temporal stability satisfy the above criteria, forming a virtual-coil region template containing the stable edges;
and an edge-feature change detection module, configured to extract real-time edge information from the virtual-coil region of the video picture captured in real time, to match the stable edge information against the real-time edge information, and to judge whether their similarity falls below the minimum similarity threshold, the virtual coil being determined to be triggered when it does. The key-region stable-edge feature extraction module comprises: an edge extraction module, which applies Gaussian smoothing to the virtual-coil region and then runs the Canny-based edge detection algorithm to obtain a binarized image representation of the edge information present in the region; a key-region identification module, which computes the run length of each edge in the virtual-coil region along the specific axis of the image, compares the run length against the run threshold, takes the edges whose run length exceeds the threshold as candidate edges, performs difference operations on the candidate edges of at least two video frames separated by the specific time interval to obtain edge difference images, and excludes the non-zero regions of the edge difference images from the candidate edges to obtain the stable edge features; and an edge-feature generation module, which obtains and stores the stable edge features, forming the virtual-coil region template with stable edge features.
The edge-feature change detection module comprises: a real-time edge information extraction module, which segments the virtual-coil region from the video picture captured in real time according to the preset boundary-line coordinates and then extracts the real-time edge information present in it with the edge extraction algorithm; and an edge matching module, which computes the Hausdorff distance between the stable edge information and the real-time edge information and judges whether the corresponding similarity falls below the minimum similarity threshold.
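A hypothetical wiring of these modules might look as follows; the class and function names are illustrative only and do not appear in the patent:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Point = Tuple[int, int]

@dataclass
class VirtualCoil:
    """One virtual coil: its ROI in the frame and the stable-edge template."""
    roi: Tuple[int, int, int, int]   # preset boundary-line coordinates (x0, y0, x1, y1)
    template_pts: List[Point]        # stable-edge pixel set (point set P1)
    dist_threshold: float            # distance above which the match is deemed poor

def detect_trigger(coil: VirtualCoil,
                   live_pts: List[Point],
                   hausdorff: Callable[[List[Point], List[Point]], float]) -> bool:
    """Edge matching module: a large Hausdorff distance between the template's
    stable edges and the live edges means low similarity, hence a trigger."""
    return hausdorff(coil.template_pts, live_pts) > coil.dist_threshold
```

The speed measurement module would then pair the trigger timestamps of two such coils and apply the time-difference computation described earlier.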
In summary, the expressway video speed measurement method and system of the present invention improve the virtual-coil trigger detection mechanism: the stable edge features of the no-vehicle picture of the virtual coil are extracted, and the changes of the edge features within the virtual-coil region caused by vehicle occlusion and similar effects are used as the basis for trigger decisions. Because stable edge features are highly robust to both gradual and abrupt changes in external conditions such as sunlight, shadow, and headlight illumination, false triggers caused by such disturbances are effectively avoided, and reliable trigger detection is maintained both by day and by night. Moreover, the change that vehicle occlusion imposes on the edge features of the virtual-coil region is itself stable; unlike gray-level detection, the method does not fail to trigger under particular lighting conditions or for particular vehicle colors. By constructing this reliable, accurate, and highly real-time virtual-coil trigger mechanism, the present invention fully guarantees the high-quality operation of the expressway video speed measurement method and system based on the virtual-coil principle.
The above are merely specific embodiments of the present invention, which may also be applied in other devices; the dimensions and quantities given in the description are for reference only, and those skilled in the art may select appropriate dimensions according to actual needs without departing from the scope of the present invention. The scope of protection of the present invention is not limited thereto; any variation or substitution readily conceivable to those skilled in the art within the technical scope disclosed herein shall fall within the scope of protection of the present invention. The scope of protection of the present invention shall therefore be defined by the claims.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410576699.6A | 2014-10-24 | 2014-10-24 | Method and system for expressway video speed measurement based on virtual coils |
| Publication Number | Publication Date |
|---|---|
| CN104267209A | 2015-01-07 |
| CN104267209B | 2017-01-11 |