Technical Field
The invention belongs to the technical field of visual navigation, and in particular relates to a machine-vision-based method for recognizing the behavior of vehicles driving in urban environments.
Background Art
At present, machine vision is developing rapidly within artificial intelligence and has broad application prospects in both theoretical science and engineering. Research on machine vision systems covers several key problems, such as target detection, image feature extraction, and behavior recognition, and has been widely applied in fields including dynamic medical imaging, image retrieval, multimedia information processing and communication, fingerprint and face recognition, image processing and preprocessing, recognition of natural biological species, and traffic safety.
Moving-target detection is both a core technology of machine vision systems and an indispensable part of fields such as image processing, multimedia information processing, and intelligent video surveillance. Complex scenes contain many kinds of information, but only the part that people are interested in is useful. The basic task of moving-target detection is to separate this useful information from the complex background, that is, to extract only the targets of interest. From the detection results, basic features of the moving target such as contour edges and internal information can be observed at a glance, which facilitates subsequent feature extraction and behavior recognition; the research is therefore of great significance.
Behavior recognition of moving targets comprises timely detection of the target, feature extraction, behavior description, analysis, and recognition. Surveillance equipment installed in public places such as factories, enterprises, shopping malls, stations, airports, and residential areas mostly takes moving human bodies as its object of study. To monitor and analyze the behavior of moving targets, the target is first detected, and the extracted motion features are then used to analyze behaviors such as walking, running, fighting, gathering, and theft.
At present, intelligent surveillance technology still falls short in the robustness of its target detection, feature extraction, and behavior recognition methods; its scope of application is limited, and high behavior recognition rates have not been achieved. Feature extraction and behavior recognition for moving targets therefore remain hot topics in intelligent security systems. The road traffic environment, especially at intersections, is very complex, yet most motor vehicle accidents occur there. If roads such as intersections can be monitored effectively and vehicle violations detected automatically, the occurrence of traffic accidents can be reduced.
Summary of the Invention
The invention provides a machine-vision-based method for recognizing the behavior of vehicles driving in urban environments. The result of vehicle behavior analysis is obtained by performing target detection, target tracking, feature extraction, and behavior recognition on moving vehicles. First, vehicle targets are detected with a background-difference method; the detected vehicle targets are then tracked with an optical flow method; after tracking, the vehicle trajectory is obtained from the basic behavioral characteristics of the vehicle's motion; finally, a trained SVM classifier recognizes the driving trajectory to determine whether the behavior is a left turn, a right turn, or going straight.
A machine-vision-based method for recognizing the behavior of vehicles driving in urban environments, characterized by the following steps:
Step 1: Vehicle target detection and tracking. Moving vehicle targets are detected using the background-difference method with mixture-of-Gaussians background modeling, and the detected moving vehicle targets are then tracked with an optical flow tracking algorithm. Specifically:
Step 1.1: Compute the mean μ0(x,y) and variance σ0²(x,y) of the pixel brightness of the video sequence images over a period of time according to μ0(x,y) = (1/N)·Σ_{i=1..N} fi(x,y) and σ0²(x,y) = (1/N)·Σ_{i=1..N} [fi(x,y) − μ0(x,y)]², and form from these pixel means and variances the Gaussian-distributed image B0; B0 is the initial background estimation image;
where N is the total number of frames of the sequence images in the time period selected for background initialization, 150 ≤ N ≤ 200; fi(x,y) is the pixel brightness value of the i-th frame at row x, column y; and (x,y) denotes the pixel position at row x, column y of the image;
Step 1.2: Update the mean μj(x,y) and variance σj²(x,y) of the background estimation image according to μj(x,y) = (1−α)·μj−1(x,y) + α·fj(x,y) and σj²(x,y) = (1−α)·σj−1²(x,y) + α·[fj(x,y) − μj(x,y)]², obtaining the updated background estimation image Bj for the j-th frame;
where α is a constant in [0,1] (the background update rate); K is the number of Gaussian components of the mixture model, 3 ≤ K ≤ 5; j ≥ 1; and fj(x,y) is the pixel brightness value of the j-th frame at row x, column y;
Step 1.3: Compute the difference image between the current frame and the current background estimation image according to dj(x,y) = |fj(x,y) − Bj(x,y)|, and binarize the difference image according to Mj(x,y) = 1 if dj(x,y) > r and Mj(x,y) = 0 otherwise, obtaining the detected moving-vehicle region, i.e. the region of the binarized image M whose pixel value is 1; the region of M with pixel value 0 is the background region; in the j-th frame, Mj(x,y) = 1 marks the motion region;
where r is the grey-level threshold, 50 ≤ r ≤ 60;
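Steps 1.1 through 1.3 can be sketched in NumPy as follows. The frame sequence is synthetic (a hypothetical static background plus noise, with a bright patch standing in for a vehicle); N = 160, α = 0.05, and r = 55 are illustrative choices within the claimed ranges:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 160-frame initialization sequence: static background plus noise.
N = 160                                 # within the claimed range 150 <= N <= 200
frames = 100.0 + rng.normal(0.0, 2.0, size=(N, 48, 64))

# Step 1.1: initial Gaussian background model from the first N frames.
mu = frames.mean(axis=0)                # mu_0(x, y)
var = frames.var(axis=0)                # sigma_0^2(x, y)

# Step 1.2: recursive update with rate alpha when a new frame f_j arrives;
# this frame contains a bright "vehicle" patch.
alpha = 0.05
f_j = 100.0 + rng.normal(0.0, 2.0, size=(48, 64))
f_j[20:30, 30:45] = 220.0               # moving-vehicle region
mu = (1.0 - alpha) * mu + alpha * f_j
var = (1.0 - alpha) * var + alpha * (f_j - mu) ** 2

# Step 1.3: background difference and binarization with grey threshold r.
r = 55                                  # within the claimed range 50 <= r <= 60
d = np.abs(f_j - mu)                    # d_j(x, y) = |f_j(x, y) - B_j(x, y)|
M = (d > r).astype(np.uint8)            # M == 1 marks the moving-vehicle region
```

The bright patch survives the threshold while the noisy background does not, so `M` is 1 exactly on the vehicle region.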
Step 1.4: Extract corner features from the moving-vehicle region detected in each frame, then track these corners across all video frames with the pyramid Lucas-Kanade sparse optical flow algorithm, obtaining the motion trajectory of the moving vehicle;
Step 2: Vehicle trajectory feature extraction. A trajectory feature extraction method combining a matrix grid with bidirectional histograms is used to construct the vehicle trajectory feature vector, providing the feature basis for vehicle behavior classification. Specifically:
Step 2.1: Taking the trajectory coordinates of the moving vehicle obtained in Step 1 as reference, construct the vehicle trajectory coordinate system Oxy with the trajectory abscissa as the x-axis and the trajectory ordinate as the y-axis. Find the maximum xmax and minimum xmin of all points along the x-axis and the maximum ymax and minimum ymin along the y-axis, create in Oxy a matrix grid spanning the width xmax − xmin and the height ymax − ymin, and assign each sub-grid cell the initial value 0;
Step 2.2: Assign values to the sub-grid cells through which the trajectory passes: proceeding first along the positive x-axis and then along the positive y-axis, assign increasing weight values 1, 2, 3, … in order, obtaining the trajectory coordinate matrix;
Step 2.3: Construct histograms over the rows and columns of the trajectory coordinate matrix and use them to estimate the behavior trend of the vehicle target, i.e., judge from the overall trend of the row and column histograms: if the histogram decreases along the positive x-axis, the vehicle tends to turn right; if it increases along the positive x-axis, the vehicle tends to turn left; if it first increases and then decreases along the positive x-axis, the trajectory tends to go straight. The behavior trend of the moving vehicle is thus obtained as left turn, right turn, or going straight;
where the abscissa of a histogram denotes the row or column index of the trajectory coordinate matrix, and its ordinate counts the number of non-zero matrix elements in that row or column;
Step 2.4: The behavior trend obtained in Step 2.3 and the trajectory coordinate matrix obtained in Step 2.2 together constitute the trajectory feature vector of the moving vehicle;
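Steps 2.1 through 2.3 can be sketched as follows; the 8×8 grid size is an assumption for illustration, since the text does not fix the cell counts:

```python
import numpy as np

def trajectory_features(points, rows=8, cols=8):
    """Grid construction, raster-order weighting, and bidirectional histograms
    (sketch of steps 2.1-2.3; the 8x8 grid size is assumed)."""
    pts = np.asarray(points, dtype=float)
    xmin, ymin = pts.min(axis=0)
    xmax, ymax = pts.max(axis=0)
    # Step 2.1: mark which cells of the bounding-box grid the trajectory visits.
    occupied = np.zeros((rows, cols), dtype=bool)
    for x, y in pts:
        c = min(int((x - xmin) / (xmax - xmin + 1e-9) * cols), cols - 1)
        r = min(int((y - ymin) / (ymax - ymin + 1e-9) * rows), rows - 1)
        occupied[r, c] = True
    # Step 2.2: scan first along +x, then along +y, giving occupied cells the
    # increasing weights 1, 2, 3, ...
    grid = np.zeros((rows, cols))
    weight = 0
    for r in range(rows):
        for c in range(cols):
            if occupied[r, c]:
                weight += 1
                grid[r, c] = weight
    # Step 2.3: bidirectional histograms count non-zero cells per row / column.
    row_hist = (grid != 0).sum(axis=1)
    col_hist = (grid != 0).sum(axis=0)
    return grid, row_hist, col_hist

# A horizontal (straight-line) trajectory occupies exactly one cell per column:
grid, row_hist, col_hist = trajectory_features([(i, 5.0) for i in range(20)])
```

For this trajectory all occupied cells lie in one grid row, so the raster scan assigns the weights 1 through 8 across that row, and the column histogram is flat, matching the "straight" trend pattern described above.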
Step 3: Vehicle behavior training. A two-layer SVM classifier structure is trained on vehicle trajectory samples to obtain recognition results for the sample data. Specifically:
Step 3.1: Create trajectory samples: import the simulated background of a pre-captured video with an image generation tool, and create 100 trajectory samples for each of the three types: going straight, turning left, and turning right;
Step 3.2: Extract trajectory features from the trajectory samples using the feature extraction method of Step 2, obtaining the trajectory feature vectors of each type of trajectory sample;
Step 3.3: Train the two-layer SVM classifier on the trajectory feature vectors of the samples obtained in Step 3.2, obtaining the recognition results for the sample data;
Step 4: Vehicle behavior recognition. With the SVM classifier trained in Step 3, perform vehicle behavior recognition on the trajectory feature vector of the moving vehicle obtained in Step 2, finally identifying the vehicle behavior as a left turn, a right turn, or going straight.
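The two-layer structure can be sketched as a cascade: a first binary decider separates going straight from turning, and a second separates left turns from right turns. This particular split, and the toy deciders below, are illustrative assumptions; in the method itself both layers are binary SVMs trained on the trajectory feature vectors of Step 3.2.

```python
def two_layer_classify(feature, straight_vs_turn, left_vs_right):
    """Route a feature through the two-layer classifier structure.
    Each decider returns a signed score, as a binary SVM decision function would."""
    if straight_vs_turn(feature) >= 0:   # layer 1: straight vs. any turn
        return "straight"
    if left_vs_right(feature) >= 0:      # layer 2: left turn vs. right turn
        return "left"
    return "right"

# Toy deciders on a hypothetical scalar "net heading change" feature:
# near zero -> straight, positive -> left, negative -> right.
straight_vs_turn = lambda f: 1 if abs(f) < 0.2 else -1
left_vs_right = lambda f: 1 if f > 0 else -1

labels = [two_layer_classify(f, straight_vs_turn, left_vs_right)
          for f in (0.05, 0.8, -0.8)]
```

Cascading two binary deciders is what lets a two-class SVM formulation resolve the three behavior classes without a one-vs-rest vote.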
The beneficial effects of the invention are as follows. Because the background-difference method is used for vehicle target detection and the optical flow method for vehicle tracking, the method executes quickly with high accuracy. Because trajectory features are extracted by combining a matrix grid with bidirectional histograms, redundancy is avoided to a certain extent, the vehicle trajectory feature vector is effectively extracted and enriched, and the problem of inseparable regions is also avoided.
Brief Description of the Drawings
Fig. 1 is the basic flowchart of the machine-vision-based method of the invention for recognizing the behavior of vehicles driving in urban environments.
Fig. 2 shows the result of vehicle target detection by the background-difference method of the invention.
Fig. 3 is a schematic diagram of a vehicle motion trajectory extracted by the method of the invention.
Detailed Description of the Embodiments
The invention is further described below in conjunction with the drawings and embodiments; the invention includes, but is not limited to, the following embodiments.
The invention provides a machine-vision-based method for recognizing the behavior of vehicles driving in urban environments; its basic flowchart is shown in Fig. 1, and it comprises the following steps:
Step 1: Vehicle target detection and tracking
Vehicle target detection refers to separating vehicle targets from the video sequence images; the detection result directly affects the later stages of vehicle target tracking, behavior feature extraction, and behavior classification. Vehicle target tracking obtains the motion parameters of the vehicle target (such as position and speed) and its trajectory, enabling understanding of the target's behavior. In traffic surveillance video, factors such as weather and mutual interference between vehicle targets are the main causes affecting the accuracy of vehicle target detection and tracking.
To address these problems, target detection and extraction are first performed using the background-difference method with mixture-of-Gaussians modeling, accurately distinguishing the foreground image from the background model, extracting the foreground image, and detecting the vehicle target. The moving vehicle is then tracked with an optical flow tracking algorithm.
1. Target detection and extraction using the background-difference method with mixture-of-Gaussians modeling:
The background-difference method is one of the most basic target recognition methods. A reference image is updated according to some background model, the difference image between the current image and the reference image is computed, and moving objects are then segmented by thresholding. The method is computationally simple, and if the reference image is chosen properly it can segment moving objects accurately. Its implementation steps are as follows:
(1) Determine the background model and build the background image. The simplest background model is the time-averaged image, but over time the external lighting changes, altering the background image, so a fixed background image is suitable only when external conditions are favorable. To support long-term video surveillance, the invention adopts a background image estimation algorithm based on a Gaussian statistical model, describing the probability density of each pixel's colour with a Gaussian distribution. The algorithm consists of two parts: estimation and updating of the background image. In the estimation part, the mean μ0(x,y) and variance σ0²(x,y) of the pixel brightness of the video sequence over a period of time are first computed as μ0(x,y) = (1/N)·Σ_{i=1..N} fi(x,y) and σ0²(x,y) = (1/N)·Σ_{i=1..N} [fi(x,y) − μ0(x,y)]²; the Gaussian-distributed image B0 is formed from μ0(x,y) and σ0²(x,y), and B0 is the initial background estimation image.
Here N is the total number of frames of the sequence images in the time period selected for background initialization, 150 ≤ N ≤ 200; fi(x,y) is the pixel brightness value of the i-th frame at row x, column y; and (x,y) denotes the pixel position at row x, column y of the image.
When initialization of the background estimation image is complete, as each new frame arrives the mean μj(x,y) and variance σj²(x,y) of the background estimation image are updated according to μj(x,y) = (1−α)·μj−1(x,y) + α·fj(x,y) and σj²(x,y) = (1−α)·σj−1²(x,y) + α·[fj(x,y) − μj(x,y)]², yielding the updated background estimation image Bj for the j-th frame.
Here α is a given constant in [0,1] (the background update rate); K is the number of Gaussian components of the mixture model, 3 ≤ K ≤ 5; j ≥ 1; and fj(x,y) is the pixel brightness value of the j-th frame at row x, column y.
(2) In pixel mode, subtract the known background image from the current image to obtain the difference image, i.e., compute dj(x,y) = |fj(x,y) − Bj(x,y)| as the difference between the current frame and the current background estimation image; then binarize the difference image to obtain the detected moving-vehicle region, namely Mj(x,y) = 1 if dj(x,y) > r and Mj(x,y) = 0 otherwise,
where Mj(x,y) is any point of the difference image and r is the grey-level threshold, 50 ≤ r ≤ 60.
If Mj(x,y) = 1, the pixel (x,y) belongs to the motion region in frame j; otherwise the pixel (x,y) belongs to the background region in frame j.
2. Tracking the moving vehicle with the optical flow tracking algorithm:
The optical flow field method extracts the optical flow field from the image sequence captured in real time, screens out the moving-target regions with large optical flow, and computes the velocity vector of the moving target, thereby detecting and tracking it. The optical flow field is a two-dimensional vector field whose content is the instantaneous velocity vector of each pixel. The purpose of studying the optical flow field is to approximate, from the image sequence, the motion field that cannot be obtained directly. Optical flow can be obtained with various estimation algorithms, such as local relaxation, multi-resolution estimation, and hierarchical block matching.
The classic Horn-Schunck optical flow method is based mainly on the spatial smoothness assumption and the brightness constancy assumption. Let I(x,y,t) be the grey (brightness) value of the image at point (x,y) at time t. When the point moves to (x+dx, y+dy) at time t+dt, the brightness constancy assumption states that the brightness at image point (x+dx, y+dy) at time t+dt equals the brightness at (x,y) at time t; that is, the optical flow constraint equation is:
I(x, y, t) = I(x + dx, y + dy, t + dt)   (2)
If the image grey level is regarded as a continuous function of position and time, expanding the right-hand side of the above equation as a Taylor series and dropping the quadratic and higher-order terms yields the basic equation of the optical flow field: Ix·u + Iy·v + It = 0  (3), where Ix, Iy, and It are the partial derivatives of the image brightness with respect to x, y, and t.
Since the optical flow field produced by a single moving object should be continuous and smooth, Horn and Schunck proposed the spatial smoothness constraint. This assumption holds that in many cases the object's velocity is locally smooth, or varies slowly from point to point, with very small variation within a local region. In particular, when the target undergoes rigid motion without deformation, adjacent pixels should share the same velocity; that is, the spatial rate of change of the velocity at adjacent points is zero, which can be expressed as the minimization of (∂u/∂x)² + (∂u/∂y)² + (∂v/∂x)² + (∂v/∂y)².  (4)
Here u and v are the instantaneous velocity components in the x and y directions of the pixel at (x,y) on the image plane at time t, i.e., the optical flow. Solving equations (3) and (4) jointly forms a problem of finding the extremum of the two weighted constraints.
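This weighted extremal problem has a standard form in the Horn-Schunck formulation; the following sketch (with λ denoting the regularisation weight, a symbol not used elsewhere in this text) shows the energy functional and the resulting iterative update, where ū and v̄ are local averages of the flow:

```latex
E(u,v) = \iint \left[ \left( I_x u + I_y v + I_t \right)^2
       + \lambda \left( \|\nabla u\|^2 + \|\nabla v\|^2 \right) \right] dx\,dy
\\[4pt]
u^{k+1} = \bar{u}^{k} - \frac{I_x \left( I_x \bar{u}^{k} + I_y \bar{v}^{k} + I_t \right)}
                              {\lambda + I_x^2 + I_y^2},
\qquad
v^{k+1} = \bar{v}^{k} - \frac{I_y \left( I_x \bar{u}^{k} + I_y \bar{v}^{k} + I_t \right)}
                              {\lambda + I_x^2 + I_y^2}
```

Larger λ favours the smoothness term (4) over the data term (3), yielding a smoother flow field.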
Optical flow carries rich information about object motion and the three-dimensional structure of the scene, so the method can be used not only for moving-target detection but even directly for moving-target tracking, and it can correctly detect moving targets even when the camera itself is moving. In practice, however, owing to multiple light sources, occlusion, noise, and other factors, the grey-level conservation assumption underlying the basic optical flow equation often cannot be satisfied, so the correct optical flow field cannot be solved; moreover, most optical flow computation methods are quite complex and computationally expensive, and cannot meet real-time requirements.
The invention therefore first extracts corner features from the moving-vehicle region detected in each frame, then tracks the corners across all video frames with the pyramid Lucas-Kanade sparse optical flow algorithm, obtaining the motion trajectory of the moving vehicle.
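A single-window, single-level Lucas-Kanade step can be sketched in NumPy as follows; the pyramid wrapper and the corner detector are omitted, and the synthetic blob image, window size, and 0.3-pixel shift are assumptions for illustration:

```python
import numpy as np

def lucas_kanade_point(I1, I2, row, col, win=7):
    """Estimate the flow (u, v) at one pixel by least squares over a window.
    Solves A x = b with A = [[sum Ix^2, sum IxIy], [sum IxIy, sum Iy^2]]
    and b = -[sum IxIt, sum IyIt]."""
    Iy, Ix = np.gradient(I1)            # np.gradient returns d/drow, d/dcol
    It = I2 - I1
    h = win // 2
    sl = (slice(row - h, row + h + 1), slice(col - h, col + h + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    return np.linalg.solve(A, b)        # (u, v): flow along columns, rows

# Synthetic pair: a Gaussian blob translated by 0.3 pixels along x (columns).
yy, xx = np.mgrid[0:41, 0:41].astype(float)
blob = lambda cx: np.exp(-((xx - cx) ** 2 + (yy - 20.0) ** 2) / 18.0)
I1, I2 = blob(20.0), blob(20.3)
u, v = lucas_kanade_point(I1, I2, 20, 20)
```

Because the shift is pure translation along x, the recovered flow is approximately (0.3, 0); in the full pyramid scheme this local solve is repeated from coarse to fine resolution so that larger displacements can be handled.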
Step 2: Vehicle trajectory feature extraction
The most basic data form of a vehicle trajectory is its position coordinates, from which information such as the target vehicle's speed and direction can be computed. Based on the motion trajectory of the moving vehicle obtained by the target detection and tracking of Step 1, a trajectory feature extraction method combining a matrix grid with bidirectional histograms constructs the vehicle trajectory feature vector, providing the feature basis for vehicle behavior classification. Specifically:
(1) Taking the trajectory coordinates of the moving vehicle as reference, construct the vehicle trajectory coordinate system Oxy with the trajectory abscissa as the x-axis and the trajectory ordinate as the y-axis. Find the maximum xmax and minimum xmin of all points along the x-axis and the maximum ymax and minimum ymin along the y-axis, create in Oxy a matrix grid spanning the width xmax − xmin and the height ymax − ymin, and assign each sub-grid cell the initial value 0;
(2) Assign values to the sub-grid cells through which the trajectory passes: proceeding first along the positive x-axis and then along the positive y-axis, assign increasing weight values 1, 2, 3, … in order, obtaining the trajectory coordinate matrix;
(3) Construct histograms over the rows and columns of the trajectory coordinate matrix and use the bidirectional histograms to estimate the basic behavior trend of the vehicle target, i.e., judge from the overall trend of the row and column histograms: if the histogram decreases from left to right, the vehicle tends to turn right; if it increases from left to right, the vehicle tends to turn left; if it decreases from the middle toward both sides, the trajectory tends to go straight. The behavior trend of the moving vehicle is thus obtained as left turn, right turn, or going straight;
where the abscissa of a histogram denotes the row or column index of the trajectory coordinate matrix, and its ordinate counts the number of non-zero matrix elements in that row or column;
(4) The behavior trend of the moving vehicle and the trajectory coordinate matrix together constitute the trajectory feature vector of the moving vehicle.
Step 3: Vehicle behavior training
The support vector machine (SVM) is a classification model based on maximizing the margin between classes. Because SVMs perform well on small-sample, high-dimensional classification problems, and their training and testing phases are relatively simple, they are widely used in pattern recognition and machine learning. Since a single SVM classifier can only separate its input into two classes, while the features to be distinguished often fall into more than two, the invention adopts a two-layer classifier structure capable of recognizing three classes.
Before using the SVM for vehicle behavior recognition, the two-layer SVM classifier structure is first trained on vehicle trajectory samples to obtain recognition results for the sample data. Specifically:
(1) Create trajectory samples: import the simulated background of a pre-captured video with an image generation tool, and create 100 trajectory samples for each of the three types: going straight, turning left, and turning right;
(2) Extract trajectory features from the trajectory samples using the feature extraction method of Step 2, obtaining the trajectory feature vectors of each type of trajectory sample;
(3) Train the two-layer SVM classifier on the trajectory feature vectors of the samples, obtaining the recognition results for the sample data and the trained SVM classifier.
Step 4: Vehicle behavior recognition
With the SVM classifier trained in Step 3, perform vehicle behavior recognition on the trajectory feature vector of the moving vehicle obtained in Step 2, finally identifying the vehicle behavior as a left turn, a right turn, or going straight.
In this embodiment, the method is simulated on the VS2010 platform with OpenCV, implementing vehicle detection, tracking, and behavior recognition. The test results show that vehicle targets can be detected and tracked with the background-difference detection method and the optical flow algorithm (Fig. 2), that vehicle trajectory features can be extracted by combining the matrix grid with bidirectional histograms (Fig. 3), and that vehicle behaviors can be classified and recognized with the SVM-based classification algorithm, with accurate experimental results.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710027523.9A | 2017-01-16 | 2017-01-16 | A machine-vision-based urban-environment driving vehicle behavior recognition method |

| Publication Number | Publication Date |
|---|---|
| CN106875424A | 2017-06-20 |
| CN106875424B | 2019-09-24 |

Status: Expired - Fee Related.
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108229407A (en)* | 2018-01-11 | 2018-06-29 | 武汉米人科技有限公司 | A kind of behavioral value method and system in video analysis |
| CN110533687B (en)* | 2018-05-11 | 2023-09-12 | 上海美城智能科技有限公司 | Multi-target three-dimensional track tracking method and device |
| CN108830246B (en)* | 2018-06-25 | 2022-02-15 | 中南大学 | Multi-dimensional motion feature visual extraction method for pedestrians in traffic environment |
| CN109165602B (en)* | 2018-08-27 | 2023-05-19 | 成都华安视讯科技有限公司 | Black smoke vehicle detection method based on video analysis |
| CN111105437B (en)* | 2018-10-29 | 2024-03-29 | 西安宇视信息科技有限公司 | Vehicle track abnormality judging method and device |
| CN109815856A (en)* | 2019-01-08 | 2019-05-28 | 深圳中兴网信科技有限公司 | Status indication method, system and the computer readable storage medium of target vehicle |
| US11260852B2 (en)* | 2019-03-26 | 2022-03-01 | GM Global Technology Operations LLC | Collision behavior recognition and avoidance |
| CN110414375B (en)* | 2019-07-08 | 2020-07-17 | 北京国卫星通科技有限公司 | Low-altitude target identification method and device, storage medium and electronic equipment |
| CN110675592B (en)* | 2019-08-16 | 2021-10-08 | 重庆特斯联智慧科技股份有限公司 | High-altitude parabolic early warning protection system based on target identification and control method |
| CN110782485A (en)* | 2019-10-31 | 2020-02-11 | 广东泓胜科技股份有限公司 | Vehicle lane change detection method and device |
| CN111126144B (en)* | 2019-11-20 | 2021-10-12 | 浙江工业大学 | Vehicle track abnormity detection method based on machine learning |
| CN110954893A (en)* | 2019-12-23 | 2020-04-03 | 山东师范大学 | Method and system for motion recognition behind wall based on wireless router |
| CN113538891A (en)* | 2020-04-17 | 2021-10-22 | 无锡锦铖人工智能科技有限公司 | Intelligent vehicle counting system |
| CN111754550B (en)* | 2020-06-12 | 2023-08-11 | 中国农业大学 | A method and device for detecting dynamic obstacles in the state of agricultural machinery movement |
| CN111914627A (en)* | 2020-06-18 | 2020-11-10 | 广州杰赛科技股份有限公司 | A vehicle identification and tracking method and device |
| CN111862142B (en)* | 2020-07-23 | 2024-08-02 | 软通智慧科技有限公司 | Motion trail generation method, device, equipment and medium |
| CN112037250B (en)* | 2020-07-27 | 2024-04-05 | 国网四川省电力公司 | Target vehicle vector track tracking and engineering view modeling method and device |
| CN112085063B (en)* | 2020-08-10 | 2023-10-13 | 深圳市优必选科技股份有限公司 | Target identification method, device, terminal equipment and storage medium |
| CN112084970A (en)* | 2020-09-14 | 2020-12-15 | 西安莱奥信息科技有限公司 | Vehicle identification method and device based on machine vision |
| CN112183252B (en)* | 2020-09-15 | 2024-09-10 | 珠海格力电器股份有限公司 | Video motion recognition method, device, computer equipment and storage medium |
| CN112258462A (en)* | 2020-10-13 | 2021-01-22 | 广州杰赛科技股份有限公司 | Vehicle detection method and device and computer readable storage medium |
| CN112132869B (en)* | 2020-11-02 | 2024-07-19 | 中远海运科技股份有限公司 | Vehicle target track tracking method and device |
| CN112418213A (en)* | 2020-11-06 | 2021-02-26 | 北京航天自动控制研究所 | Vehicle driving track identification method and device and storage medium |
| CN112416599B (en)* | 2020-12-03 | 2023-03-24 | 腾讯科技(深圳)有限公司 | Resource scheduling method, device, equipment and computer readable storage medium |
| CN113077494A (en)* | 2021-04-10 | 2021-07-06 | 山东沂蒙交通发展集团有限公司 | Intelligent road-surface obstacle recognition device based on vehicle trajectory |
| CN116071688A (en)* | 2023-03-06 | 2023-05-05 | 台州天视智能科技有限公司 | Behavior analysis method and device for vehicle, electronic equipment and storage medium |
| CN117555333A (en)* | 2023-11-21 | 2024-02-13 | 深圳云程科技有限公司 | Dynamic travel track processing system and method |
| CN118537819B (en)* | 2024-07-25 | 2024-10-11 | 中国海洋大学 | Low-calculation-force frame difference method road vehicle visual identification method, medium and system |
| CN120547309A (en)* | 2025-07-28 | 2025-08-26 | 厦门数翼达信息科技有限公司 | Multi-scenario IoT off-site dynamic inspection adaptive monitoring system |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104658006A (en)* | 2013-11-22 | 2015-05-27 | 上海宝康电子控制工程有限公司 | Method for achieving vehicle tracking based on variable split beam stream |
| CN104951764A (en)* | 2015-06-17 | 2015-09-30 | 浙江工业大学 | Identification method for behaviors of high-speed vehicle based on secondary spectrum clustering and HMM (Hidden Markov Model)-RF (Random Forest) hybrid model |
| Title |
|---|
| Xuan Nie et al., "A New Method on Vehicle Behavior Analysis of Intelligent Transportation Monitoring", LISS 2014, 2015-04-21, pp. 741-746* |
| Publication number | Publication date |
|---|---|
| CN106875424A (en) | 2017-06-20 |
| Publication | Publication Date | Title |
|---|---|---|
| CN106875424B (en) | Machine vision-based method for recognizing the behavior of vehicles driving in urban environments | |
| Ke et al. | Multi-dimensional traffic congestion detection based on fusion of visual features and convolutional neural network | |
| CN102768804B (en) | Video-based traffic information acquisition method | |
| CN102542289B (en) | Pedestrian volume statistical method based on plurality of Gaussian counting models | |
| CN102629385B (en) | A target matching and tracking system and method based on multi-camera information fusion | |
| CN107122736B (en) | A method and device for predicting human body orientation based on deep learning | |
| CN103489199B (en) | video image target tracking processing method and system | |
| CN102385690B (en) | Target tracking method and system based on video image | |
| CN103310444B (en) | A people-counting monitoring method based on an overhead camera | |
| CN117949942B (en) | Target tracking method and system based on fusion of radar data and video data | |
| CN104318263A (en) | Real-time high-precision people stream counting method | |
| CN102663429A (en) | Method for motion pattern classification and action recognition of moving target | |
| CN106127812B (en) | A video-surveillance-based passenger flow statistics method for non-gated areas of passenger stations | |
| CN106408594A (en) | Video multi-target tracking method based on multi-Bernoulli characteristic covariance | |
| CN106570490B (en) | A real-time pedestrian tracking method based on fast clustering | |
| CN101408983A (en) | Multi-object tracking method based on particle filtering and movable contour model | |
| CN103761747B (en) | Target tracking method based on weighted distribution field | |
| CN104200199A (en) | TOF (Time of Flight) camera based bad driving behavior detection method | |
| CN106778484A (en) | Moving vehicle tracking under traffic scene | |
| CN106780564A (en) | An anti-interference contour tracking method based on a model prior | |
| CN119339302B (en) | A method, device and medium for inter-frame image segmentation based on recursive neural network | |
| Wang et al. | Pedestrian abnormal event detection based on multi-feature fusion in traffic video | |
| Qing et al. | A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation | |
| CN106056078A (en) | Crowd density estimation method based on multi-feature regression ensemble learning | |
| Khan | Estimating Speeds and Directions of Pedestrians in Real-Time Videos: A solution to Road-Safety Problem. |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2019-09-24; Termination date: 2020-01-16 |