CN106778484A - Moving vehicle tracking under traffic scene - Google Patents

Moving vehicle tracking under traffic scene

Info

Publication number
CN106778484A
CN106778484A (application CN201611030765.5A)
Authority
CN
China
Prior art keywords
target
tracking
vehicle
moving
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201611030765.5A
Other languages
Chinese (zh)
Inventor
陈锡清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanning Haofa Technology Co Ltd
Original Assignee
Nanning Haofa Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanning Haofa Technology Co Ltd
Priority to CN201611030765.5A
Publication of CN106778484A
Legal status: Withdrawn

Abstract

The invention discloses a moving vehicle tracking method in a traffic scene, comprising the following steps. S1: acquire front-end video images, pre-process the images, and detect moving vehicles as tracking objects. S2: perform motion estimation on the tracked target with a Kalman filter, establishing a motion state model and predicting the target's position in the current frame from its historical motion information. S3: using the colour histogram of the tracked target saved in the previous frame, compute the back projection within the prediction range given by the Kalman filter, and search for the moving target with the Camshift algorithm. S4: after marking the moving target, judge whether target positions overlap; if occlusion occurs, update only the target's position information and not its histogram; if there is no occlusion, update both the motion state and the corresponding histogram. S5: take the updated target as the tracking object for the next frame and repeat the above process.

Description

Moving vehicle tracking method in traffic scene
Technical Field
The invention relates to a method for tracking a moving vehicle in a traffic scene.
Background
Along with the continuous acceleration of the urbanization process, the development of the transportation industry and the increase of the automobile holding amount bring great convenience to the working and traveling of people. However, problems follow, urban road construction is seriously delayed, urban traffic management experience is insufficient, and the passing capacity of a road network cannot meet the requirement of traffic volume increase. Traffic jam is becoming more serious and traffic accidents are occurring frequently, which is just the common problem faced by all countries in the world.
In order to solve various problems in urban traffic and meet the ever-increasing traffic demands, the traditional solution is to strengthen the construction of urban traffic infrastructure by continuously constructing more roads. Although the problems are relieved to a certain extent and smooth urban traffic is guaranteed, road resources available for extension are limited, the method is difficult to solve the problems encountered at present substantially, and various traffic accidents are still in continuous occurrence. With the continuous development and innovation of scientific technology, people begin to consider the utilization of technologies such as computer vision and the like to improve the existing urban road traffic and build a more convenient, efficient, safe and unblocked traffic management system, thereby obviously improving the transportation and management capacity of a traffic network. Intelligent transportation systems have been developed in this situation.
The intelligent transportation system (ITS) is a hot spot and frontier of transportation development worldwide. Research in the intelligent transportation field mainly covers vehicle detection, vehicle tracking, vehicle information extraction and vehicle behaviour analysis. Vehicle detection and tracking, as core links of an intelligent transportation system, provide an important guarantee for the subsequent information extraction and behaviour analysis.
The main purpose of the vehicle detection and tracking technology is to accurately extract a vehicle target in a video image, realize matching by utilizing the characteristic information of a vehicle, determine the position of the target in each frame of image, and provide a motion track as the basis for vehicle behavior analysis. However, the actual traffic scene is very complex, and various interference factors such as human-vehicle mixing, traffic jam, light change and the like generally exist, which brings great difficulty to vehicle detection and tracking. The main difficulties of vehicle detection and tracking are the following:
1. Mixed pedestrian and vehicle traffic. On road sections with heavy pedestrian flow, such as city centres and residential districts, vehicle detection and tracking suffer strong interference from passing pedestrians; at rush hour in particular, pedestrians and vehicles often mix at traffic-light intersections. Effectively distinguishing pedestrians from vehicles and avoiding interference from pedestrian flow is one of the main problems currently faced.
2. Vehicle occlusion. On highways traffic moves fast and vehicles are widely spaced, so vehicle detection and tracking are relatively easy. On urban road sections, however, vehicles generally move slowly, traffic jams arise easily, especially at peak hours, and obvious occlusion between vehicles poses a great challenge to vehicle detection and tracking.
3. Illumination changes. Lighting conditions in a traffic scene change markedly over time; vehicle shadows produced by illumination changes strongly affect detection, and vehicle feature information differs greatly between daytime and night-time lighting. Handling illumination changes effectively and working stably around the clock is a basic requirement for vehicle detection and tracking in traffic scenes.
4. Algorithm complexity. In practical application, an electronic police system places high demands on real-time performance, so the algorithm cannot be too complex.
Disclosure of Invention
The invention aims to provide a moving vehicle tracking method in a traffic scene.
The method for tracking the moving vehicle in the traffic scene comprises the following steps:
s1: acquiring a front-end video image through a camera, preprocessing the image, and detecting a target of a moving vehicle as a tracking object;
s2: carrying out motion estimation on a tracked target by using a Kalman filter, and predicting the position of the tracked target in a current frame according to historical motion information of the tracked target by establishing a motion state model;
s3: Camshift target tracking: according to the colour histogram of the tracked target saved in the previous frame, calculate the back projection within the prediction range given by the Kalman filter, and search for the moving target using the Camshift algorithm;
s4: after the moving target is marked, judging whether the target position is overlapped, if the target shielding phenomenon exists, only updating the position information of the target, not updating the histogram, and if the target shielding condition does not exist, updating the moving state and the corresponding histogram at the same time;
s5: and taking the updated target as a tracking object of the next frame, and repeatedly executing the process.
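The occlusion rule in step S4 reduces to an axis-aligned rectangle overlap test on the target boxes. A minimal Python sketch (the dictionary layout and function names are illustrative, not from the patent):

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test for two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def update_track(track, new_box, new_hist, other_boxes):
    """Step S4: if the new box overlaps another target, update only the
    position; otherwise update both position and colour histogram."""
    occluded = any(rects_overlap(new_box, o) for o in other_boxes)
    track["box"] = new_box
    if not occluded:
        track["hist"] = new_hist
    return track
```

Freezing the histogram during overlap keeps the appearance model from being contaminated by the occluding vehicle, so tracking can recover once the targets separate.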
Further, the specific method for detecting the moving vehicle target is as follows:
s1-1: extracting a large number of vehicle images from the video images as positive samples, extracting non-vehicle images as negative samples, and extracting Haar-like rectangular features from training samples as training feature sets;
s1-2: assume the sample space is X and the label set is Y = {0, 1}, where 0 denotes non-vehicle and 1 denotes vehicle; let the total number of Haar-like features be N, and let w_{t,i} denote the weight of the i-th sample in round t;
s1-3: the strong classifier is trained as follows:
(1) given a series of training samples (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), assuming the n samples in the sample library are uniformly distributed, the initial sample weights are w_{1,i} = 1/n;
(2) for t = 1 to T:
1) normalize the sample weight distribution: w_{t,i} ← w_{t,i} / Σ_{j=1..n} w_{t,j};
2) for each feature j, train a weak classifier h_{t,j}(x) under the given weights w_{t,i} and calculate its classification error rate ε_{t,j} = Σ_i w_{t,i}·|h_{t,j}(x_i) − y_i|;
3) select the optimal weak classifier h_t(x) from the weak classifiers: let k = arg min_j ε_{t,j}; then h_t(x) = h_{t,k}(x), and the classification error rate on the sample set is ε_t = ε_{t,k};
4) update the sample weights according to the classification error rate of the current round: w_{t+1,i} = w_{t,i}·β_t^(1−e_i), where β_t = ε_t/(1 − ε_t), e_i = 0 denotes correct classification and e_i = 1 denotes a classification error; the final strong classifier is H(x) = 1 if Σ_{t=1..T} α_t·h_t(x) ≥ (1/2)·Σ_{t=1..T} α_t and H(x) = 0 otherwise, where α_t = log(1/β_t);
s1-4: and scanning windows with different scales on the image to be detected, and finally outputting all detected vehicle targets.
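Steps (1)-(4) can be sketched in Python with simple threshold stumps standing in for the Haar-like weak classifiers (a toy illustration of the boosting loop, not the patent's feature set):

```python
import math

def train_adaboost(X, y, T):
    """AdaBoost per steps (1)-(4): normalize weights, pick the weak
    classifier with the lowest weighted error, and down-weight correctly
    classified samples by beta_t = eps_t / (1 - eps_t)."""
    n = len(X)
    w = [1.0 / n] * n                              # (1) w_{1,i} = 1/n
    chosen = []                                    # (threshold, polarity, alpha_t)
    for _ in range(T):
        total = sum(w)
        w = [wi / total for wi in w]               # 1) normalize weights
        best = None
        for thr in sorted(set(X)):                 # 2) candidate weak classifiers
            for pol in (1, -1):
                preds = [1 if pol * (x - thr) >= 0 else 0 for x in X]
                eps = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                if best is None or eps < best[0]:
                    best = (eps, thr, pol, preds)
        eps, thr, pol, preds = best                # 3) optimal h_t with error eps_t
        eps = max(eps, 1e-10)                      # avoid division by zero
        beta = eps / (1.0 - eps)
        w = [wi * (beta if p == yi else 1.0)       # 4) w_{t+1,i} = w_{t,i} * beta^(1-e_i)
             for wi, p, yi in zip(w, preds, y)]
        chosen.append((thr, pol, math.log(1.0 / beta)))

    def strong(x):                                 # H(x): weighted vote of the h_t
        s = sum(a for thr, pol, a in chosen if pol * (x - thr) >= 0)
        return 1 if s >= 0.5 * sum(a for _, _, a in chosen) else 0
    return strong
```

For example, `train_adaboost([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1], 3)` returns a strong classifier that separates the two clusters; in the patent's setting the stumps would instead threshold Haar-like feature responses.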
Further, a specific method for performing motion estimation on the tracking target by using a Kalman filter is as follows:
s2-1: the Kalman filtering algorithm model comprises a state equation and an observation equation:
S(n)=A(n)S(n-1)+W(n-1),
X(n)=C(n)S(n)+V(n),
where S(n) and X(n) are respectively the state vector and observation vector at time n, A(n) is the state transition matrix, C(n) is the observation matrix, and W(n) and V(n) are the state noise and observation noise, both uncorrelated zero-mean white Gaussian noise;
s2-2: taking the centre point of the vehicle target rectangle as the prediction object, motion state vectors X_x and X_y of the moving target's centre point are established:
X_x = [s_x, v_x, a_x]^T, X_y = [s_y, v_y, a_y]^T,
where s_x, s_y, v_x, v_y, a_x, a_y denote the position, velocity and acceleration of the vehicle target in the horizontal and vertical directions respectively;
s2-3: taking the frame interval as the unit of time, the equations of motion of the tracked target's centre point in the horizontal direction are:
s_x(n) = s_x(n−1) + v_x(n−1) + a_x(n−1)/2,
v_x(n) = v_x(n−1) + a_x(n−1),
a_x(n) = a_x(n−1) + o_x(n−1),
where s_x(n), v_x(n), a_x(n) denote the position, velocity and acceleration of the target centre point at time n, and o_x(n−1) is white noise;
rewriting the above formulas in matrix form:
[s_x(n), v_x(n), a_x(n)]^T = [[1, 1, 1/2], [0, 1, 1], [0, 0, 1]] · [s_x(n−1), v_x(n−1), a_x(n−1)]^T + [0, 0, o_x(n−1)]^T;
the only motion state component that can be observed is the position of the moving target:
x(n) = s_x(n) + V(n);
s2-4: comparing with the state equation and observation equation of the Kalman filter in step S2-1, the state equation and observation equation of the tracked target's centre point are obtained with
A(n) = [[1, 1, 1/2], [0, 1, 1], [0, 0, 1]] and C(n) = [1 0 0].
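Under this constant-acceleration model (state [s, v, a]^T, unit frame interval), the predict/correct cycle can be sketched with NumPy; the noise covariances Q and R below are assumed values for illustration, not given in the patent:

```python
import numpy as np

# Constant-acceleration Kalman filter for one axis, matching A(n) and
# C(n) = [1 0 0] from step S2-4 (frame interval taken as 1).
A = np.array([[1.0, 1.0, 0.5],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0]])
Q = np.eye(3) * 1e-2   # state-noise covariance (assumed value)
R = np.array([[1.0]])  # observation-noise covariance (assumed value)

def predict(S, P):
    """Time update: project the state and covariance one frame ahead."""
    return A @ S, A @ P @ A.T + Q

def correct(S, P, z):
    """Measurement update with the observed position z of the target centre."""
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # Kalman gain
    S = S + K @ (np.array([[z]]) - C @ S)
    P = (np.eye(3) - K @ C) @ P
    return S, P
```

In the tracking loop, `predict` supplies the search region for Camshift (step S3), and the Camshift-measured centre is fed back to `correct` each frame; the same filter is run independently for the x and y axes.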
further, the concrete flow of the Camshift algorithm is as follows:
s3-1: initializing a search window to enable a target to be tracked to be in the search window;
s3-2: extracting H components at the corresponding positions of the windows in the HSV space to obtain H component histograms, and calculating a color probability distribution map, namely a reverse projection map, of the whole tracking area according to the H component histograms;
s3-3: selecting a search window with the same size as the initial window in the reverse projection graph;
s3-4: adjusting the size of the window according to the pixel sum S in the search window, and moving the center of the window to the position of the mass center;
s3-5: judging whether convergence is achieved or not, outputting the centroid (x, y) if convergence is achieved, otherwise, repeating the steps S3-3 and S3-4 until convergence is achieved or the maximum iteration number is achieved;
s3-6: and taking the position and the size of the finally obtained search window as the initial window of the next frame, and continuously executing circulation.
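The core of steps S3-3 to S3-5 is a mean-shift iteration over the back-projection map. A NumPy sketch with a fixed-size window (Camshift additionally re-sizes the window from the zeroth moment each pass; the function and parameter names are illustrative, and the window is assumed to stay inside the image):

```python
import numpy as np

def mean_shift(prob, win, max_iter=20, eps=1.0):
    """Steps S3-3 to S3-5: shift a window (x, y, w, h) to the centroid of
    the back-projection values under it until the move is below eps."""
    x, y, w, h = win
    for _ in range(max_iter):
        roi = prob[y:y + h, x:x + w]
        m00 = roi.sum()                  # zeroth moment (pixel sum S)
        if m00 == 0:
            break                        # no probability mass under the window
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (xs * roi).sum() / m00      # centroid inside the window
        cy = (ys * roi).sum() / m00
        nx = int(round(x + cx - w / 2))  # recentre the window on the centroid
        ny = int(round(y + cy - h / 2))
        if abs(nx - x) < eps and abs(ny - y) < eps:
            break                        # converged
        x, y = nx, ny
    return x, y, w, h
```

OpenCV's `cv2.calcBackProject` and `cv2.CamShift` implement the full pipeline (H-component histogram, back projection, adaptive window) for real use.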
The invention has the beneficial effects that:
1) the vehicle detection algorithm based on the Haar-like features and the Adaboost classifier can obtain a reliable vehicle classifier through enriching training samples, better adapts to complex changes in traffic scenes, has extremely high detection rate and low false alarm rate, and can meet the actual working requirements of an electronic police system;
2) the invention realizes the tracking of the vehicle target by adopting the idea of combining the Camshift tracking method based on the target color information and the Kalman tracking method based on the motion information prediction, and has better tracking effect.
Detailed Description
The following specific examples further illustrate the invention but are not intended to limit the invention thereto.
The method for tracking the moving vehicle in the traffic scene comprises the following steps:
s1: acquiring a front-end video image through a camera, preprocessing the image, and detecting a target of a moving vehicle as a tracking object;
s2: carrying out motion estimation on a tracked target by using a Kalman filter, and predicting the position of the tracked target in a current frame according to historical motion information of the tracked target by establishing a motion state model;
s3: Camshift target tracking: according to the colour histogram of the tracked target saved in the previous frame, calculate the back projection within the prediction range given by the Kalman filter, and search for the moving target using the Camshift algorithm;
s4: after the moving target is marked, judging whether the target position is overlapped, if the target shielding phenomenon exists, only updating the position information of the target, not updating the histogram, and if the target shielding condition does not exist, updating the moving state and the corresponding histogram at the same time;
s5: and taking the updated target as a tracking object of the next frame, and repeatedly executing the process.
The specific method for detecting the moving vehicle target is as follows:
s1-1: extracting a large number of vehicle images from the video images as positive samples, extracting non-vehicle images as negative samples, and extracting Haar-like rectangular features from training samples as training feature sets;
s1-2: assume the sample space is X and the label set is Y = {0, 1}, where 0 denotes non-vehicle and 1 denotes vehicle; let the total number of Haar-like features be N, and let w_{t,i} denote the weight of the i-th sample in round t;
s1-3: the strong classifier is trained as follows:
(1) given a series of training samples (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), assuming the n samples in the sample library are uniformly distributed, the initial sample weights are w_{1,i} = 1/n;
(2) for t = 1 to T:
1) normalize the sample weight distribution: w_{t,i} ← w_{t,i} / Σ_{j=1..n} w_{t,j};
2) for each feature j, train a weak classifier h_{t,j}(x) under the given weights w_{t,i} and calculate its classification error rate ε_{t,j} = Σ_i w_{t,i}·|h_{t,j}(x_i) − y_i|;
3) select the optimal weak classifier h_t(x) from the weak classifiers: let k = arg min_j ε_{t,j}; then h_t(x) = h_{t,k}(x), and the classification error rate on the sample set is ε_t = ε_{t,k};
4) update the sample weights according to the classification error rate of the current round: w_{t+1,i} = w_{t,i}·β_t^(1−e_i), where β_t = ε_t/(1 − ε_t), e_i = 0 denotes correct classification and e_i = 1 denotes a classification error; the final strong classifier is H(x) = 1 if Σ_{t=1..T} α_t·h_t(x) ≥ (1/2)·Σ_{t=1..T} α_t and H(x) = 0 otherwise, where α_t = log(1/β_t);
s1-4: and scanning windows with different scales on the image to be detected, and finally outputting all detected vehicle targets.
The specific method for performing motion estimation on the tracking target by using the Kalman filter is as follows:
s2-1: the Kalman filtering algorithm model comprises a state equation and an observation equation:
S(n)=A(n)S(n-1)+W(n-1),
X(n)=C(n)S(n)+V(n),
where S(n) and X(n) are respectively the state vector and observation vector at time n, A(n) is the state transition matrix, C(n) is the observation matrix, and W(n) and V(n) are the state noise and observation noise, both uncorrelated zero-mean white Gaussian noise;
s2-2: taking the centre point of the vehicle target rectangle as the prediction object, motion state vectors X_x and X_y of the moving target's centre point are established:
X_x = [s_x, v_x, a_x]^T, X_y = [s_y, v_y, a_y]^T,
where s_x, s_y, v_x, v_y, a_x, a_y denote the position, velocity and acceleration of the vehicle target in the horizontal and vertical directions respectively;
s2-3: taking the frame interval as the unit of time, the equations of motion of the tracked target's centre point in the horizontal direction are:
s_x(n) = s_x(n−1) + v_x(n−1) + a_x(n−1)/2,
v_x(n) = v_x(n−1) + a_x(n−1),
a_x(n) = a_x(n−1) + o_x(n−1),
where s_x(n), v_x(n), a_x(n) denote the position, velocity and acceleration of the target centre point at time n, and o_x(n−1) is white noise;
rewriting the above formulas in matrix form:
[s_x(n), v_x(n), a_x(n)]^T = [[1, 1, 1/2], [0, 1, 1], [0, 0, 1]] · [s_x(n−1), v_x(n−1), a_x(n−1)]^T + [0, 0, o_x(n−1)]^T;
the only motion state component that can be observed is the position of the moving target:
x(n) = s_x(n) + V(n);
s2-4: comparing with the state equation and observation equation of the Kalman filter in step S2-1, the state equation and observation equation of the tracked target's centre point are obtained with
A(n) = [[1, 1, 1/2], [0, 1, 1], [0, 0, 1]] and C(n) = [1 0 0].
the concrete flow of the Camshift algorithm is as follows:
s3-1: initializing a search window to enable a target to be tracked to be in the search window;
s3-2: extracting H components at the corresponding positions of the windows in the HSV space to obtain H component histograms, and calculating a color probability distribution map, namely a reverse projection map, of the whole tracking area according to the H component histograms;
s3-3: selecting a search window with the same size as the initial window in the reverse projection graph;
s3-4: adjusting the size of the window according to the pixel sum S in the search window, and moving the center of the window to the position of the mass center;
s3-5: judging whether convergence is achieved or not, outputting the centroid (x, y) if convergence is achieved, otherwise, repeating the steps S3-3 and S3-4 until convergence is achieved or the maximum iteration number is achieved;
s3-6: and taking the position and the size of the finally obtained search window as the initial window of the next frame, and continuously executing circulation.

Claims (4)

CN201611030765.5A · filed 2016-11-16 · priority 2016-11-16 · Moving vehicle tracking under traffic scene · Withdrawn · CN106778484A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201611030765.5A | 2016-11-16 | 2016-11-16 | Moving vehicle tracking under traffic scene

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201611030765.5A | 2016-11-16 | 2016-11-16 | Moving vehicle tracking under traffic scene

Publications (1)

Publication Number | Publication Date
CN106778484A (en) | 2017-05-31

Family

ID=58971792

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201611030765.5A | Moving vehicle tracking under traffic scene (CN106778484A, withdrawn) | 2016-11-16 | 2016-11-16

Country Status (1)

CountryLink
CN: CN106778484A (en)


Citations (2)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN102737385A (en)* | 2012-04-24 | 2012-10-17 | 中山大学 | Video target tracking method based on CAMSHIFT and Kalman filtering
CN104866823A (en)* | 2015-05-11 | 2015-08-26 | 重庆邮电大学 | Vehicle detection and tracking method based on monocular vision


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Zhipeng: "Vehicle Detection and Tracking in an Embedded Electronic Police System", China Master's Theses Full-text Database, Information Science and Technology series*

Cited By (9)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
TWI675580B* | 2017-07-27 | 2019-10-21 | 香港商阿里巴巴集團服務有限公司 | Method and device for user authentication based on feature information
CN108022589A (en)* | 2017-10-31 | 2018-05-11 | 努比亚技术有限公司 | Target-domain classifier training method, sample recognition method, terminal and storage medium
CN108776974A (en)* | 2018-05-24 | 2018-11-09 | 南京行者易智能交通科技有限公司 | Real-time target tracking method suitable for public transport scenes
CN109934162A (en)* | 2019-03-12 | 2019-06-25 | 哈尔滨理工大学 | Face image recognition and video clip interception method based on Struck tracking algorithm
CN110032978A (en)* | 2019-04-18 | 2019-07-19 | 北京字节跳动网络技术有限公司 | Method and apparatus for processing video
CN114419106A (en)* | 2022-03-30 | 2022-04-29 | 深圳市海清视讯科技有限公司 | Vehicle violation detection method, device and storage medium
CN115174861A (en)* | 2022-07-07 | 2022-10-11 | 广州后为科技有限公司 | Method and device for automatically tracking a moving target with a pan-tilt camera
CN115174861B* | 2022-07-07 | 2023-09-22 | 广州后为科技有限公司 | Method and device for automatically tracking a moving target with a pan-tilt camera
CN119027656A (en)* | 2024-10-29 | 2024-11-26 | 开拓导航控制技术股份有限公司 | Aerial target tracking method, device, storage medium and controller

Similar Documents

Publication | Title
CN106778484A (en) | Moving vehicle tracking under traffic scene
Wei et al. | Multi-vehicle detection algorithm through combining Haar and HOG features
Mahaur et al. | Road object detection: a comparative study of deep learning-based algorithms
CN106875424B | Urban environment driving vehicle behaviour recognition method based on machine vision
Hadi et al. | Vehicle detection and tracking techniques: a concise review
Feng et al. | Mixed road user trajectory extraction from moving aerial videos based on convolution neural network detection
CN114898296B | Bus lane occupation detection method based on millimetre-wave radar and vision fusion
CN104298969B | Crowd size statistics method based on colour and Haar feature fusion
CN103942560B | High-resolution video vehicle detection method in an intelligent traffic monitoring system
CN103871079A | Vehicle tracking method based on machine learning and optical flow
CN102324183A | Vehicle detection and capture method based on composite virtual coil
CN105654073A | Automatic speed control method based on visual detection
CN107038411A | Roadside parking behaviour precise recognition method based on vehicle motion trajectories in video
CN109272482A | Urban intersection vehicle queuing detection system based on sequence images
CN104200199A | TOF camera based bad driving behaviour detection method
CN103761747B | Target tracking method based on weighted distribution field
CN103268706B | Method for detecting vehicle queue length based on local variance
Liu et al. | Effective road lane detection and tracking method using line segment detector
Xia et al. | Vehicles overtaking detection using RGB-D data
Cao et al. | Application of convolutional neural networks and image processing algorithms based on traffic video in vehicle taillight detection
Zhang et al. | Traffic sign detection algorithm based on improved YOLOv7
CN105243354A | Vehicle detection method based on target feature points
Zhou et al. | Real-time traffic light recognition based on C-HOG features
CN101261683A | Vehicle detection method based on color video
Ng et al. | Real-time detection of objects on roads for autonomous vehicles using deep learning

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WW01 | Invention patent application withdrawn after publication | Application publication date: 2017-05-31

