CN105163065B - A kind of traffic speed-measuring method based on video camera front-end processing - Google Patents

A kind of traffic speed-measuring method based on video camera front-end processing
Download PDF

Info

Publication number
CN105163065B
Authority
CN
China
Prior art keywords
vehicle
camera
speed
point
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510469779.6A
Other languages
Chinese (zh)
Other versions
CN105163065A (en)
Inventor
曹泉
何小晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN HAGONGDA TRAFFIC ELECTRONIC TECHNOLOGY Co Ltd
Original Assignee
SHENZHEN HAGONGDA TRAFFIC ELECTRONIC TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN HAGONGDA TRAFFIC ELECTRONIC TECHNOLOGY Co Ltd
Priority to CN201510469779.6A
Publication of CN105163065A
Application granted
Publication of CN105163065B
Legal status: Active

Abstract

The invention discloses a traffic speed measurement method based on camera front-end processing, comprising the following steps: (1) install a camera and calibrate it to obtain the camera parameters; (2) perform scene coordinate reconstruction on the raw video images captured by the camera, detect and track the vehicle's grounding point in the reconstructed image, and then calculate the driving distance and driving speed; alternatively, detect and track the grounding point in the original video frames, convert the multi-frame grounding-point coordinates into actual scene coordinates, and then calculate the vehicle's driving distance and driving speed. The invention achieves high-precision speed detection at low cost and has high practical value.

Description

Traffic speed measuring method based on camera front-end processing
Technical Field
The invention relates to a vehicle speed measuring method, in particular to a traffic speed measuring method based on front-end processing of a camera.
Background
To ensure road safety, detecting the speed of vehicles travelling on the road is one of the key applications in the traffic field; it concerns road traffic management and statistics as well as the determination of responsibility after accidents.
Existing technologies for detecting the speed of a moving vehicle mainly comprise radar detection, coil detection, and video detection. Although radar and coil detection offer high accuracy and stability, the high cost of their hardware or construction means they can only be installed at key locations such as major intersections and highway checkpoints.
Video speed measurement is the most economical and practical detection mode, since the same camera can simultaneously collect evidence of violations. Video speed measurement methods fall roughly into two types by principle: virtual-coil detection and three-dimensional calibration detection. The main defects of the virtual-coil method are as follows: first, the time difference can only be determined from the video frame count, so the exact moment the vehicle crosses the virtual coil's trigger position cannot be measured accurately; this systematic error is unavoidable and grows with vehicle speed. Second, determining the vehicle position from the image position of the license plate or other markers also introduces error: vehicles whose license plates appear at the same position in the image, but are mounted at different heights, can differ in actual position by several meters or even tens of meters. The main defects of the three-dimensional calibration method are as follows: first, a manually selected point must have one known dimension of its three-dimensional coordinates, and such manual interaction is difficult to make practical in real speed measurement applications; second, calibrating the reference points requires more than six points with known three-dimensional coordinates, which are difficult to obtain in practice.
Given this situation, a speed measurement method that is easy to deploy and can automatically detect the grounding point needs to be developed to meet the needs of actual work.
Disclosure of Invention
The invention aims to provide a traffic speed measurement method based on camera front-end processing that solves the problems in the prior art, reducing the difficulty of vehicle speed measurement and improving its accuracy without increasing cost.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a traffic speed measurement method based on camera front-end processing comprises the following steps:
(1) let the pitch angle t be the vertical angle between the camera's optical axis and the X-Y plane of the three-dimensional world coordinate system, let θ be the camera's field of view, and let the rotation angle p be the horizontal angle measured counterclockwise from the Y axis of the three-dimensional coordinate system to the projection of the camera's optical axis on the X-Y plane; install the camera so that the pitch angle t ≥ θ/2 and the rotation angle p > 70°, input the image coordinates of four calibration points, and calibrate the camera using the double-vanishing-point or the single-vanishing-point calibration method to obtain the camera parameters;
(2) perform scene coordinate reconstruction on the original video images captured by the camera, detect and track the vehicle's grounding point in the reconstructed image, and then calculate the driving distance and driving speed; alternatively, detect and track the grounding point in the original video frames, convert the multi-frame grounding-point coordinates into actual scene coordinates, and then calculate the vehicle's driving distance and driving speed.
In addition, in scenes with insufficient light, the flash can be fired twice to supplement the light, the vehicle's grounding point located from the two fill-light frames, and the driving distance and speed then calculated. Alternatively, the vehicle can be tracked and located through feature points on its lights, and the driving speed calculated using an estimated lamp height. Or the vehicle's speed is first estimated while it is still far away; when the estimate exceeds the scene's speed limit, the flash is triggered to take two fill-light shots of the vehicle as it enters the optimal shooting area, the grounding point is located from the two fill-light frames, and the driving distance and speed are then calculated.
Preferably, in step (2), the multi-frame grounding-point coordinates are converted into actual scene coordinates by the following formula:
preferably, the interval time between the two supplementary lights is 0.5 second.
Preferably, the running speed of the vehicle is calculated by the following formula:
preferably, the running distance and the running speed of the vehicle are calculated by an OMAPL138 chip.
Compared with the prior art, the invention has the following beneficial effects:
first, the camera installation is configured rationally, reducing shooting errors (including positioning error, jitter error, and license plate deformation) at the source, which lays a reliable foundation for accurate speed detection. Second, the camera is calibrated using the road's marking lines or a manually laid calibration template, determining the camera's working environment by a simple method. Finally, the vehicle's driving speed is obtained on an OMAPL138 chip by combining grounding-point detection and tracking with conversion to actual scene coordinates, achieving low-cost, high-precision vehicle speed detection with high practical value and market prospects.
Drawings
Fig. 1 is a camera imaging model in the present invention.
FIG. 2 is a calibration template for the double vanishing point calibration method of the present invention.
FIG. 3 is a calibration template for the single vanishing point calibration method of the present invention.
Fig. 4 shows the error of the pitch angle calculated by the two calibration methods under different rotation angle environments.
FIG. 5 shows the pitch angle calculation errors caused by different calibration error levels in the pathological condition environment according to the two calibration methods of the present invention.
FIG. 6 is a schematic diagram of the relationship between the height of the tracking point and the projection distance in the present invention.
Detailed Description
The present invention is further illustrated by the following figures and examples, which include, but are not limited to, the following examples.
Examples
As shown in fig. 1, the traffic speed measurement method based on camera front-end processing disclosed by the invention mainly uses an OMAPL138 chip to implement the algorithmic processing. The chip is a dual-core high-speed processor from TI with a C6748 floating-point DSP core and an ARM9 core; it integrates image, voice, network, and storage functions at a high performance-to-cost ratio. The C6748 core, clocked at up to 456 MHz, provides floating-point capability as well as high-performance fixed-point capability; the ARM9 core is highly flexible, allowing developers to run operating systems such as Linux and to conveniently add human-machine interfaces, networking, touch screens, and the like to their applications.
Across its operating modes, the OMAPL138 chip has a total power consumption of 440 mW, and 15 mW in standby; with abundant internal memory and peripheral resources, it meets the design requirements of a high-precision video speed measurement system and makes future system expansion and upgrades convenient.
Before actual speed measurement, the camera must be installed, and the installation greatly influences the accuracy of speed detection. The mounting height must account not only for speed measurement and positioning errors but also for construction difficulty and field of view; to reduce jitter error, the pitch angle should be as large as possible while preserving the view, so this embodiment sets the pitch angle t ≥ θ/2. The rotation angle determines whether the camera is side-mounted or front-mounted: when front-mounted, the rotation angle is close to 90°, and when side-mounted, it should also be kept as close to 90° as possible to widen the viewing angle and reduce license plate deformation; hence this embodiment requires the rotation angle p > 70°. As for the roll angle s, the camera is installed with s at approximately 0°.
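The mounting constraints above (t ≥ θ/2, p > 70°, roll near 0°) are easy to check mechanically. The sketch below is illustrative only; the function name and the roll tolerance are assumptions, not part of the patent:

```python
def installation_ok(pitch_t_deg, fov_theta_deg, rotation_p_deg, roll_s_deg,
                    roll_tol_deg=2.0):
    """Check this embodiment's mounting constraints:
    pitch t >= theta/2, rotation p > 70 degrees, roll s approximately 0.
    roll_tol_deg is an assumed tolerance (the text only says 'approximately 0')."""
    return (pitch_t_deg >= fov_theta_deg / 2.0
            and rotation_p_deg > 70.0
            and abs(roll_s_deg) <= roll_tol_deg)

# Front-mounted camera: 30 degree field of view, pitch 20, rotation near 90.
print(installation_ok(20.0, 30.0, 88.0, 0.5))   # True
print(installation_ok(10.0, 30.0, 88.0, 0.5))   # False (pitch below theta/2)
```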
After the camera is installed, it is calibrated and the coordinates are converted. As shown in fig. 1, several parameters, namely the pitch angle t, rotation angle p, roll angle s, focal length f, and camera height h, define the relationship between the image plane and the actual three-dimensional world coordinates. The pitch angle t is the vertical angle between the camera's optical axis and the X-Y plane of the three-dimensional world coordinate system; the rotation angle p is the horizontal angle measured counterclockwise from the Y axis to the projection of the camera's optical axis on the X-Y plane; the roll angle s is the rotation of the camera about its own optical axis; the focal length f is the distance along the optical axis from the image plane to the center of the camera lens; and the camera height h is the vertical height from the center of the camera lens to the X-Y plane.
Suppose Q = (X_Q, Y_Q, Z_Q) is a point in three-dimensional world coordinates, and q = (x_q, y_q) is the corresponding point in two-dimensional image coordinates. A forward mapping function Φ from a point in three-dimensional world coordinates to image coordinates can be given, in homogeneous coordinates, by:
q̃ ∝ K [R | T] Q̃
where the rotation matrix R and the translation matrix T characterize the extrinsic parameters of the camera, and the 3×3 upper-triangular matrix K characterizes the intrinsic parameters. The translation is determined by the camera position X_CAM = h·sin p·cot t, Y_CAM = h·cos p·cot t, Z_CAM = h. In K, f is the focal length, a = f_u/f_v is the aspect ratio (typically 1), σ is the skew factor (normally set to 0), and (u_0, v_0) is the coordinate at which the optical axis intersects the image plane (usually set to (0, 0)). From the above equation, expressions for x_q and y_q in terms of (X_Q, Y_Q, Z_Q) can be deduced (formulas (2) and (3)).
If the point Q lies in the X-Y plane, then Z_Q = 0, and X_Q and Y_Q can be calculated back from (x_q, y_q) by inverting the mapping (formulas (4) and (5)).
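For readers who want to experiment, here is a minimal numerical sketch of the forward mapping and its ground-plane inversion. It fixes one concrete convention (camera center at (h·sin p·cot t, h·cos p·cot t, h) with the optical axis passing through the world origin and roll s = 0, as in the text) and inverts by intersecting the viewing ray with the plane Z = 0; the patent's closed-form formulas (2) through (5) express the same relations analytically. All function names and numbers are illustrative assumptions:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def _unit(a):
    n = math.sqrt(_dot(a, a))
    return tuple(x / n for x in a)

def make_camera(h, t, p, f):
    """Pose from the text: center (h sin p cot t, h cos p cot t, h),
    optical axis pointing at the world origin, roll s = 0."""
    cot_t = math.cos(t) / math.sin(t)
    center = (h * math.sin(p) * cot_t, h * math.cos(p) * cot_t, h)
    fwd = _unit(tuple(-c for c in center))        # viewing direction, toward origin
    right = _unit(_cross(fwd, (0.0, 0.0, 1.0)))   # horizontal image axis
    up = _cross(right, fwd)                       # vertical image axis
    return center, fwd, right, up, f

def project(cam, Q):
    """Forward mapping: world point Q -> image point (x_q, y_q)."""
    center, fwd, right, up, f = cam
    v = tuple(q - c for q, c in zip(Q, center))
    depth = _dot(v, fwd)
    return (f * _dot(v, right) / depth, f * _dot(v, up) / depth)

def back_project_ground(cam, q):
    """Inverse for Z_Q = 0: intersect the viewing ray with the ground
    plane (the role played by formulas (4) and (5))."""
    center, fwd, right, up, f = cam
    d = [fw + q[0] * r / f + q[1] * u / f for fw, r, u in zip(fwd, right, up)]
    s = -center[2] / d[2]
    return (center[0] + s * d[0], center[1] + s * d[1])

cam = make_camera(h=8.0, t=math.radians(25), p=math.radians(80), f=800.0)
ground_pt = (2.0, 15.0, 0.0)              # a point on the road surface
img = project(cam, ground_pt)
print(back_project_ground(cam, img))      # recovers (2.0, 15.0) up to rounding
```

Because the frame (fwd, right, up) is orthonormal, back-projecting a projected ground point recovers it exactly, which makes the sketch easy to sanity-check.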
the calculation of the camera parameters through the calibration points is the core content of camera calibration, and for the calculation of the external parameters of the camera, the algorithm adopted by the embodiment comprises a double-vanishing-point calibration method and a single-vanishing-point calibration method.
As shown in FIG. 2, assume that in the calibration template, segment AB is parallel to segment CD, and segment AC is parallel to segment BD; the following five equations can then be listed:
since ABCD is all points on the road surface, i.e., Z is 0, substituting equations (4) and (5) into the above equations can solve the following camera parameters:
where α_PQ = x_q − x_p, β_PQ = y_q − y_p, γ_PQ = x_p·y_q − x_q·y_p for P, Q ∈ {A, B, C, D}, and (x_A, y_A), (x_B, y_B), (x_C, y_C), (x_D, y_D) are the image coordinates of the four points A, B, C, D.
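The α, β, γ shorthand above is exactly what is needed to intersect image lines and find the vanishing points that drive the calibration. Below is a small sketch with made-up image coordinates for the four template points; the patent's closed-form parameter solutions themselves are not reproduced here:

```python
def line(P, Q):
    """Homogeneous line through image points P and Q, using the shorthand
    alpha_PQ = x_q - x_p, beta_PQ = y_q - y_p, gamma_PQ = x_p*y_q - x_q*y_p.
    The line equation is beta*x - alpha*y - gamma = 0."""
    alpha, beta = Q[0] - P[0], Q[1] - P[1]
    gamma = P[0] * Q[1] - Q[0] * P[1]
    return (beta, -alpha, -gamma)

def intersect(l1, l2):
    """Intersection of two homogeneous lines (their cross product)."""
    x = l1[1] * l2[2] - l1[2] * l2[1]
    y = l1[2] * l2[0] - l1[0] * l2[2]
    w = l1[0] * l2[1] - l1[1] * l2[0]
    return (x / w, y / w)

# Illustrative image coordinates of the template corners A, B, C, D.
A, B, C, D = (120.0, 420.0), (330.0, 400.0), (160.0, 300.0), (300.0, 290.0)
v1 = intersect(line(A, B), line(C, D))   # vanishing point of AB and CD
v2 = intersect(line(A, C), line(B, D))   # vanishing point of AC and BD
print(v1, v2)
```

Since AB is parallel to CD on the road (and likewise AC to BD), each intersection in the image is a vanishing point of the corresponding world direction.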
The calibration template for the single vanishing point calibration is shown in fig. 3, and according to the calibration template, the set of equations is listed as follows:
from this set of equations, the camera parameters are solved as follows:
where α_PQ = x_q − x_p, β_PQ = y_q − y_p, γ_PQ = x_p·y_q − x_q·y_p, and
U_Q = x_q·sin s + y_q·cos s,  V_Q = x_q·cos s − y_q·sin s,
f = F / tan t  (15)
where, if f < 0, set f = −f and s = s + π; and if h < 0, set h = −h and p = p + π.
The double-vanishing-point calibration method is computationally simpler, but when the rotation angle is close to an integer multiple of 90°, an ill-conditioned situation occurs: one of the two vanishing points approaches infinity, making the computed camera parameters extremely sensitive to calibration-point errors. The single-vanishing-point method instead selects whichever of the two vanishing points lies closer to the image origin, effectively avoiding the ill-conditioned case, and is well suited to both side-mounted and front-mounted cameras. In a given deployment, this embodiment automatically selects the appropriate calibration method according to the camera configuration.
Fig. 4 shows the error between the calculated and actual pitch angles under different rotation angles when Gaussian noise with a variance of one pixel is added to the calibration points. Fig. 5 shows the pitch angle errors of the two calibration methods at different calibration-point error levels in an ill-conditioned environment (p = 89°). In particular, when the calibration-point error is one pixel, the pitch angle error is about 0.10°, and the resulting speed measurement error is about 2%. This simulation set the ratio of the camera focal length to the CCD pixel size to 800 pixels; in actual road measurement the focal length is generally larger (i.e., the image resolution is higher), so the speed measurement error caused by calibration-point quantization is smaller and stays within the error range allowed by the national standard for video speed measurement.
In this embodiment, camera calibration requires inputting at least two pairs of road-marking points, i.e., the positions of four points. The method can easily be extended to input several pairs of road markings, improving calibration accuracy through straight-line fitting and the least-squares method.
After calibration yields the camera parameters, scene coordinate reconstruction is performed on the original video images according to formulas (4) and (5); the reconstructed image is an X-Y plane reconstruction. In the reconstructed image, the image coordinates of all objects at height 0 coincide with their actual coordinates, so the vehicle's grounding point can be used to track, locate, range, and measure the speed of the vehicle in the reconstructed image. Algorithms for detecting and tracking moving objects (background modeling, foreground extraction, optical-flow tracking, or template-matching tracking of feature points) are mature, are not the focus of the invention, and are not described in detail here. Note that in this embodiment the vehicle may be located and tracked in the original video frames and the resulting coordinates then transformed to compute the driving speed; or the original frames may first undergo scene coordinate reconstruction, with the vehicle located and tracked in the reconstructed image and the driving speed computed directly.
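Once the grounding point has been tracked in scene coordinates, the speed computation itself reduces to path length over elapsed time. A minimal sketch (the function name, frame rate, and coordinates are illustrative assumptions):

```python
import math

def speed_kmh(scene_points, frame_interval_s):
    """Average speed from a vehicle's grounding-point positions in scene
    coordinates (meters), one per frame: total path length / elapsed time."""
    dist = 0.0
    for (x0, y0), (x1, y1) in zip(scene_points, scene_points[1:]):
        dist += math.hypot(x1 - x0, y1 - y0)
    elapsed = frame_interval_s * (len(scene_points) - 1)
    return dist / elapsed * 3.6   # m/s -> km/h

# Grounding point advancing 0.5 m along Y every 40 ms frame (25 fps).
track = [(0.0, 10.0 + 0.5 * k) for k in range(6)]
print(round(speed_kmh(track, 0.04), 1))   # 45.0
```

The same function applies to the flash variant: two fill-light frames 0.5 s apart are just a two-point track with `frame_interval_s=0.5`.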
In some situations, for example at night without fill light, the vehicle's grounding point is often invisible, and generally only a point on the vehicle's lights or license plate can serve as the reference point for speed measurement and tracking. The influence of the tracking point's height on speed measurement accuracy is illustrated by the scene model in fig. 6. The camera A is at a known height h_cam and observes a point B on the vehicle at height h_B; B_k and B_{k+1} are the positions of the observation point in two frames of images. C_k and C_{k+1} are the projections of B_k and B_{k+1} onto the ground along the camera's viewing ray, and the distance d_p between C_k and C_{k+1} is called the projection distance; D_k and D_{k+1} are the vertical projections of B_k and B_{k+1} onto the ground, and the distance d_a between them is called the actual distance. From the geometric relationship in the figure (similar triangles), it can be calculated that:
d_p = d_a · h_cam / (h_cam − h_B)
When the height of point B is assumed to be zero, the computed displacement of B is actually the displacement of C, i.e., the projection distance. The speed measurement error is therefore:
(d_p − d_a) / d_a = h_B / (h_cam − h_B)
The actual vehicle running speed is:
v_a = v_p · (h_cam − h_B) / h_cam
where v_p is the projection speed obtained by tracking the feature point in the scene coordinate reconstruction map.
According to the above formulas, when the speed measurement tracking point is located and measured on the reconstructed image:
(1) the higher the height of the speed measurement tracking point is, the larger the speed measurement error is;
(2) the higher the height of the camera is, the smaller the error caused by the height of the speed measurement tracking point is;
(3) under the condition that the height of the camera and the height of the speed measurement tracking point are known, the error can be measured and compensated.
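Observation (3), the compensation, can be sketched directly from the similar-triangle relationship of fig. 6. The function names and numbers below are illustrative assumptions:

```python
def compensate_speed(v_projection, h_cam, h_point):
    """Actual speed from the projection speed of a tracking point at height
    h_point seen by a camera at height h_cam (similar triangles):
    v_actual = v_projection * (h_cam - h_point) / h_cam."""
    return v_projection * (h_cam - h_point) / h_cam

def relative_error(h_cam, h_point):
    """Relative over-estimate when the point is wrongly assumed to lie on
    the ground: (d_p - d_a) / d_a = h_point / (h_cam - h_point)."""
    return h_point / (h_cam - h_point)

# A headlight at 0.8 m tracked by a camera mounted at 8 m.
print(compensate_speed(50.0, 8.0, 0.8))      # 45.0 km/h from a 50 km/h projection speed
print(round(relative_error(8.0, 0.8), 3))    # 0.111, about 11% if uncompensated
```

Both functions confirm observations (1) and (2): the error grows with the tracking-point height and shrinks as the camera is mounted higher.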
The method achieves high-precision speed detection at low cost: during measurement, the vehicle is detected automatically and its grounding point located; the grounding-point image coordinates are converted into the vehicle's position in the actual three-dimensional coordinate system; and the running speed is computed from the vehicle's position difference across video frames. The method therefore has very high market prospects and practical value.
The above embodiment is only a preferred embodiment of the present invention and does not limit its scope of protection; all modifications made according to the principles of the present invention on the basis of the above embodiment, without inventive effort, shall fall within the scope of the present invention.

Claims (4)

(1) let the pitch angle t be the vertical angle between the camera's optical axis and the X-Y plane of the three-dimensional world coordinate system, let θ be the camera's field of view, and let the rotation angle p be the horizontal angle measured counterclockwise from the Y axis of the three-dimensional coordinate system to the projection of the camera's optical axis on the X-Y plane; install the camera so that the pitch angle t ≥ θ/2 and the rotation angle p > 70°, input the image coordinates of four calibration points, and calibrate the camera using the double-vanishing-point or the single-vanishing-point calibration method to obtain the camera parameters;
CN201510469779.6A | 2015-08-04 | A kind of traffic speed-measuring method based on video camera front-end processing | Active | CN105163065B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201510469779.6A | 2015-08-04 | 2015-08-04 | A kind of traffic speed-measuring method based on video camera front-end processing

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201510469779.6A | 2015-08-04 | 2015-08-04 | A kind of traffic speed-measuring method based on video camera front-end processing

Publications (2)

Publication Number | Publication Date
CN105163065A | 2015-12-16
CN105163065B | 2019-04-16

Family

ID=54803807

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201510469779.6A (Active, CN105163065B) | A kind of traffic speed-measuring method based on video camera front-end processing | 2015-08-04 | 2015-08-04

Country Status (1)

Country | Link
CN | CN105163065B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106781536A (en)* | 2016-11-21 | 2017-05-31 | 四川大学 | A kind of vehicle speed measuring method based on video detection
AT519679A1 (en)* | 2017-02-27 | 2018-09-15 | Zactrack Gmbh | Method for calibrating a rotating and pivoting stage equipment
CN106991414A (en)* | 2017-05-17 | 2017-07-28 | 司法部司法鉴定科学技术研究所 | A kind of method that state of motion of vehicle is obtained based on video image
CN107492123B (en)* | 2017-07-07 | 2020-01-14 | 长安大学 | Road monitoring camera self-calibration method using road surface information
CN110310492B (en)* | 2019-06-25 | 2020-09-04 | 重庆紫光华山智安科技有限公司 | A method and device for measuring the speed of a mobile vehicle
CN110632339A (en)* | 2019-10-09 | 2019-12-31 | 天津天地伟业信息系统集成有限公司 | Water flow testing method of video flow velocity tester
CN111612849A (en)* | 2020-05-12 | 2020-09-01 | 深圳市哈工大交通电子技术有限公司 | Camera calibration method and system based on mobile vehicle
CN111899525A (en)* | 2020-08-18 | 2020-11-06 | 重庆紫光华山智安科技有限公司 | Distance measuring method, distance measuring device, electronic device, and storage medium
CN112489106A (en)* | 2020-12-08 | 2021-03-12 | 深圳市哈工交通电子有限公司 | Video-based vehicle size measuring method and device, terminal and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CA2706695C (en)* | 2006-12-04 | 2019-04-30 | Lynx System Developers, Inc. | Autonomous systems and methods for still and moving picture production
WO2008086293A2 (en)* | 2007-01-05 | 2008-07-17 | Nestor, Inc. | A system and method for measuring the speed of vehicles or other objects
US8456527B2 (en)* | 2007-07-27 | 2013-06-04 | Sportvision, Inc. | Detecting an object in an image using templates indexed to location or camera sensors
CN102254318B (en)* | 2011-04-08 | 2013-01-09 | 上海交通大学 | Method for measuring speed through vehicle road traffic videos based on image perspective projection transformation
CN102722886B (en)* | 2012-05-21 | 2015-12-09 | 浙江捷尚视觉科技股份有限公司 | A kind of video frequency speed-measuring method based on three-dimensional scaling and Feature Points Matching
US9641806B2 (en)* | 2013-03-12 | 2017-05-02 | 3M Innovative Properties Company | Average speed detection with flash illumination

Also Published As

Publication number | Publication date
CN105163065A | 2015-12-16

Similar Documents

Publication | Publication Date | Title
CN105163065B (en)A kind of traffic speed-measuring method based on video camera front-end processing
CN102254318B (en)Method for measuring speed through vehicle road traffic videos based on image perspective projection transformation
CN104021676B (en)Vehicle location based on vehicle dynamic video features and vehicle speed measurement method
CN102538763B (en)Method for measuring three-dimensional terrain in river model test
CN102622767B (en)Method for positioning binocular non-calibrated space
CN107884767A (en)A kind of method of binocular vision system measurement ship distance and height
CN102592454A (en)Intersection vehicle movement parameter measuring method based on detection of vehicle side face and road intersection line
CN106978774A (en)A kind of road surface pit automatic testing method
CN107705331A (en)A kind of automobile video frequency speed-measuring method based on multiple views video camera
CN113804916B (en) A frequency domain spatiotemporal image velocimetry method based on prior information of maximum flow velocity
CN111382591B (en)Binocular camera ranging correction method and vehicle-mounted equipment
CN111091076B (en) Measurement method of tunnel boundary data based on stereo vision
CN104063863B (en) Downward-looking binocular vision system and image processing method for river channel monitoring
CN111696162A (en)Binocular stereo vision fine terrain measurement system and method
CN104143192A (en)Calibration method and device of lane departure early warning system
CN103487033A (en)River surface photographic surveying method based on height-change homography
CN116958218B (en)Point cloud and image registration method and equipment based on calibration plate corner alignment
WO2022078440A1 (en)Device and method for acquiring and determining space occupancy comprising moving object
CN105865350A (en) 3D Object Point Cloud Imaging Method
CN116576884A (en)System, method and machine-readable storage medium for calibrating parameters of multiple sensors
Liu et al.Lightweight defect detection equipment for road tunnels
US20230177724A1 (en)Vehicle to infrastructure extrinsic calibration system and method
Sun et al.Pavement Potholes Quantification: A Study Based on 3D Point Cloud Analysis
CN111442817A (en)Non-contact structured light binocular vision sewage level measuring device and method
CN104180794A (en)Method for treating texture distortion area of digital orthoimage

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
GR01 | Patent grant
