Disclosure of Invention
The invention aims to provide a traffic speed measurement method based on camera front-end processing that solves the problems of the prior art, reducing the difficulty of vehicle speed measurement and improving its accuracy without increasing cost.
To achieve this purpose, the invention adopts the following technical scheme:
A traffic speed measurement method based on camera front-end processing comprises the following steps:
(1) Define the pitch angle t as the vertical angle between the camera's optical axis and the X-Y plane of the three-dimensional world coordinate system, θ as the camera's field of view, and the rotation angle p as the horizontal angle, measured counterclockwise, from the Y axis to the projection of the optical axis onto the X-Y plane. Install the camera so that the pitch angle t is at least θ/2 and the rotation angle p is greater than 70°, input the image coordinates of four calibration points, and calibrate the camera using either a double-vanishing-point or a single-vanishing-point calibration method to obtain the camera parameters.
(2) Perform scene-coordinate reconstruction on the original video images captured by the camera, detect and track the vehicle's grounding point in the reconstructed images, and then calculate the driving distance and speed; alternatively, detect and track the grounding point in the original video frames, convert its coordinates across multiple frames into actual scene coordinates, and then calculate the vehicle's driving distance and speed.
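As a minimal sketch of the second variant of step (2) — track the grounding point in the original frames, convert each image coordinate into scene coordinates, then compute distance and speed — consider the following Python fragment. Here `image_to_ground` stands in for the calibrated inverse mapping of step (1); the function name and the toy identity mapping are illustrative assumptions, not the patent's implementation.

```python
from math import hypot

def vehicle_speed(ground_points_px, image_to_ground, frame_interval_s):
    """Average speed from per-frame (x, y) image coordinates of the grounding point."""
    pts = [image_to_ground(x, y) for x, y in ground_points_px]   # to scene metres
    dist = sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))      # path length
    return dist / (frame_interval_s * (len(pts) - 1))            # m/s

# Toy run: identity mapping (image pixels already equal metres), 25 fps video.
speed = vehicle_speed([(0, 0), (0, 1), (0, 2)], lambda x, y: (x, y), 0.04)
```

With a real calibration, `image_to_ground` would apply the inverse ground-plane mapping obtained from the vanishing-point calibration.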
In addition, in scenes with insufficient light, supplementary light can be provided twice by a flash lamp; the vehicle's grounding point is then located in the two flash-lit frames and the driving distance and speed are calculated. Alternatively, the vehicle can be tracked and located through feature points on its lamps, with the driving speed computed from an estimated lamp height. A further option is to first estimate the vehicle's speed while it is still far away; when the estimate exceeds the speed limit of the current scene, the flash lamp is triggered to capture two flash-lit frames of the vehicle as it enters the optimal shooting area, the grounding point is located in these two frames, and the driving distance and speed are calculated.
Preferably, in step (2), the grounding-point coordinates from the multiple frames are converted into actual scene coordinates by the following formula:
Preferably, the interval between the two supplementary flashes is 0.5 second.
Preferably, the running speed of the vehicle is calculated by the following formula:
Preferably, the driving distance and speed of the vehicle are calculated by an OMAPL138 chip.
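The two preferred choices above (a 0.5 s flash interval, with distance and speed computed from the grounding point located in the two flash-lit frames) can be sketched as follows; the coordinates are illustrative values in metres, not taken from the patent.

```python
from math import hypot

FLASH_INTERVAL_S = 0.5  # preferred interval between the two supplementary flashes

def speed_from_flash_pair(p1, p2, dt=FLASH_INTERVAL_S):
    """p1, p2: scene coordinates (metres) of the grounding point in the two flash frames."""
    return hypot(p2[0] - p1[0], p2[1] - p1[1]) / dt   # m/s

v_ms = speed_from_flash_pair((3.0, 10.0), (3.0, 18.5))   # 8.5 m travelled in 0.5 s
v_kmh = v_ms * 3.6                                       # convert to km/h
```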
Compared with the prior art, the invention has the following beneficial effects. First, the camera is installed in a carefully chosen configuration, reducing shooting errors (positioning error, jitter error, license-plate deformation, and so on) at the source and laying a reliable foundation for accurate speed detection. Second, the camera is calibrated using the road's marking lines or a manually laid calibration template, so the camera's working environment is determined by a simple method. Finally, an OMAPL138 chip combines grounding-point detection and tracking with conversion to actual scene coordinates to obtain the vehicle's driving speed, achieving low-cost, high-precision speed detection with strong practical value and market prospects.
Examples
As shown in fig. 1, the traffic speed measurement method based on camera front-end processing disclosed by the invention mainly uses an OMAPL138 chip to run the algorithms. The chip, produced by TI, is a cost-effective dual-core high-speed processor that combines a C6748 floating-point DSP core and an ARM9 core and integrates image, voice, network, and storage capabilities. The C6748 core, clocked at up to 456 MHz, provides floating-point capability along with strong fixed-point performance; the ARM9 core offers high flexibility, letting developers run operating systems such as Linux and conveniently add human-machine interfaces, network functions, touch screens, and the like to their applications.
The OMAPL138 chip consumes 440 mW in total under typical operating conditions and 15 mW in standby mode, and its abundant internal memory and peripheral resources meet the design requirements of a high-precision video speed-measurement system while leaving room for future system expansion and upgrades.
Before actual speed measurement, the camera must be installed, and the installation largely determines the accuracy of speed detection. The mounting height must account not only for speed-measurement and positioning errors but also for construction difficulty and field of view. To reduce jitter error, the pitch angle should be as large as the required view allows; in this embodiment the pitch angle t is set to at least θ/2. The rotation angle determines whether the camera is side-mounted or front-mounted: for front mounting the rotation angle is close to 90°, and for side mounting it should also be kept as close to 90° as possible to widen the viewing angle and reduce license-plate deformation, so in this embodiment the rotation angle is greater than 70°. The roll angle s is set to approximately 0° at installation.
After the camera is installed, it is calibrated and the coordinate conversion is established. As shown in fig. 1, several parameters define the relationship between the image plane and actual three-dimensional world coordinates: pitch angle t, rotation angle p, roll angle s, focal length f, and camera height h. The pitch angle t is the vertical angle between the camera's optical axis and the X-Y plane of the world coordinate system; the rotation angle p is the horizontal angle, measured counterclockwise, from the Y axis to the projection of the optical axis onto the X-Y plane; the roll angle s is the rotation of the camera about its own optical axis; the focal length f is the distance along the optical axis from the image plane to the center of the camera lens; and the camera height h is the vertical height from the center of the camera lens to the X-Y plane.
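A hedged sketch of how these parameters might assemble into the extrinsic rotation R and the intrinsic matrix K follows. The rotation order used here (pan p about the vertical axis, then pitch t, then roll s about the optical axis) and the K layout are assumptions for illustration, not taken verbatim from the patent's equations.

```python
import numpy as np

def rotation(t, p, s):
    """World-to-camera rotation from pitch t, pan p, roll s (radians).
    Composition order Rs @ Rt @ Rp is an illustrative assumption."""
    Rp = np.array([[np.cos(p), -np.sin(p), 0.0],
                   [np.sin(p),  np.cos(p), 0.0],
                   [0.0,        0.0,       1.0]])   # pan about the vertical axis
    Rt = np.array([[1.0, 0.0,        0.0      ],
                   [0.0, np.cos(t), -np.sin(t)],
                   [0.0, np.sin(t),  np.cos(t)]])   # pitch (tilt)
    Rs = np.array([[np.cos(s), -np.sin(s), 0.0],
                   [np.sin(s),  np.cos(s), 0.0],
                   [0.0,        0.0,       1.0]])   # roll about the optical axis
    return Rs @ Rt @ Rp

def intrinsics(f, a=1.0, tau=0.0, u0=0.0, v0=0.0):
    """Upper-triangular K: focal length f (pixels), aspect ratio a,
    skew tau, principal point (u0, v0)."""
    return np.array([[f,   tau,   u0 ],
                     [0.0, a * f, v0 ],
                     [0.0, 0.0,   1.0]])
```

Any composition of the three elementary rotations is orthonormal, which the calibration formulas below rely on.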
Suppose Q = (X_Q, Y_Q, Z_Q) is a point in three-dimensional world coordinates and q = (x_q, y_q) is the corresponding point in two-dimensional image coordinates. A forward mapping function Φ from a point in three-dimensional world coordinates to image coordinates can be given in homogeneous form by the following equation:

λ [x_q, y_q, 1]^T = K [R | T] [X_Q, Y_Q, Z_Q, 1]^T    (1)

where λ is an arbitrary scale factor.
where the rotation matrix R and the translation matrix T characterize the extrinsic parameters of the camera, and the 3 × 3 upper-triangular matrix K characterizes its intrinsic parameters; they have the following form:
where X_CAM = h sin p cot t, Y_CAM = h cos p cot t, and Z_CAM = h. Here f is the focal length, a = f_u/f_v is the aspect ratio (typically 1), τ is the skew factor (normally set to 0), and (u_0, v_0) is the coordinate at which the optical axis intersects the image plane (usually set to (0, 0)). From the above equation one can deduce

x_q = [K(RQ + T)]_1 / [K(RQ + T)]_3    (2)

and

y_q = [K(RQ + T)]_2 / [K(RQ + T)]_3    (3)

where [·]_i denotes the i-th component. If the point Q lies in the X-Y plane, then Z_Q = 0 and the mapping reduces to the ground-plane homography H = K [r_1 r_2 | T], with r_1 and r_2 the first two columns of R, so X_Q and Y_Q can be calculated from (x_q, y_q):

X_Q = [H^(-1) (x_q, y_q, 1)^T]_1 / [H^(-1) (x_q, y_q, 1)^T]_3    (4)

Y_Q = [H^(-1) (x_q, y_q, 1)^T]_2 / [H^(-1) (x_q, y_q, 1)^T]_3    (5)
the calculation of the camera parameters through the calibration points is the core content of camera calibration, and for the calculation of the external parameters of the camera, the algorithm adopted by the embodiment comprises a double-vanishing-point calibration method and a single-vanishing-point calibration method.
As shown in FIG. 2, assume that in the calibration template segment AB is parallel to segment DC and segment AC is parallel to segment BD; the following five equations can then be listed:
since ABCD is all points on the road surface, i.e., Z is 0, substituting equations (4) and (5) into the above equations can solve the following camera parameters:
where α_PQ = x_q − x_p, β_PQ = y_q − y_p, and χ_PQ = x_p y_q − x_q y_p; (x_A, y_A), (x_B, y_B), (x_C, y_C), and (x_D, y_D) are the image coordinates of the four points A, B, C, and D, respectively.
The calibration template for single-vanishing-point calibration is shown in fig. 3; according to this template, the following set of equations is listed:
from this set of equations, the camera parameters are solved as follows:
where α_PQ = x_q − x_p, β_PQ = y_q − y_p, and χ_PQ = x_p y_q − x_q y_p.
Here

U_Q = x_q sin s + y_q cos s,  V_Q = x_q cos s − y_q sin s

and

f = F / tan t    (15)

where, if f < 0, let f = −f and s = s + π; and if h < 0, let h = −h and p = p + π.
The double-vanishing-point method is computationally simple, but when the rotation angle approaches an integer multiple of 90° it becomes ill-conditioned: one of the two vanishing points goes to infinity, making the computed camera parameters extremely sensitive to calibration-point errors. The single-vanishing-point method instead uses whichever of the two vanishing points lies closer to the image origin, effectively avoiding the ill-conditioning and suiting both side-mounted and front-mounted cameras. This embodiment automatically selects the appropriate calibration method according to the camera configuration of the specific deployment environment.
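The automatic selection described above can be sketched as a simple guard on the rotation angle; the 5° guard band below is an assumed threshold for illustration, not a value specified in the patent.

```python
import math

def choose_calibration(p_deg, guard_deg=5.0):
    """Pick single-vanishing-point calibration when the rotation angle p is
    near a multiple of 90 deg, where the double-vanishing-point method
    becomes ill-conditioned. guard_deg is an assumed threshold."""
    d = math.fmod(abs(p_deg), 90.0)
    near_multiple = min(d, 90.0 - d) < guard_deg
    return "single_vanishing_point" if near_multiple else "double_vanishing_point"
```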
Fig. 4 shows the error between the calculated and actual pitch angle for different rotation angles when Gaussian noise with one-pixel variance is added to the calibration points. Fig. 5 shows the pitch-angle error of the two calibration methods at different calibration-point error levels in an ill-conditioned environment (p = 89°). In particular, when the calibration-point error is one pixel, the pitch-angle error is about 0.10°, causing a speed-measurement error of about 2%. The ratio of the camera's focal length to the CCD's single-pixel size was set to 800 pixels here; in actual road measurements the focal length is generally larger, i.e. the effective image resolution is higher, so the speed-measurement error caused by calibration-point quantization is smaller and stays within the error range of the national video speed-measurement standard.
In this embodiment, camera calibration requires inputting at least two pairs of road-marking calibration points, i.e. the positions of four points. The method is easily extended to input several pairs of road markings, improving calibration precision through straight-line fitting and the least-squares method.
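The least-squares extension can be sketched as fitting each road marking to a straight line before forming calibration points. A total-least-squares (SVD) fit is used here as one reasonable choice, since it also handles vertical lines; the helper name is illustrative.

```python
import numpy as np

def fit_line(points):
    """Fit a line a*x + b*y = c (with a^2 + b^2 = 1) to Nx2 points
    by total least squares: the line normal (a, b) is the singular
    vector of the centred points with the smallest singular value."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]
    return a, b, a * centroid[0] + b * centroid[1]

# Noisy points scattered around the horizontal line y = 0.
a, b, c = fit_line([(0, 0.1), (1, -0.1), (2, 0.05), (3, -0.05)])
```

Intersecting fitted marking lines then yields calibration points that are less sensitive to single-pixel quantization error than hand-picked points.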
After calibration yields the camera parameters, scene-coordinate reconstruction is performed on the original video images according to equations (4) and (5); the reconstructed image is an X-Y plane reconstruction. In the reconstructed image, the image coordinates of all objects at height 0 coincide with their actual coordinates, so the vehicle's grounding point can be used to track, locate, range, and measure the speed of the vehicle. Algorithms for detecting and tracking moving objects (background modeling, foreground extraction, optical-flow tracking, template matching on feature points, and so on) are mature, are not the focus of the invention, and are not described in detail here. Note that in this embodiment the vehicle may either be located and tracked in the original video frames, with the location coordinates then transformed to compute the driving speed, or the original frames may first undergo scene-coordinate reconstruction, with the vehicle located and tracked in the reconstructed image and the driving speed computed directly.
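Scene-coordinate reconstruction itself can be sketched as resampling the frame onto a regular grid of the road plane. Here `H_ground2img`, mapping road-plane metres to image pixels, is assumed to come from calibration, and nearest-neighbour sampling is used for brevity; a production system would use bilinear interpolation or a library warp routine.

```python
import numpy as np

def reconstruct_scene(frame, H_ground2img, out_h, out_w, metres_per_px):
    """Resample `frame` onto an out_h x out_w grid of the road plane.
    H_ground2img maps homogeneous road-plane coordinates (X, Y, 1) in
    metres to homogeneous image pixels (assumed from calibration)."""
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    ground = np.stack([xs * metres_per_px,            # X in metres
                       ys * metres_per_px,            # Y in metres
                       np.ones_like(xs, float)])
    img = H_ground2img @ ground.reshape(3, -1)        # project grid into the image
    u = np.round(img[0] / img[2]).astype(int)
    v = np.round(img[1] / img[2]).astype(int)
    out = np.zeros((out_h, out_w), dtype=frame.dtype)
    ok = (u >= 0) & (u < frame.shape[1]) & (v >= 0) & (v < frame.shape[0])
    out.reshape(-1)[ok] = frame[v[ok], u[ok]]         # nearest-neighbour lookup
    return out
```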
In some conditions, for example at night without supplementary light, the vehicle's grounding point is often invisible, and generally only a point at the lamp position or the license-plate position can serve as the reference point for speed measurement and tracking. The influence of the tracking point's height on speed-measurement precision is illustrated by the scene model in fig. 6. The camera A has a known height h_cam and observes a point B on the vehicle at height h_B; B_k and B_(k+1) are the positions of the observation point in two frames. C_k and C_(k+1) are the projections of B_k and B_(k+1) onto the ground along the camera's lines of sight, and the distance d_p between them is called the projection distance; D_k and D_(k+1) are the vertical projections of B_k and B_(k+1) onto the ground, and the distance d_a between them is called the actual distance. From the geometric relationships in the figure (similar triangles with apex at the camera) one can calculate:

d_p = d_a · h_cam / (h_cam − h_B)
when we assume that the height of the B point is zero, the calculated B point displacement is actually the C point displacement, i.e. the projection distance. So the velocity measurement error at this time is:
the actual vehicle running speed is:
here, vpThe resulting projection velocities are calculated for tracking feature points in the scene coordinate reconstruction map.
From these formulas, when the speed-measurement tracking point is located and tracked on the reconstructed image:
(1) the higher the speed-measurement tracking point, the larger the speed-measurement error;
(2) the higher the camera, the smaller the error caused by the tracking point's height;
(3) when both the camera height and the tracking-point height are known, the error can be measured and compensated.
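Point (3) can be sketched as a one-line correction: by similar triangles the true displacement is the projected displacement scaled by (h_cam − h_B)/h_cam, and the same factor corrects the speed. The numbers below are illustrative.

```python
def compensate_speed(v_projected, h_cam, h_point):
    """Correct a speed measured from a tracking point at height h_point (metres)
    seen by a camera at height h_cam: v_actual = v_projected * (h_cam - h_point) / h_cam."""
    return v_projected * (h_cam - h_point) / h_cam

# e.g. a headlight at 0.8 m tracked by a camera at 8 m:
# a projected 90 km/h corresponds to an actual 81 km/h.
v_actual = compensate_speed(90.0, 8.0, 0.8)
```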
The method achieves high-precision speed detection at low cost: during speed measurement it automatically detects the vehicle and locates its grounding point, converts the grounding point's image coordinates into the vehicle's position in the actual three-dimensional coordinate system, and calculates the driving speed from the position difference between video frames, giving it very strong market prospects and practical value.
The above embodiments are only preferred embodiments of the present invention and do not limit its scope of protection; all modifications made according to the principles of the present invention on the basis of the above embodiments, without inventive effort, shall fall within the scope of protection of the present invention.