Technical Field
The invention belongs to the field of object detection and ranging, and in particular relates to a vehicle ranging method based on monocular vision.
Background
With the development of road traffic, and of expressway systems in particular, the traffic accident rate has been rising, and traffic safety has drawn growing public attention. Research on driver-assistance safety technology therefore provides vehicles with safety-assistance functions and offers intelligent technical means of reducing accidents caused by driver error. Because it supplies a large amount of information with good stability, computer vision has gradually become a research focus of driver-assistance safety technology; as the field matures, its role in intelligent vehicle systems keeps expanding, and applying computer vision to vehicle detection has contributed significantly to improving vehicle safety.
The detection of moving targets (such as vehicles) with a monocular camera has roughly passed through three stages: a passive learning stage, an active learning stage, and an adaptive learning stage. In the passive learning stage, all objects present are fitted according to image characteristics, and the target to be detected is distinguished by comparing differences between consecutive frames; the main algorithms include the Gaussian mixture model, background subtraction, and the Kalman filter. In the active learning stage, the inherent features of a specific moving target are studied, and the target is detected by learning those features. For moving vehicles, the commonly used inherent features include the shadow under the vehicle, the entropy of that shadow, the symmetry of the vehicle edges, the brightness of vehicle pixels, and vehicle texture. In adaptive learning, moving-target detection is divided into roughly three steps. The first step is feature extraction, where the features are mainly mathematical; commonly used algorithms include HOG, Haar, SIFT, and LBP, and this step is the basis of the following two. The second step is classifier training: a large number of positive and negative samples are fed in, and training yields a classifier that recognizes the target's mathematical features; commonly used classifier algorithms include SVM and Adaboost. The third step is moving-target detection itself, which applies the trained classifier to the input video images.
Monocular visual ranging derives depth information from images captured by a single camera. By measurement principle, it divides mainly into methods based on geometric relations and methods based on data information. Geometric methods measure depth using the camera's structure and the images it captures: building on computer-vision theory, rapid detection of the vehicle ahead in the lane, and prior camera calibration, the distance to the leading vehicle is obtained from the camera parameters and a geometric road model. The drawback of such measurement is that feature points must be matched across one or several images; matching errors visibly affect the result, processing is slow, and multiple images inevitably demand even more computation time.
Data-information-based methods obtain the depth distance from target images captured by the camera, given known information about the object. Their drawback is that measurement depends on accurate image information, so inaccurate image information easily leads to inaccurate measurement.
Summary of the Invention
Aiming at the shortcomings of existing methods, the present invention proposes a vehicle ranging method based on monocular vision.
The technical scheme of the present invention is realized as follows:
A monocular-vision-based vehicle ranging method, in which the ranging target is a leading vehicle traveling in the same direction, comprising the following steps:
Step 1: mount a monocular camera on the vehicle, measure the camera's height and pitch angle, and determine its focal-length parameter.
First fix the monocular camera at the front of the vehicle and determine its height above the ground and the angle between its optical axis and the horizontal, i.e. the camera's vertical height and pitch angle.
Step 2: use the monocular camera to capture video images in an expressway environment.
Step 3: preprocess the video images before target-vehicle detection.
Step 3.1: apply Gaussian filtering to the video images for preliminary denoising.
Step 3.2: perform region-of-interest segmentation on the video images preprocessed in step 3.1, prior to target-vehicle detection.
Step 4: target-vehicle detection.
Detect vehicles within the segmented image region and mark each detected target vehicle on the frame in real time.
Step 5: target-vehicle ranging.
Measure the distance to the target vehicle and display it on the video frame in real time.
Step 3.2 comprises the following specific steps:
Step 3.2.1: segment the sky region of the captured video images.
The sky region of the captured video images is segmented on the basis of color space, as follows. First, histograms of the video image are obtained in four color spaces: HSI (hue, saturation, intensity), RGB (red, green, blue), YIQ (luminance, hue, saturation) and YCbCr (luma plus blue and red chroma offsets). The distribution range of each color component of the sky region is then determined in each of the four spaces, and the variance and extrema of the four sets of range data are computed and compared; the YCbCr space, whose variance and extrema are smallest (i.e. whose component distributions are the most concentrated), is selected as the color space for sky segmentation. Finally the video image is binarized, the connected sky region is identified and its area computed, and the Otsu adaptive threshold method automatically adjusts the segmentation threshold so that the sky can be removed from the image.
Step 3.2.2: on the sky-segmented video images, detect the lane lines (the road edge lines closest to the image borders) using minimum-error threshold segmentation, fit a two-dimensional line equation to each detected lane line, and use these equations to remove the regions outside the edge lines, further shrinking the vehicle-detection area.
Step 4 comprises the following specific steps:
Step 4.1: collect positive and negative sample images (a positive sample is a picture of a vehicle rear; a negative sample is any other picture that contains no vehicle rear) and normalize all sample pictures to the same size.
Step 4.2: add vehicle-rear and rear-wheel features to the Haar feature set, and with these Haar features train the positive and negative sample sets using the Adaboost algorithm to obtain a cascade classifier.
Step 4.3: use the obtained cascade classifier to detect target vehicles in the video images captured by the monocular camera, and mark each detected target vehicle on the frame in real time.
During the target-vehicle detection of step 4.3, the video images captured by the monocular camera are scanned with a multiscale window approach.
The distance to the target vehicle in step 5 is measured as follows:
If the target vehicle is within 30 m, a camera projection model is built from the pinhole imaging principle: the world coordinate system is projected into the image coordinate system, a geometric vehicle-ranging model is established from the correspondence between the two coordinate systems, and the distance to the leading vehicle is computed from it. If the target vehicle is farther than 30 m, the mapping between actual road sample points and the image plane is first obtained by data fitting, and the distance to the leading vehicle is then computed from that mapping.
The advantages of the present invention are as follows. The monocular-vision vehicle ranging method is intended for expressway environments. The method first acquires the necessary parameters of the monocular camera mounted on the vehicle, applies Gaussian filtering to the captured video as preliminary processing, and then preprocesses the images. Color-space-based sky segmentation with an adaptively adjusted threshold finds a reasonable segmentation threshold, identifies the sky region, and reduces the image area to be scanned; lane-region segmentation then shrinks the scanned area further. Target-vehicle detection uses Haar features augmented with wheel and vehicle-rear features, which effectively improves recognition accuracy, and scanning the captured video with the multiscale window approach significantly raises detection speed. For ranging, a pinhole-imaging method is used at short range (within 30 m) and a data-fitting (interpolation) method at long range (beyond 30 m), lowering the error rate and achieving real-time ranging. Moreover, the method needs only a single camera to capture video, so the equipment is simple. The method therefore offers fast detection, high accuracy, strong real-time performance, and low cost.
Brief Description of the Drawings
Fig. 1 is a flow chart of the monocular-vision vehicle ranging method of one embodiment of the present invention;
Fig. 2 shows the Haar feature set of one embodiment of the present invention;
Fig. 3 is the Adaboost training flow chart of one embodiment of the present invention;
Fig. 4 is a schematic diagram of the multiscale window scanning process of one embodiment of the present invention;
Fig. 5 is a schematic diagram of the pinhole imaging principle of one embodiment of the present invention;
Fig. 6 is a schematic diagram of pinhole imaging when the point to be measured is off-center, in one embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is the flow chart of the monocular-vision vehicle ranging method of this embodiment, which comprises the following specific steps:
Step 1: mount a monocular camera on the vehicle, measure the camera height, and obtain the camera's focal-length parameter.
First fix the monocular camera at the front of the vehicle and determine its height above the ground and the angle between its optical axis and the horizontal, i.e. the camera's vertical height and pitch angle.
Step 2: use the monocular camera to capture video images in an expressway environment.
Step 3: preprocess the video images before target-vehicle detection.
Step 3.1: apply Gaussian filtering to the video images for preliminary denoising.
Step 3.2: perform region-of-interest segmentation on the video images preprocessed in step 3.1, prior to target-vehicle detection, to reduce the area to be processed and speed up detection.
Step 3.2.1: segment the sky region of the captured video images.
Sky-region segmentation based on color space is carried out as follows. First, histograms of the video image are obtained in four color spaces: HSI (hue, saturation, intensity), RGB (red, green, blue), YIQ (luminance, hue, saturation) and YCbCr (luma plus blue and red chroma offsets). The distribution range of each color component of the sky region is then determined in each space, and the variance and extrema of the four sets of range data are computed and compared; the YCbCr space, whose variance and extrema are smallest (i.e. whose component distributions are the most concentrated), is selected as the color space for sky segmentation. Experiments show that in the YCbCr space the Y and Cr components are relatively concentrated, so the sky's distribution region can be identified well with little effect on the rest of the image; YCbCr is therefore adopted as the color space for sky removal. Finally the video image is binarized, the connected sky region is identified and its area computed, and the Otsu adaptive threshold method adjusts the segmentation threshold automatically so that the sky can be removed from the image. This embodiment adopts the Otsu adaptive threshold method to adjust the threshold automatically and appropriately, in order to eliminate the adverse effect of lighting and other unfavorable factors on the color distribution during sky segmentation and thereby improve its accuracy.
Formula (1) is the processing function applied to the sky region in the experiments, in which F(x, y) is the gray value of the image. When a sky region satisfies the color-component ranges of formula (1), its gray value is set to 255 and the region is removed.
To exclude distractors in the picture whose color matches the sky (such as car paint or buildings), a connected region is removed as sky only when its area exceeds one tenth of the image; smaller fragments are ignored, which rules out interference factors to a large extent. Once the sky region has been detected, the image is cropped accordingly for the subsequent vehicle-detection processing.
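The one-tenth area rule above amounts to a connected-component filter on the binarized sky mask. The sketch below is only illustrative (pure Python, 4-connectivity, and a nested-list 0/1 mask are assumptions; the patent does not specify its implementation):

```python
from collections import deque

def large_sky_components(mask, min_fraction=0.1):
    """Keep only connected regions whose area exceeds min_fraction of the
    image, mirroring the 1/10 rule in the text.  `mask` is a list of lists
    of 0/1 values (1 = sky-colored pixel)."""
    rows, cols = len(mask), len(mask[0])
    min_area = min_fraction * rows * cols
    seen = [[False] * cols for _ in range(rows)]
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # BFS over the 4-connected component starting at (r, c)
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                # only sufficiently large components are treated as sky
                if len(comp) > min_area:
                    for y, x in comp:
                        out[y][x] = 1
    return out
```

Regions surviving in `out` are the candidate sky areas to be cropped away; isolated sky-colored fragments (car paint, buildings) fall below the area threshold and are left untouched.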
Because the pixel range of the sky is easily disturbed by lighting and other unfavorable factors, the image histogram shows that the color range of the sky follows a Gaussian normal distribution. If a suitable threshold can be found to divide the image into sky and non-sky parts, the influence of lighting and similar problems can be handled well.
Following the Otsu adaptive threshold method, a threshold ψ is used as the binarization cut-off: pixels above ψ form one class and pixels below ψ the other. Let the means of the two classes be v1 and v2, and their distribution probabilities u1 and u2.
α² = u1·u2·(v2 − v1)²    (2)
The threshold ψ is adjusted until the variance α² attains its maximum; the resulting ψ serves as the threshold for sky-region segmentation.
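The search for ψ described above can be sketched directly from equation (2). The pure-Python routine below is a minimal illustration over an 8-bit gray-level histogram, not the patent's code; in practice the equivalent is available via OpenCV's `cv2.threshold(..., cv2.THRESH_OTSU)`.

```python
def otsu_threshold(pixels):
    """Exhaustively search the threshold psi that maximizes the
    between-class variance  alpha^2 = u1*u2*(v2 - v1)^2  of eq. (2)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = float(len(pixels))
    best_psi, best_var = 0, -1.0
    for psi in range(1, 256):
        n1 = sum(hist[:psi])          # pixels below psi
        n2 = total - n1               # pixels at or above psi
        if n1 == 0 or n2 == 0:
            continue
        u1, u2 = n1 / total, n2 / total                      # class probabilities
        v1 = sum(g * hist[g] for g in range(psi)) / n1       # class means
        v2 = sum(g * hist[g] for g in range(psi, 256)) / n2
        var = u1 * u2 * (v2 - v1) ** 2                       # eq. (2)
        if var > best_var:
            best_var, best_psi = var, psi
    return best_psi
```

On a bimodal sky/non-sky histogram the maximizing ψ lands between the two modes, which is exactly the behavior the sky-segmentation step relies on.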
This embodiment's adaptive segmentation algorithm works well even for skies under unfavorable lighting.
Step 3.2.2: on the sky-segmented video images, detect the lane lines (the road edge lines closest to the image borders) using minimum-error threshold segmentation, fit a two-dimensional line equation to each detected lane line, and use these equations to remove the regions outside the edge lines, further shrinking the vehicle-detection area. Together, the sky-region and lane-region segmentations remove most irrelevant areas and shrink the image region processed during vehicle detection, thereby speeding it up.
This embodiment detects lane lines by minimum-error threshold segmentation and uses them to separate road from non-road regions, removing the irrelevant areas in each frame and keeping only the road. The minimum-error threshold segmentation algorithm resists noise well and detects lane lines reliably. Compared with the two-dimensional minimum-error algorithm, the one-dimensional version needs far less computation, greatly shortens processing time, and still satisfies the image-preprocessing requirements of target-vehicle detection.
The specific process is as follows:
For an image of size M×N, let the gray value of the pixel at coordinates (x, y) be h(x, y) ∈ G = [0, 1, …, 255]. Let the function f(g) denote the one-dimensional histogram of the image, i.e. the frequency of each gray value. For a gray threshold m ∈ G = [0, 1, …, 255], the minimum-classification-error criterion is
Z(m) = 1 + 2[P0(m)lnσ0(m) + P1(m)lnσ1(m)] − 2[P0(m)lnP0(m) + P1(m)lnP1(m)]    (3)
where P0(m) and P1(m) are the probabilities of the two classes separated by m, and σ0(m) and σ1(m) are their standard deviations; the binarized pixel is the pixel after minimum-error threshold segmentation, and h(x, y) is the pixel before it.
Experiments show that minimum-error threshold segmentation detects the lane lines very well. Establishing the lane-line equations in the two-dimensional plane separates lane from non-lane and removes a large number of irrelevant regions, reducing the workload of the vehicle-detection process.
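Assuming the standard Kittler-Illingworth definitions of the class probabilities P0, P1 and standard deviations σ0, σ1 (the defining equations were lost from this copy of the text), the criterion Z(m) of equation (3) can be minimized by exhaustive search. The sketch below is illustrative only:

```python
import math

def min_error_threshold(hist):
    """Kittler-Illingworth minimum-error thresholding: return the gray
    level m minimizing Z(m) of eq. (3).  `hist` is a 256-bin histogram."""
    total = float(sum(hist))
    best_m, best_z = None, float("inf")
    for m in range(256):
        n0 = sum(hist[: m + 1])       # class below/at m
        n1 = total - n0               # class above m
        if n0 == 0 or n1 == 0:
            continue
        P0, P1 = n0 / total, n1 / total
        mu0 = sum(g * hist[g] for g in range(m + 1)) / n0
        mu1 = sum(g * hist[g] for g in range(m + 1, 256)) / n1
        s0 = sum(hist[g] * (g - mu0) ** 2 for g in range(m + 1)) / n0
        s1 = sum(hist[g] * (g - mu1) ** 2 for g in range(m + 1, 256)) / n1
        if s0 <= 0 or s1 <= 0:        # degenerate single-level class
            continue
        # Z(m) = 1 + 2[P0 ln s0 + P1 ln s1] - 2[P0 ln P0 + P1 ln P1],
        # with s0, s1 the class standard deviations
        z = (1 + 2 * (P0 * math.log(math.sqrt(s0)) + P1 * math.log(math.sqrt(s1)))
               - 2 * (P0 * math.log(P0) + P1 * math.log(P1)))
        if z < best_z:
            best_z, best_m = z, m
    return best_m
```

Run on the one-dimensional gray histogram of a road frame, the minimizing m separates road pixels from lane-marking pixels, which is the use made of it in step 3.2.2.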
Step 4: target-vehicle detection.
Detect vehicles within the segmented image region and mark each detected target vehicle on the frame in real time.
Step 4.1: collect positive and negative sample images; a positive sample is a picture of a vehicle rear, and a negative sample is any other picture that contains no vehicle rear. All sample pictures are normalized; in this embodiment every picture is normalized to 20×20 pixels.
Step 4.2: add vehicle-rear and wheel features to the Haar feature set, and with the modified Haar features train the positive and negative sample sets using the Adaboost algorithm to obtain a cascade classifier.
By the definition of the feature template, a Haar feature value is the sum of the pixels under the white rectangles minus the sum under the black rectangles. Because the integral image is fast to compute and each rectangle sum depends only on its corner values, integral images are used here to compute the Haar feature values.
On the basis of the original Haar features, this embodiment adds wheel and vehicle-rear features, as shown in Fig. 2, which improves the accuracy of target-vehicle detection.
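A minimal sketch of the integral-image computation just described, applied to a simple two-rectangle vertical Haar template (white top half minus black bottom half); the function names, the nested-list image format, and the particular template are illustrative assumptions:

```python
def integral_image(img):
    """ii[r][c] = sum of img over rows < r and cols < c (zero-padded),
    so any rectangle sum needs only four table look-ups."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        row_sum = 0
        for c in range(cols):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def rect_sum(ii, r, c, h, w):
    """Pixel sum of the h x w rectangle with top-left corner (r, c)."""
    return ii[r + h][c + w] - ii[r][c + w] - ii[r + h][c] + ii[r][c]

def haar_two_rect_vertical(ii, r, c, h, w):
    """Two-rectangle Haar value: white (top half) minus black (bottom half)."""
    half = h // 2
    return rect_sum(ii, r, c, half, w) - rect_sum(ii, r + half, c, half, w)
```

Once `ii` has been built in a single pass, every rectangle sum (and hence every Haar feature, including the added wheel and vehicle-rear templates) costs a constant four look-ups regardless of its size.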
Based on the Haar features, this embodiment further trains the positive and negative samples with the Adaboost algorithm to obtain a cascade classifier. The training flow is shown in Fig. 3. The cascade classifier consists of multiple stages of weak classifiers, each stage more complex than the last. Each stage passes almost all positive examples while filtering out most negatives, so each successive stage examines fewer candidates; a large number of non-targets are rejected early, which greatly increases detection speed.
Step 4.3: use the obtained cascade classifier to detect the video images captured by the monocular camera, scanning with the multiscale window approach during detection.
To cut the time spent evaluating rectangle features, this embodiment scans with the multiscale window method: sub-windows of different sizes detect different partitions of the image, and the Haar features of the different regions are computed simultaneously. The scanning scheme is shown in Fig. 4. Exploiting the fact that a vehicle's rear appears at different sizes at different distances, the image is divided into four regions, each scanned with a sub-window of a different size, and the Haar features of the four sub-windows are computed at the same time.
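The banded scan can be sketched as follows; the band boundaries, window sizes, and step values below are invented for illustration, since the patent gives no concrete numbers:

```python
def multiscale_windows(img_h, img_w, bands):
    """Enumerate (row, col, size) scan windows.  The image is split into
    horizontal bands, each scanned with a band-specific square window
    (small windows near the top of the frame, where distant vehicles
    appear small; larger windows lower down)."""
    wins = []
    for (top, bottom, size, step) in bands:
        r = top
        while r + size <= bottom:
            c = 0
            while c + size <= img_w:
                wins.append((r, c, size))
                c += step
            r += step
    return wins
```

Each emitted window would then be classified by the trained cascade; restricting every band to a single scale avoids rescanning the whole frame at every window size.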
Step 5: measure the distance to the target vehicle and display it on the video frame in real time.
If the target vehicle is within 30 m, a camera projection model is built from the pinhole imaging principle: the world coordinate system is projected into the image coordinate system, a geometric vehicle-ranging model is established from the correspondence between the two systems, and the distance to the leading vehicle is computed from it. According to the pinhole principle, objects in the world coordinate system are projected through the pinhole into the image coordinate system, as shown in Fig. 5: the plane bchq is the real scene corresponding to the world coordinate system, and the plane BCHQ is the projected image corresponding to the image coordinate system. In the plane bchq, the extensions of edges hq and bc meet at s, which is also the intersection point of the extended road edges; the road surface actually observed is the trapezoid bchq. The point M represents the pinhole formed by the convex lens.
If the effect of camera distortion is ignored, the optical axis Ii meets each of the two corresponding planes at its center, i.e. the points I and i shown in Fig. 5.
As shown in Fig. 5, take a point t in the world coordinate system; projecting through the point M yields the point T in the image coordinate system. In the triangle ΔMIT the angle ∠TMI equals α2, and the segment MI is the focal length f of the camera lens, so tan α2 = TI / f. In Fig. 5, MN is the horizontal line and ∠α1 is the pitch angle between the optical axis and the horizontal; the camera is mounted at height δ = MW, and the distance from the observed point t to the point W below the optical center is S, so that tan(α1 + α2) = δ / S.
The distance formula then follows from this geometry as S = δ / tan(α1 + α2) = δ / tan(α1 + arctan(TI / f)).
As shown in Fig. 6, when the point to be measured is off-center, a lateral offset distance d2 from the center line must also be computed; the measured actual distance then combines the longitudinal distance d1 with d2.
To obtain d2, select b1 as a test point; its projection in the image coordinate system is B1. Another test point, parallel to the edge and lying on the center line, is a1, whose projection into the image coordinate system is A1. From the projection principle and the presence of the vertical plane, both ∠MA1B1 and ∠Ma1b1 are 90°; since the triangles also share a pair of vertical angles, ΔMA1B1 and ΔMa1b1 are similar. The proportionality of corresponding sides of similar triangles then gives a1b1 / A1B1 = Ma1 / MA1.
For A1B1: in the image coordinate system, let dx and dy be the physical distances between adjacent pixels along the horizontal and vertical axes; the actual length of A1B1 is then the pixel difference of A1B1 multiplied by dx.
Similarly, for MA1, the relation between pixels and actual distance yields its expression in terms of the pixel difference of A1I.
For Ma1: since d1 (i.e. Wa1 in the figure) has already been obtained from Fig. 5, the geometric relations of the figure give Ma1 = √(δ² + d1²).
The segment a1b1 is the required quantity d2. In summary, the distance is computed as d = √(d1² + d2²).    (9)
In formula (9), the camera focal length f and TI can be looked up from the camera's specification parameters, the mounting height δ can be measured directly, and α1 can be obtained from formula (5).
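Under the geometry stated above (tan α2 = TI / f, mounting height δ, pitch angle α1), the short-range computation can be sketched as below. Expressing TI as a pixel count times dy, the use of radians, and combining d1 and d2 as a Euclidean norm in the spirit of equation (9) are assumptions made for illustration:

```python
import math

def pinhole_distance(delta, alpha1, v_pixels, dy, f):
    """Longitudinal ground distance d1 from the pinhole geometry:
    tan(alpha2) = TI / f with TI = v_pixels * dy (physical offset on the
    image plane), camera height delta, pitch alpha1 in radians, giving
    d1 = delta / tan(alpha1 + alpha2)."""
    alpha2 = math.atan((v_pixels * dy) / f)
    return delta / math.tan(alpha1 + alpha2)

def off_center_distance(d1, d2):
    """Combine longitudinal distance d1 and lateral offset d2 for an
    off-center target, in the spirit of eq. (9)."""
    return math.hypot(d1, d2)
```

For example, a camera 1.5 m above the road with a 10° pitch and a target whose image offset corresponds to α2 = 5° yields d1 = 1.5 / tan 15° ≈ 5.6 m.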
If the target vehicle is farther than 30 m, the mapping between actual road sample points and the image plane is first obtained by data fitting, and the distance to the leading vehicle is computed from that mapping. For a camera of fixed specification mounted perpendicular to the ground, once its pitch angle is fixed, the range of road ahead that it can observe is fixed, and so are the world-coordinate positions corresponding to each point of the image domain. Hence, once the positional correspondence between the image coordinate system and the world coordinate system is found, the distance to the vehicle ahead can be obtained from it during actual ranging. This approach effectively overcomes the adverse effects of optical-path errors such as lens distortion in real imaging, as well as issues of camera placement, road-surface condition, and road type (straight or structured roads).
两坐标系的关系拟合需要通过实验获取数据来实现。实验中获得的数据构成的是离散的点,本实施方式是采用三次样条插值的方法构造两坐标系的关系。Fitting the relationship between the two coordinate systems requires data obtained through experiments. The experimental data form a set of discrete points; in this embodiment, cubic spline interpolation is used to construct the relationship between the two coordinate systems.
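The calibrate-then-interpolate procedure of the two paragraphs above can be sketched with a natural cubic spline built over road sample points. The `rows`/`dists` calibration values below are hypothetical, purely illustrative stand-ins for the experimentally measured points the text describes:

```python
def cubic_spline(xs, ys):
    """Natural cubic spline through calibration points.

    xs -- sample coordinates (e.g. image rows), strictly increasing,
          at least 3 points; ys -- measured values (e.g. distances, m).
    Returns a function mapping a coordinate to an interpolated value.
    """
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal solve for the second-derivative coefficients c,
    # with natural boundary conditions (zero curvature at both ends).
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = 3.0 * ((ys[i + 1] - ys[i]) / h[i]
                          - (ys[i] - ys[i - 1]) / h[i - 1])
    l = [1.0] * (n + 1)
    mu = [0.0] * (n + 1)
    z = [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2.0 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    c = [0.0] * (n + 1)
    b = [0.0] * n
    d = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = z[i] - mu[i] * c[i + 1]
        b[i] = (ys[i + 1] - ys[i]) / h[i] - h[i] * (c[i + 1] + 2.0 * c[i]) / 3.0
        d[i] = (c[i + 1] - c[i]) / (3.0 * h[i])

    def query(x):
        # Locate the interval containing x, then evaluate its cubic.
        i = n - 1
        for j in range(n):
            if x <= xs[j + 1]:
                i = j
                break
        dx = x - xs[i]
        return ys[i] + b[i] * dx + c[i] * dx * dx + d[i] * dx * dx * dx

    return query

# Hypothetical calibration: image row of the vehicle's road contact
# point vs. measured distance (illustrative values, not patent data).
rows = [250.0, 280.0, 310.0, 340.0, 370.0]
dists = [80.0, 60.0, 45.0, 36.0, 30.0]
row_to_dist = cubic_spline(rows, dists)
```

Once `row_to_dist` is built offline from the experimental samples, each ranging query at runtime is just one spline evaluation, which is what makes the fitted-mapping approach insensitive to lens distortion: the distortion is baked into the measured samples themselves.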
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510229988.3A | 2015-05-07 | 2015-05-07 | Vehicle ranging method based on monocular vision |
| Publication Number | Publication Date |
|---|---|
| CN104899554A (en) | 2015-09-09 |