CN109635733B - Parking lot and vehicle target detection method based on visual saliency and queue correction - Google Patents

Parking lot and vehicle target detection method based on visual saliency and queue correction

Info

Publication number
CN109635733B
CN109635733B
Authority
CN
China
Prior art keywords
parking lot
bbsm
map
vehicle
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811517627.9A
Other languages
Chinese (zh)
Other versions
CN109635733A (en)
Inventor
陈浩
陈玲艳
陈稳
高通
赵静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen
Priority to CN201811517627.9A
Publication of CN109635733A
Application granted
Publication of CN109635733B
Legal status: Active (Current)
Anticipated expiration


Abstract

A parking lot and vehicle target detection method based on visual saliency and queue correction, belonging to the technical field of vehicle detection in parking lots. The invention addresses the slow processing speed and poor detection performance of existing vehicle target detection methods for remote sensing images. A brightness-based saliency map (BBSM) is designed from the brightness characteristics of parking lot areas and used to coarsely extract those areas; the color and surface features of the parking lot are then used to precisely extract the parking lot outline. Within each precisely extracted outline, suspected regions that may contain vehicles are extracted, and a method for computing the arrangement direction of vehicle queues, based on an edge statistical model, is designed to correct that direction. Finally, a sliding-window method cuts each vehicle queue into suspected vehicle slices; after HOG features are extracted from the slices, an SVM classifier performs binary classification, and targets classified as vehicles are marked back onto the original image, achieving vehicle detection. The invention is applied in the technical field of vehicle detection in parking lots.

Description

Translated from Chinese
Parking lot and vehicle target detection method based on visual saliency and queue correction

Technical Field

The invention belongs to the technical field of vehicle detection in parking lots, and in particular relates to a parking lot and vehicle target detection method.

Background Art

The detection of vehicle targets in remote sensing images has important applications in urban planning, traffic management and related fields. However, most existing vehicle detection research focuses on road vehicles; comparatively little work addresses vehicles parked in parking lots. The methods can be roughly divided into two types: target classification based on template matching, and target classification based on feature extraction.

Template matching is one of the basic methods of target classification. The similarity between a template image and the image of the target to be recognized is measured, for example by the Euclidean distance between pixels, and the category of the region or target to be recognized is determined accordingly. However, such a method can only match samples contained in the template library, so the algorithm generalizes poorly: it lacks invariance to illumination, rotation and viewing angle, and its point-by-point computation is so complex and time-consuming that it cannot be used for real-time processing. Feature-based classification is a more general and effective approach for remote sensing region extraction and target detection and recognition. It analyzes differences between targets and false alarms in selected features, such as scale-invariant feature transform features, histogram of oriented gradients (HOG) features, geometric invariant moments, aspect ratio and texture; the features most useful for classification are used to describe image slices, which are finally classified with machine learning methods.

The main difficulties in vehicle target detection for existing remote sensing images are the wide area span, the large data volume, and the weak detail features of targets at sub-meter resolution. These lead to high memory usage and slow processing, and the few features that can be extracted from a vehicle sample slice degrade the classifier and hence the detection performance.

It is therefore necessary to study a method that both reduces the processing of useless data and accurately detects vehicle targets.

Summary of the Invention

The purpose of the present invention is to solve two problems of existing remote sensing image vehicle target detection methods: the large data volume, which results in slow processing, and the few features extractable from vehicle sample slices, which results in poor detection performance.

The technical scheme adopted by the present invention to solve the above problems is a parking lot and vehicle target detection method based on visual saliency and queue correction, comprising the following steps:

Step 1: Input a sub-meter high-resolution optical remote sensing image; based on the brightness characteristics and visual saliency of parking lots, compute the BBSM saliency map of the input image and binarize it.

Step 2: According to the surface features of parking lots, segment the binarized BBSM saliency map into superpixels to obtain all superpixel blocks; set a screening condition and screen all the blocks; compute the centroid map CDM(i′,j′) of each screened superpixel block, and use it to update the binarized BBSM saliency map, obtaining the centroid map CDM of the binarized BBSM saliency map.

From the obtained CDM, compute the centroid density distribution index (CDDI) map of the binarized BBSM saliency map, and obtain the ROI image from the CDDI map, i.e. the coarsely extracted parking lot area image.

Step 3: From the coarsely extracted parking lot area image and the color and surface features of parking lots, obtain the finely extracted parking lot area image, completing parking lot detection.

Step 4: Compute the SR saliency map of the finely extracted parking lot area image and extract the suspected vehicle regions within it.

Compute the angle of the queue arrangement direction of each suspected vehicle region, and rotate all suspected vehicle regions to the horizontal direction according to the obtained angles, completing the queue correction of the suspected vehicle regions.

Step 5: Cut all rotated suspected vehicle regions into slices with a sliding window, extract HOG features from the slices, classify the extracted features with an SVM classifier, and mark the slices classified as vehicles back onto the original image, completing vehicle detection.

The beneficial effects of the present invention are as follows. The invention proposes a parking lot and vehicle target detection method based on visual saliency and queue correction. First, exploiting the brightness characteristics of parking lot areas (high brightness, with a brightness distribution occupying a small interval relative to the input image), a brightness-based saliency map, BBSM, is designed for coarse extraction of parking lot areas; the colorless, high-brightness color feature and the large-area surface feature of parking lots are then used to precisely extract the parking lot outline, completing parking lot detection. Within each precisely extracted outline, an SR saliency map is used to extract suspected regions that may contain vehicles, and a method for computing the arrangement direction of vehicle queues based on an edge statistical model is designed to correct that direction. Finally, a sliding-window method cuts the vehicle queues into a large number of suspected vehicle slices; HOG features are extracted from these slices, an SVM classifier separates true from false vehicle slices, and targets classified as vehicles are marked back onto the original image, finally achieving vehicle detection. The parking lot extraction method and the vehicle detection method based on queue correction and the SR saliency map designed by the invention ensure a vehicle detection accuracy above 85%.

Moreover, compared with the traditional method, the BBSM saliency map computation designed by the invention significantly accelerates vehicle detection.

Brief Description of the Drawings

Fig. 1 is a flowchart of the parking lot and vehicle target detection method based on visual saliency and queue correction of the present invention;

Fig. 2 is a schematic diagram of the input sub-meter high-resolution optical remote sensing image described in Embodiment 2 of the present invention;

Fig. 3 is the BBSM saliency map computed in Embodiment 2 of the present invention;

Fig. 4 is a schematic diagram of the coarsely extracted parking lot area image described in Embodiment 3 of the present invention;

Fig. 5 is a schematic diagram of the finely extracted parking lot area image described in Embodiment 4 of the present invention;

Fig. 6 is the SR saliency map of a parking lot area described in Embodiment 5 of the present invention;

Fig. 7 is the binarized SR saliency map of a parking lot area described in Embodiment 5 of the present invention;

Fig. 8 is a schematic diagram of a deviation in the direction of a suspected vehicle region described in Embodiment 5 of the present invention;

Fig. 9 shows the edge detection result of the suspected vehicle region described in Embodiment 5 of the present invention;

Fig. 10 shows the AngleNum statistics of Embodiment 5 of the present invention;

where the abscissa is the angle value and the ordinate is the number of point pairs;

Fig. 11 shows vehicle target detection result 1 of the present invention;

Fig. 12 shows vehicle target detection result 2 of the present invention;

Fig. 13 shows vehicle target detection result 3 of the present invention;

Fig. 14 shows vehicle target detection result 4 of the present invention.

Detailed Description of Embodiments

Embodiment 1: As shown in Fig. 1, the parking lot and vehicle target detection method based on visual saliency and queue correction described in this embodiment comprises the following steps:

Step 1: Input a sub-meter high-resolution optical remote sensing image; based on the brightness characteristics and visual saliency of parking lots, compute the BBSM saliency map of the input image and binarize it.

Step 2: According to the surface features of parking lots, segment the binarized BBSM saliency map into superpixels to obtain all superpixel blocks; set a screening condition and screen all the blocks; compute the centroid map CDM(i′,j′) of each screened superpixel block, and use it to update the binarized BBSM saliency map, obtaining the centroid map CDM of the binarized BBSM saliency map.

From the obtained CDM, compute the centroid density distribution index (CDDI) map of the binarized BBSM saliency map, and obtain the ROI image from the CDDI map, i.e. the coarsely extracted parking lot area image.

Step 3: From the coarsely extracted parking lot area image and the color and surface features of parking lots, obtain the finely extracted parking lot area image, completing parking lot detection.

Step 4: Compute the SR saliency map of the finely extracted parking lot area image and extract the suspected vehicle regions within it.

Compute the angle of the queue arrangement direction of each suspected vehicle region, and rotate all suspected vehicle regions to the horizontal direction according to the obtained angles, completing the queue correction of the suspected vehicle regions.

Step 5: Cut all rotated suspected vehicle regions into slices with a sliding window, extract HOG features from the slices, classify the extracted features with an SVM classifier, and mark the slices classified as vehicles back onto the original image, completing vehicle detection.
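The sliding-window classification of step 5 can be sketched as follows. This is an illustrative Python/NumPy sketch, not the patent's code: `feature_fn` stands in for HOG extraction and `classifier` for a trained SVM (both assumed interfaces, e.g. `skimage.feature.hog` and an `sklearn` `LinearSVC`), and the window and stride sizes are illustrative, not taken from the patent.

```python
import numpy as np

def sliding_slices(region, win=(40, 40), step=20):
    """Yield (top, left, slice) for each window over a horizontally
    aligned suspect region, cut with a sliding window."""
    wh, ww = win
    H, W = region.shape[:2]
    for top in range(0, H - wh + 1, step):
        for left in range(0, W - ww + 1, step):
            yield top, left, region[top:top + wh, left:left + ww]

def detect(region, feature_fn, classifier, win=(40, 40), step=20):
    """Mark windows the classifier labels as vehicle (1).
    feature_fn and classifier.predict are assumed interfaces standing
    in for HOG extraction and the trained SVM."""
    boxes = []
    for top, left, sl in sliding_slices(region, win, step):
        if classifier.predict([feature_fn(sl)])[0] == 1:
            boxes.append((top, left, win[0], win[1]))
    return boxes
```

The boxes returned here would then be mapped back (undoing the queue-correction rotation) onto the original image.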

Embodiment 2: This embodiment differs from Embodiment 1 in that the specific process of step 1 is as follows:

Step 1-1: As shown in Fig. 2, input a sub-meter high-resolution optical remote sensing image and convert it to the luminance channel to obtain the luminance image I of size M×N, where M and N are the height and width of I, respectively.

Count the luminance values u, u = 0, ..., 255, of all pixels in the luminance image I.

The count P_u is the number of pixels in I with luminance value u; for example, P_14 is the number of pixels in I with luminance value 14. The BBSM value of each pixel of I is then computed as:

BBSM(i, j) = Σ_{u=0}^{255} P_u · |I(i, j) − u|

where (i, j) denotes any pixel of the luminance image I, I(i, j) the luminance value of pixel (i, j), and BBSM(i, j) the BBSM value of pixel (i, j).

The BBSM values of all pixels form the BBSM saliency map, shown in Fig. 3.

Step 1-2: Select the binarization threshold T1 and binarize the BBSM saliency map to obtain the binarized BBSM saliency map BBSM′:

T1 = 0.3 × max(BBSM) + 0.7 × min(BBSM)

where max(BBSM) and min(BBSM) are the maximum and minimum of the BBSM saliency map.

The binarized BBSM value of pixel (i, j) is then:

BBSM′(i, j) = 1 if BBSM(i, j) > T1, and 0 otherwise.

For comparison, the traditional method of computing the BBSM saliency map is: input a sub-meter high-resolution optical remote sensing image and convert it to the luminance channel to obtain the luminance image I of size M×N, with i = 1, ..., M and j = 1, ..., N; the BBSM value of a pixel of I is then:

BBSM(i, j) = Σ_{m=1}^{M} Σ_{n=1}^{N} D(I(i, j), I(m, n))

where M and N are the height and width of I, (i, j) denotes any pixel of I, I(i, j) its luminance value, BBSM(i, j) its BBSM value, and D(I(i, j), I(m, n)) is the absolute difference of the luminance values I(i, j) and I(m, n), i.e.

D(I(i, j), I(m, n)) = |I(i, j) − I(m, n)|

The BBSM values of all pixels form the BBSM saliency map.

Since pixels of equal luminance in the luminance image I must have equal BBSM values, the BBSM computation of this embodiment is significantly faster than the traditional method.
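The speed-up can be sketched in Python/NumPy (an illustrative sketch, not the patent's code): since every grey level shares one BBSM value, a 256-entry lookup table built from the histogram replaces the per-pixel double sum.

```python
import numpy as np

def bbsm_fast(I):
    """Histogram-accelerated BBSM: for each grey level v,
    BBSM(v) = sum_u P_u * |v - u|, computed once and looked up."""
    I = np.asarray(I, dtype=np.int64)
    P = np.bincount(I.ravel(), minlength=256)         # P_u, u = 0..255
    levels = np.arange(256)
    lookup = np.abs(levels[:, None] - levels[None, :]) @ P
    return lookup[I]

def bbsm_traditional(I):
    """Direct double sum over all pixel pairs (for comparison only)."""
    I = np.asarray(I, dtype=np.int64)
    return np.abs(I[:, :, None, None] - I[None, None, :, :]).sum(axis=(2, 3))

def binarize_bbsm(bbsm):
    """Step 1-2: threshold T1 = 0.3*max(BBSM) + 0.7*min(BBSM)."""
    t1 = 0.3 * bbsm.max() + 0.7 * bbsm.min()
    return (bbsm > t1).astype(np.uint8)
```

The fast version is O(M·N + 256²) instead of O((M·N)²), which is the source of the acceleration claimed above.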

Embodiment 3: This embodiment differs from Embodiment 1 in that the specific process of step 2 is as follows:

Step 2-1: In step 1, the BBSM used only the brightness feature of the parking lot; the surface feature of the parking lot is used next.

On the binarized BBSM saliency map, take a cutting step W in the height direction and H in the width direction, and use W and H to cut the map into ⌊M/W⌋ × ⌊N/H⌋ superpixel blocks SP; each superpixel block is in fact a small binary image. When M is not divisible by W or N is not divisible by H, a small edge region of the image is removed so that M is divisible by W and N is divisible by H.

The weight W_SP of each superpixel block is defined as:

W_SP = Σ_{i′=1}^{W} Σ_{j′=1}^{H} SP(i′, j′)

where SP(i′, j′) is the binarized BBSM value of pixel (i′, j′) within the superpixel block.

Step 2-2: Set the screening condition W_SP > β·W·H, where β is the screening coefficient; screen the superpixel blocks obtained in step 2-1 with this condition, keeping the blocks whose weight satisfies it and discarding the rest.

Step 2-3: Compute the centroid map CDM(i′, j′) of each screened superpixel block. The CDM is a binary map that marks the centroid of every superpixel block and is computed as:

CDM(i′, j′) = 1 if (i′, j′) = (ī, j̄), and 0 otherwise

where ī and j̄ denote the centroid of the superpixel block, i.e. the mean row and column coordinates of all points of value 1 in the block. A point of value 1 in the centroid map thus indicates that a parking lot exists within a region of a certain size centered on it. Update the binarized BBSM saliency map with the computed centroid map CDM(i′, j′) of each superpixel block to obtain the centroid map CDM of the binarized BBSM saliency map.

Step 2-4: Convolve the centroid map CDM of the binarized BBSM saliency map with a k×k unit matrix Te_{k×k} to obtain the centroid density distribution index (CDDI) map of the binarized BBSM saliency map:

CDDI = CDM * Te_{k×k}

where * denotes two-dimensional convolution and k = 120. This value is chosen because, at 4-meter resolution, 120×120 pixels correspond to 480 m × 480 m, larger than most parking lots; the convolution therefore guarantees that the generated ROI image covers all parking lots without omission, preventing excessive missed alarms in the final vehicle detection.

Step 2-5: Select the binarization threshold T2 and binarize the CDDI map of the binarized BBSM saliency map to obtain the ROI image, i.e. the coarsely extracted parking lot area image, shown in Fig. 4; in other words, every point where the CDDI is nonzero is set to 1.
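Steps 2-1 through 2-5 can be sketched as follows (illustrative Python/NumPy, not the patent's code). One assumption is labeled explicitly: the patent calls Te a k×k unit matrix, but since the CDDI measures centroid density an all-ones box kernel is assumed here, and the final ROI keeps every nonzero CDDI point as in step 2-5.

```python
import numpy as np

def coarse_roi(bbsm_bin, W, H, beta, k):
    """Superpixel weights, screening, centroid map CDM, and the ROI
    where the CDDI (box-kernel density of centroids) is nonzero."""
    M, N = bbsm_bin.shape
    B = bbsm_bin[:(M // W) * W, :(N // H) * H]   # drop edge remainder
    cdm = np.zeros_like(B)
    for r in range(0, B.shape[0], W):
        for c in range(0, B.shape[1], H):
            sp = B[r:r + W, c:c + H]             # one superpixel block
            if sp.sum() > beta * W * H:          # screening: W_SP > beta*W*H
                ys, xs = np.nonzero(sp)
                cdm[r + int(ys.mean()), c + int(xs.mean())] = 1
    # CDDI > 0 exactly where a k x k box around some centroid covers the pixel
    roi = np.zeros_like(B)
    half = k // 2
    for y, x in zip(*np.nonzero(cdm)):
        roi[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1] = 1
    return roi
```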

Embodiment 4: This embodiment differs from Embodiment 1 in that the specific process of step 3 is as follows:

Step 3-1: Use the color feature of parking lots to eliminate irrelevant non-pavement regions. The color feature of a parking lot is that its surface presents a neutral grey-white color with high brightness.

In the coarsely extracted parking lot area image, screen the pixels that satisfy the screening condition on the R, G and B bands: the values of all three bands lie in the range (130, 250), and the difference between any two bands does not exceed 40.

Step 3-2: Use the surface feature of parking lots to eliminate road regions; the surface feature is that a parking lot has a larger area than other pavement and site regions.

Fill the holes in the screening result of step 3-1; use an erosion operation to remove roads narrower than 10 meters, then compute the area of each connected component and remove those smaller than 800 square meters; after dilation restores the shapes, the fine extraction result of the parking lot areas of the sub-meter high-resolution optical remote sensing image is obtained, completing parking lot detection. The fine extraction result is shown in Fig. 5.
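The color screening of step 3-1 can be sketched directly (illustrative Python/NumPy); the morphology of step 3-2 is only indicated in comments, since hole filling and connected-component analysis would pull in e.g. `scipy.ndimage`.

```python
import numpy as np

def parking_color_mask(rgb):
    """Step 3-1: keep bright, neutral grey-white pixels -- every band
    in (130, 250) and any two bands differing by at most 40."""
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    in_range = ((130 < r) & (r < 250) &
                (130 < g) & (g < 250) &
                (130 < b) & (b < 250))
    neutral = ((np.abs(r - g) <= 40) &
               (np.abs(r - b) <= 40) &
               (np.abs(g - b) <= 40))
    return in_range & neutral

# Step 3-2 would then fill holes, erode away roads narrower than 10 m,
# drop connected components under 800 m^2, and dilate back (e.g. with
# scipy.ndimage.binary_fill_holes / binary_erosion / label).
```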

Embodiment 5: This embodiment differs from Embodiment 1 in that the specific process of step 4 is as follows:

Step 4-1: Apply the Fourier transform to the finely extracted parking lot area image to compute the amplitude spectrum A(f) and the phase spectrum P(f):

A(f) = |F[I′(x)]|

P(f) = angle(F[I′(x)])

where I′(x) is the luminance image within the finely extracted parking lot area, |F[I′(x)]| is the magnitude of its Fourier transform, angle(F[I′(x)]) is the phase of its Fourier transform, and F denotes the Fourier transform.

Convert the amplitude spectrum A(f) into the log spectrum L(f), apply linear spatial filtering to the log spectrum (a 3×3 mean filter), and subtract the filtering result from L(f) to obtain the residual spectrum R(f):

L(f) = log(A(f))

R(f) = L(f) − h_n(f) * L(f)

where h_n(f) is the mean-filter operator.

Apply the inverse Fourier transform to the residual spectrum R(f) and the phase spectrum P(f), then apply linear spatial filtering with a Gaussian filter to the inverse-transform result to obtain the SR saliency map, shown in Fig. 6:

SR(x) = g(x) * |F⁻¹[exp(R(f) + i·P(f))]|²

where g(x) is the Gaussian filter operator, SR(x) is the SR saliency map, and F⁻¹ denotes the inverse Fourier transform.

Normalize the obtained SR saliency map, then binarize the normalized map with the threshold T3 = 0.12 · max(SR), where max(SR) is the maximum of the SR saliency map; in the binarized SR saliency map, the regions of value 1 are the suspected vehicle regions. The suspected vehicle regions thus form a binary map marking all regions suspected of containing vehicles, as shown in Fig. 7.
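Step 4-1 is the standard spectral-residual computation, which can be sketched in Python/NumPy (an illustrative sketch: the 3×3 mean filter is written out by hand, and the final Gaussian smoothing and normalization are reduced to a simple rescaling).

```python
import numpy as np

def sr_saliency(I, blur=3):
    """Spectral residual saliency: amplitude/phase spectra, log
    spectrum, residual against a mean-filtered log spectrum, inverse
    transform, then rescale to [0, 1]."""
    F = np.fft.fft2(np.asarray(I, dtype=np.float64))
    A, P = np.abs(F), np.angle(F)          # A(f), P(f)
    L = np.log(A + 1e-12)                  # L(f) = log(A(f))
    pad = blur // 2
    Lp = np.pad(L, pad, mode='edge')       # h_n(f) * L(f): 3x3 mean filter
    mean = sum(Lp[dy:dy + L.shape[0], dx:dx + L.shape[1]]
               for dy in range(blur) for dx in range(blur)) / blur ** 2
    R = L - mean                           # residual spectrum R(f)
    sr = np.abs(np.fft.ifft2(np.exp(R + 1j * P))) ** 2
    return sr / sr.max()                   # normalized SR map

def suspect_regions(sr):
    """Binarize with T3 = 0.12 * max(SR); value-1 points are suspects."""
    return (sr > 0.12 * sr.max()).astype(np.uint8)
```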

In a parking lot, most vehicles are clustered: many vehicles are parked in a group with the same orientation. To make the vehicle regions in the extracted suspected-region images horizontally arranged, the invention proposes a method for computing the arrangement direction of objects in a region. Fig. 8 shows one suspected-region image from a parking lot; next to the vehicle queue there is a single separate vehicle. This isolated vehicle is highly salient in the SR saliency map, so after binarization its mark is connected to that of the vehicle queue, and the long-side direction of the minimum bounding rectangle of the connected component no longer matches the original arrangement direction. The cause is that the queue and the isolated vehicle are close together, so their binarized SR marks merge; taking the minimum bounding quadrilateral of the connected component directly lets the queue and the isolated target influence the final direction equally. Human judgment, by contrast, weighs the influence of all vehicles: if the final angle is treated as an election, each of the 15 vehicles in the queue should hold one vote and the isolated vehicle likewise one vote, and the angle with the most votes wins. Based on this principle, this embodiment designs the following angle computation method.

步骤四二、对于队列停放的车辆,利用Sobel算子提取疑似车辆区域的边缘,得到疑似车辆区域的边缘图,疑似车辆区域的边缘图的结果如图9所示,再利用统计的方法,计算疑似车辆区域的边缘图中任意两点连线与水平方向的夹角的角度值,记录所有的角度值对应的点对数量,记为AngleNumu′,u′=0,...,179,(疑似车辆区域的边缘图中任意两点连线与水平方向的夹角的定义为:以任意两点连线的延长线与水平方向的交点为圆心,圆心右侧为水平方向正向,圆心左侧为水平方向负向,定义由水平方向正向开始,到任意两点连线为止经过的角度为“疑似车辆区域的边缘图中任意两点连线与水平方向的夹角”,夹角的取值范围是0—179度)其中:AngleNumu′代表角度值u′对应的点对数量;例如AngleNum5表示疑似车辆区域的边缘图中所有的连线角度为5(±0.5)度的点对的总个数;Step 42: For the vehicles parked in the queue, use the Sobel operator to extract the edge of the suspected vehicle area to obtain the edge map of the suspected vehicle area. The result of the edge map of the suspected vehicle area is shown in Figure 9, and then use the statistical method to calculate The angle value of the angle between any two points in the edge map of the suspected vehicle area and the horizontal direction, record the number of point pairs corresponding to all the angle values, denoted as AngleNumu′ ,u′=0,...,179, (The angle between the line connecting any two points and the horizontal direction in the edge map of the suspected vehicle area is defined as: the intersection of the extension line connecting any two points and the horizontal direction is the center of the circle, and the right side of the center of the circle is the positive direction of the horizontal direction, and the center of the circle is the horizontal direction. The left side is the negative direction of the horizontal direction, and the angle from the positive direction of the horizontal direction to the connection of any two points is defined as "the angle between the connection line between any two points in the edge map of the suspected vehicle area and the horizontal direction". The value range is 0-179 degrees) where: AngleNumu' represents the number of point pairs corresponding to the angle value u'; for example, AngleNum5 means that all the connection angles in the edge map of the suspected vehicle area are 5 (±0.5) degrees The total number of point pairs;

Fig. 10 shows the AngleNum statistics for Fig. 9; here the angle corresponding to the maximum of AngleNum is 6 degrees.

Take the angle value u′ corresponding to the largest AngleNum_u′ as the angle of the queue arrangement direction of the suspected vehicle region, and rotate the region by this angle to obtain a horizontally arranged suspected vehicle region;

For example, if the angle for a suspected vehicle region is 8°, rotate the region by 8° toward the image edge direction (i.e., the horizontal) so that the region becomes parallel to the image edge, yielding a horizontally arranged suspected vehicle region.

Step 4-3: For vehicles not parked in a queue (a non-queue region is a suspected vehicle region whose area is no more than 25 square meters), i.e., the region contains a single vehicle that nearly fills it, the small area means the long-side direction of the minimum bounding rectangle generally represents the vehicle's orientation accurately. In this case, directly take the angle between the region's wide side and the horizontal as the arrangement-direction angle, and rotate the region by this angle to obtain a horizontally arranged suspected vehicle region;

Step 4-4: Repeat steps 4-2 and 4-3 to compute the arrangement-direction angles of all suspected vehicle regions in the whole parking lot, and rotate every region by its corresponding angle to obtain horizontally arranged regions, completing the queue correction of the suspected vehicle regions.
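The suspected vehicle regions that this queue correction operates on are extracted from the SR saliency map computed in step 4-1. A minimal numpy sketch of the standard spectral-residual computation follows; the function name, the mean-filter size, and the Gaussian width are assumptions of this sketch, not values taken from the patent.

```python
import numpy as np

def sr_saliency(gray, avg_size=3, sigma=2.5):
    """Spectral-residual (SR) saliency of a grayscale image: subtract a
    local average from the log-amplitude spectrum, recombine with the
    original phase, invert, and smooth with a Gaussian."""
    F = np.fft.fft2(gray)
    A = np.abs(F)                      # amplitude spectrum A(f)
    P = np.angle(F)                    # phase spectrum P(f)
    L = np.log(A + 1e-12)              # log spectrum L(f)
    # mean filter h_n(f) * L(f), implemented as a simple box average
    pad = avg_size // 2
    Lp = np.pad(L, pad, mode='edge')
    box = np.zeros_like(L)
    for dy in range(avg_size):
        for dx in range(avg_size):
            box += Lp[dy:dy + L.shape[0], dx:dx + L.shape[1]]
    box /= avg_size ** 2
    R = L - box                        # residual spectrum R(f)
    sal = np.abs(np.fft.ifft2(np.exp(R + 1j * P))) ** 2
    # Gaussian smoothing g(x), separable, with a truncated 7-tap kernel
    k = np.arange(-3, 4)
    g = np.exp(-k ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    sal = np.apply_along_axis(lambda v: np.convolve(v, g, mode='same'), 0, sal)
    sal = np.apply_along_axis(lambda v: np.convolve(v, g, mode='same'), 1, sal)
    return sal / (sal.max() + 1e-12)   # normalized to [0, 1]
```

Thresholding the normalized map at 0.12 times its maximum, as in step 4-1, would then yield the suspected vehicle regions.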

Embodiment 6: This embodiment differs from Embodiment 1 in that the specific process of step five is:

Because vehicles are small at sub-meter resolution, it is difficult to locate them preliminarily from features alone, and missed detections should also be kept as few as possible;

Set the sliding-window size to 20×12 pixels with a step of 2 pixels; cut all rotated suspected vehicle regions into slices, extract HOG features from each slice, use an SVM classifier to binary-classify the extracted HOG features, and mark the slices classified as vehicles back onto the original image, completing vehicle detection.
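The slicing step above can be sketched as follows, assuming numpy; the helper name `slide_slices` is mine.

```python
import numpy as np

def slide_slices(region, win_h=20, win_w=12, step=2):
    """Cut a (rotated, horizontally arranged) suspected-vehicle region
    into win_h x win_w slices at the given stride, as in step five."""
    H, W = region.shape[:2]
    slices = []
    for y in range(0, H - win_h + 1, step):
        for x in range(0, W - win_w + 1, step):
            slices.append(region[y:y + win_h, x:x + win_w])
    return slices
```

For an H×W region this produces ((H−20)//2 + 1) × ((W−12)//2 + 1) slices, each of which would then go through HOG extraction and SVM classification.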

The sliding window is larger than any conventional vehicle. This is so that when the window contains a vehicle, the sample slice holds the complete vehicle together with its edges; HOG extraction then captures both the HOG features of the vehicle's interior details and those of its outline. This exploits the vehicle's feature information more fully and helps improve classification accuracy.

The binary SVM classifier used in the present invention is pretrained: the positive samples used to train it are HOG features extracted from 400 manually selected slices of real vehicles, and the negative samples are HOG features extracted from 500 manually selected non-vehicle slices.

Embodiment 7: This embodiment differs from Embodiment 1 or 6 in the parameters used for HOG extraction: the cell size is 4 pixels, the number of orientation bins is 5, and the step is 2 pixels. The final extracted HOG feature is 420-dimensional.
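The stated 420 dimensions are consistent with an OpenCV-style HOG over the 20×12 window using 8-pixel (2×2-cell) blocks slid at the 2-pixel step; the block size is an assumption of this sketch, since Embodiment 7 does not state it. A quick arithmetic check:

```python
def hog_dim(win_h=20, win_w=12, cell=4, block_cells=2, stride=2, bins=5):
    """Descriptor length for an OpenCV-style HOG:
    block positions x cells per block x orientation bins."""
    block = cell * block_cells                 # 8-px block (assumed)
    ny = (win_h - block) // stride + 1         # block positions down:   7
    nx = (win_w - block) // stride + 1         # block positions across: 3
    return ny * nx * block_cells ** 2 * bins   # 7 * 3 * 4 * 5 = 420
```

Under these assumptions the 7×3 block positions, 4 cells per block, and 5 bins reproduce the 420-dimensional feature of Embodiment 7.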

Embodiment 8: This embodiment differs from Embodiment 3 in that the height-direction cutting step W and the width-direction cutting step H both take values in [10, 100].

Embodiment 9: This embodiment differs from Embodiment 3 in that the screening coefficient β takes values in [0.3, 0.8].

Embodiment 10: This embodiment differs from Embodiment 3 in that the binarization threshold T2 takes values in [0.1, 0.9].

The following example verifies the beneficial effects of the present invention:

The experimental image, obtained from Google Earth, is a large-scene image of the area around a large parking lot in Texas, USA; it is 13000×13000 pixels at 0.5 m resolution and contains 5145 vehicle targets. The test results show that the parking lot extraction method based on the BBSM saliency map extracts 93.8% of parking-lot pixels (counted per pixel) and delineates the parking-lot area fairly accurately; the vehicle detection algorithm based on queue correction and the SR saliency map achieves, counted per vehicle target, a precision of 85.12% and a recall of 89.76%, a high detection rate. Figs. 11, 12, 13, and 14 show partial detection results.
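From the reported precision and recall one can also derive the balanced F-score, which the source does not state; this is simple arithmetic, not a figure from the patent:

```python
# Reported test results for the vehicle detection algorithm
precision, recall = 0.8512, 0.8976

# F1 is the harmonic mean of precision and recall; roughly 0.874 here
f1 = 2 * precision * recall / (precision + recall)
```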

The above examples of the present invention merely illustrate its computational model and procedure in detail and are not intended to limit its embodiments. On the basis of the above description, those of ordinary skill in the art can make other changes or variations of different forms; all embodiments cannot be exhaustively listed here, and any obvious change or variation derived from the technical solution of the present invention remains within its scope of protection.

Claims (9)

Translated from Chinese
1. A parking lot and vehicle target detection method based on visual saliency and queue correction, characterized in that the method comprises the following steps:

Step 1: Input a sub-meter high-resolution optical remote sensing image; based on the brightness characteristics and visual saliency of the parking lot, compute the BBSM saliency map of the input image and binarize the computed map. The specific process is:

Step 1-1: Input the sub-meter high-resolution optical remote sensing image and convert it to the luminance channel, obtaining a luminance image I of size M×N, where M and N are the height and width of I, respectively;

Count the luminance values u, u = 0, ..., 255, of all pixels in the luminance image I; the statistic P_u is the number of pixels in I whose luminance value equals u. The BBSM value of each pixel of I is then computed as:

[formula image FDA0002536231760000011, not reproduced]

where (i, j) is any pixel of the luminance image I, I(i, j) is its luminance value, and BBSM(i, j) is the BBSM value of pixel (i, j);

The BBSM values of all pixels then form the BBSM saliency map;

Step 1-2: Select the binarization threshold T1 and binarize the BBSM saliency map to obtain the binarized BBSM saliency map BBSM′:

T1 = 0.3 × max(BBSM) + 0.7 × min(BBSM)

where max(BBSM) is the maximum of the BBSM saliency map and min(BBSM) its minimum;

The binarized BBSM value of pixel (i, j) is then:

[formula image FDA0002536231760000012, not reproduced]
Step 2: According to the surface features of the parking lot, perform superpixel segmentation on the binarized BBSM saliency map to obtain all superpixel blocks; set screening conditions and screen all the segmented superpixel blocks; compute the centroid map CDM(i′, j′) of each retained superpixel block, where (i′, j′) denotes pixel (i′, j′) within the superpixel block; use the computed centroid map of each superpixel block to update the binarized BBSM saliency map, obtaining the centroid map CDM of the binarized BBSM saliency map;

From the obtained CDM, compute the centroid density distribution index (CDDI) map of the binarized BBSM saliency map, and from the CDDI map obtain the ROI image, i.e., the coarsely extracted parking-lot area image;

Step 3: From the coarsely extracted parking-lot area image together with the color and surface features of the parking lot, obtain the finely extracted parking-lot area image, completing parking lot detection;

Step 4: Compute the SR saliency map of the finely extracted parking-lot area image, and extract the suspected vehicle regions within the SR saliency map;

Compute the queue-arrangement angle of each suspected vehicle region, and rotate all suspected vehicle regions to the horizontal according to the computed angles, completing the queue correction of the suspected vehicle regions;

Step 5: Use a sliding-window cutting method to cut all rotated suspected vehicle regions into slices, extract HOG features from the slices, classify the extracted HOG features with an SVM classifier, and mark the slices classified as vehicles back onto the original image, completing vehicle detection.

2. The parking lot and vehicle target detection method based on visual saliency and queue correction according to claim 1, characterized in that the specific process of step 2 is:

Step 2-1: On the binarized BBSM saliency map, take the cutting step in the height direction as W and in the width direction as H, and use W and H to cut the binarized BBSM saliency map into

[formula image FDA0002536231760000021, not reproduced]

superpixel blocks SP;

The weight W_SP of each superpixel block is defined as:

[formula image FDA0002536231760000022, not reproduced]

where SP(i′, j′) is the binarized BBSM value of pixel (i′, j′) within the superpixel block;

Step 2-2: Set the screening condition W_SP > β·W·H, where β is the screening coefficient; use the set condition to screen the superpixel blocks obtained in step 2-1, retaining blocks whose weight exceeds the screening condition and discarding blocks whose weight does not;

Step 2-3: Compute the centroid map CDM(i′, j′) of each retained superpixel block:

[formula image FDA0002536231760000023, not reproduced]

where [formula images FDA0002536231760000024 and FDA0002536231760000025, not reproduced] denote the centroid of the superpixel block; use the computed centroid map CDM(i′, j′) of each superpixel block to update the binarized BBSM saliency map, obtaining its centroid map CDM;

Step 2-4: Convolve the centroid map CDM of the binarized BBSM saliency map with a k×k unit matrix Te_{k×k} to obtain the centroid density distribution index (CDDI) map of the binarized BBSM saliency map;

[formula image FDA0002536231760000031, not reproduced]

where: [formula image FDA0002536231760000032, not reproduced]

Step 2-5: Select the binarization threshold T2 and binarize the CDDI map of the binarized BBSM saliency map to obtain the ROI image, yielding the coarsely extracted parking-lot area image.
3. The parking lot and vehicle target detection method based on visual saliency and queue correction according to claim 1, characterized in that the specific process of step 3 is:

Step 3-1: Within the coarsely extracted parking-lot area image, screen the pixels that satisfy the screening condition by setting values for the R, G, and B bands, where the screening condition is: the R, G, and B values all lie in the range (130, 250), and the difference between any two bands is no more than 40;

Step 3-2: Perform hole filling on the screening result of step 3-1; use an erosion operation to remove roads narrower than 10 meters; then compute the area of each connected component to remove components smaller than 800 square meters; after dilation for restoration, obtain the fine extraction result of the parking-lot area of the sub-meter high-resolution optical remote sensing image, yielding the finely extracted parking-lot area image and completing parking lot detection.

4. The parking lot and vehicle target detection method based on visual saliency and queue correction according to claim 1, characterized in that the specific process of step 4 is:

Step 4-1: Apply the Fourier transform to the finely extracted parking-lot area image to compute the amplitude spectrum A(f) and the phase spectrum P(f):
[formula images FDA0002536231760000033 and FDA0002536231760000034, not reproduced]

where I′(x) is the luminance image within the finely extracted parking-lot area; [formula image FDA0002536231760000035, not reproduced] denotes taking the magnitude of the Fourier transform of I′(x), [formula image FDA0002536231760000036, not reproduced] denotes taking the phase value of the Fourier transform of I′(x), and [formula image FDA0002536231760000037, not reproduced] denotes the Fourier transform;
Transform the amplitude spectrum A(f) into the log spectrum L(f), apply linear spatial filtering to the log spectrum, and take the difference between L(f) and the filtered result to obtain the residual spectrum R(f):

L(f) = log(A(f))

R(f) = L(f) − h_n(f) * L(f)

where h_n(f) is a mean-filtering operator;

Apply the inverse Fourier transform to the residual spectrum R(f) and the phase spectrum P(f) to obtain the inverse-transform result, then apply linear spatial filtering with a Gaussian filter to that result to obtain the SR saliency map:
[formula image FDA0002536231760000041, not reproduced]

where g(x) is a Gaussian filtering operator, SR(x) denotes the SR saliency map, and [formula image FDA0002536231760000042, not reproduced] denotes the inverse Fourier transform;
Normalize the obtained SR saliency map, then binarize the normalized map using the binarization threshold T3 = 0.12·max(SR), where max(SR) is the maximum of the SR saliency map, obtaining the binarized SR saliency map; the regions with value 1 in the binarized SR saliency map are the suspected vehicle regions;

Step 4-2: For vehicles parked in a queue, use the Sobel operator to extract the edges of the suspected vehicle region to obtain its edge map; compute the angle between the line through any two points of the edge map and the horizontal direction, and record the number of point pairs for every angle value, denoted AngleNum_u′, u′ = 0, ..., 179, where AngleNum_u′ is the number of point pairs corresponding to angle value u′;

Take the angle value u′ corresponding to the largest AngleNum_u′ as the angle of the queue arrangement direction of the suspected vehicle region, and rotate the region by this angle to obtain a horizontally arranged suspected vehicle region;

Step 4-3: For vehicles not parked in a queue, directly take the angle between the region's wide side and the horizontal as the arrangement-direction angle, and rotate the region by this angle to obtain a horizontally arranged suspected vehicle region;

Step 4-4: Repeat steps 4-2 and 4-3 to compute the arrangement-direction angles of all suspected vehicle regions in the whole parking lot, and rotate every region by its corresponding angle to obtain horizontally arranged regions, completing the queue correction of the suspected vehicle regions.
5. The parking lot and vehicle target detection method based on visual saliency and queue correction according to claim 1, characterized in that the specific process of step 5 is:

Set the sliding-window size to 20×12 pixels with a step of 2 pixels; cut all rotated suspected vehicle regions into slices, extract HOG features from each slice, use an SVM classifier to binary-classify the extracted HOG features, and mark the slices classified as vehicles back onto the original image, completing vehicle detection.

6. The parking lot and vehicle target detection method based on visual saliency and queue correction according to claim 1 or 5, characterized in that the parameters for HOG extraction are set as: the cell size is 4 pixels, the number of orientation bins is 5, and the step is 2 pixels.

7. The parking lot and vehicle target detection method based on visual saliency and queue correction according to claim 2, characterized in that the height-direction cutting step W and the width-direction cutting step H both take values in [10, 100].

8. The parking lot and vehicle target detection method based on visual saliency and queue correction according to claim 2, characterized in that the screening coefficient β takes values in [0.3, 0.8].

9. The parking lot and vehicle target detection method based on visual saliency and queue correction according to claim 2, characterized in that the binarization threshold T2 takes values in [0.1, 0.9].
CN201811517627.9A2018-12-122018-12-12Parking lot and vehicle target detection method based on visual saliency and queue correctionActiveCN109635733B (en)

Priority Applications (1)

Application NumberPriority DateFiling DateTitle
CN201811517627.9ACN109635733B (en)2018-12-122018-12-12Parking lot and vehicle target detection method based on visual saliency and queue correction

Applications Claiming Priority (1)

Application NumberPriority DateFiling DateTitle
CN201811517627.9ACN109635733B (en)2018-12-122018-12-12Parking lot and vehicle target detection method based on visual saliency and queue correction

Publications (2)

Publication NumberPublication Date
CN109635733A CN109635733A (en)2019-04-16
CN109635733Btrue CN109635733B (en)2020-10-27

Family

ID=66073158

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN201811517627.9AActiveCN109635733B (en)2018-12-122018-12-12Parking lot and vehicle target detection method based on visual saliency and queue correction

Country Status (1)

CountryLink
CN (1)CN109635733B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN110488280B (en)*2019-08-292022-03-11广州小鹏自动驾驶科技有限公司Method and device for correcting parking space profile, vehicle and storage medium
CN111444806B (en)*2020-03-192023-06-20成都云盯科技有限公司Commodity touch information clustering method, device and equipment based on monitoring video
CN111753692B (en)*2020-06-152024-05-28珠海格力电器股份有限公司Target object extraction method, product detection method, device, computer and medium
CN113033408B (en)*2021-03-262023-10-20北京百度网讯科技有限公司Data queue dynamic updating method and device, electronic equipment and storage medium
CN115438702A (en)*2022-10-182022-12-06国网山东省电力公司营销服务中心(计量中心)Power line carrier channel noise detection method and system
CN116091948B (en)*2023-01-092025-09-02上海航遥信息技术有限公司 A method and system for rapidly processing images of aerial remote sensing imaging equipment
CN117876933A (en)*2024-01-152024-04-12浙江吉利控股集团有限公司Chassis neglected loading detection system, method and device
CN118135506B (en)*2024-04-302024-07-16东莞市杰瑞智能科技有限公司Electronic guideboard based on visual unit structure and road condition target self-identification method thereof
CN118196730B (en)*2024-05-132024-08-06深圳金语科技有限公司Method, device, equipment and storage medium for processing vehicle image data

Citations (1)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN103310195A (en)*2013-06-092013-09-18西北工业大学LLC-feature-based weak-supervision recognition method for vehicle high-resolution remote sensing images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US8655016B2 (en)*2011-07-292014-02-18International Business Machines CorporationExample-based object retrieval for video surveillance
CN107133558B (en)*2017-03-132020-10-20北京航空航天大学 An infrared pedestrian saliency detection method based on probability propagation

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN103310195A (en)*2013-06-092013-09-18西北工业大学LLC-feature-based weak-supervision recognition method for vehicle high-resolution remote sensing images

Also Published As

Publication numberPublication date
CN109635733A (en)2019-04-16

Similar Documents

PublicationPublication DateTitle
CN109635733B (en)Parking lot and vehicle target detection method based on visual saliency and queue correction
CN110310264B (en) A large-scale target detection method and device based on DCNN
Yin et al.Hot region selection based on selective search and modified fuzzy C-means in remote sensing images
CN107239751B (en)High-resolution SAR image classification method based on non-subsampled contourlet full convolution network
CN102509290B (en)Saliency-based synthetic aperture radar (SAR) image airfield runway edge detection method
CN105374033B (en)SAR image segmentation method based on ridge ripple deconvolution network and sparse classification
CN114332026B (en)Visual detection method and device for scratch defects on surface of nameplate
CN102819841B (en)Global threshold partitioning method for partitioning target image
Choi et al.Vehicle detection from aerial images using local shape information
CN102708356A (en)Automatic license plate positioning and recognition method based on complex background
CN105005989B (en)A kind of vehicle target dividing method under weak contrast
CN111353371A (en) Shoreline extraction method based on spaceborne SAR images
CN102542293A (en)Class-I extraction and classification method aiming at high-resolution SAR (Synthetic Aperture Radar) image scene interpretation
CN109635789B (en) High-resolution SAR image classification method based on intensity ratio and spatial structure feature extraction
CN112418028A (en)Satellite image ship identification and segmentation method based on deep learning
CN102073873A (en)Method for selecting SAR (spaceborne synthetic aperture radar) scene matching area on basis of SVM (support vector machine)
CN110321855A (en)A kind of greasy weather detection prior-warning device
Kwon et al.ETVOS: An enhanced total variation optimization segmentation approach for SAR sea-ice image segmentation
CN110070545B (en) A Method for Automatically Extracting Urban Built-up Areas from Urban Texture Feature Density
CN119648696B (en)Intelligent detection method system for construction quality of low-grade highway
CN110310263B (en) A method for detecting residential areas in SAR images based on saliency analysis and background priors
CN114926635B (en)Target segmentation method in multi-focus image combined with deep learning method
CN103065296B (en)High-resolution remote sensing image residential area extraction method based on edge feature
CN114140698A (en)Water system information extraction algorithm based on FasterR-CNN
CN104050486B (en)Polarimetric SAR image classification method based on maps and Wishart distance

Legal Events

DateCodeTitleDescription
PB01Publication
PB01Publication
SE01Entry into force of request for substantive examination
SE01Entry into force of request for substantive examination
GR01Patent grant
GR01Patent grant
CB03Change of inventor or designer information
CB03Change of inventor or designer information

Inventor after:Chen Hao

Inventor after:Chen Wen

Inventor after:Gao Tong

Inventor after:Zhao Jing

Inventor before:Chen Hao

Inventor before:Chen Lingyan

Inventor before:Chen Wen

Inventor before:Gao Tong

Inventor before:Zhao Jing

