CN103499350B - High-precision vehicle positioning method and device fusing multi-source information in GPS blind areas - Google Patents

High-precision vehicle positioning method and device fusing multi-source information in GPS blind areas

Info

Publication number
CN103499350B
CN103499350B (application number CN201310455813.5A)
Authority
CN
China
Prior art keywords
vehicle
image
coordinates
positioning
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310455813.5A
Other languages
Chinese (zh)
Other versions
CN103499350A (en)
Inventor
赵祥模
徐志刚
程鑫
张立成
周经美
王润民
杨楠
白国柱
任亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN201310455813.5A
Publication of CN103499350A
Application granted
Publication of CN103499350B
Status: Active
Anticipated expiration

Abstract

The invention discloses a high-precision vehicle positioning method and device that fuse multi-source information in GPS blind areas. The method computes an INS vehicle position from the angular-rate information output by a gyroscope and the acceleration information output by an accelerometer using a strapdown matrix algorithm, obtains a second position estimate through road-surface matching, and fuses the two with a Kalman filtering algorithm to output the final fused position. Because the INS algorithm accumulates error, anchor nodes are placed along the roadside to periodically re-correct the vehicle position. The method provides stable, reliable, high-precision vehicle position information and is suitable for environments without GPS signals, such as building-dense urban roads, mountainous areas, and tunnels.

Description

Translated from Chinese

High-precision vehicle positioning method and device fusing multi-source information in GPS blind areas

Technical Field

The invention belongs to the technical field of traffic detection and active vehicle positioning, and relates to a high-precision vehicle positioning method for GPS blind areas, in particular to a high-precision vehicle positioning method based on multi-source information fusion in GPS blind areas.

Background Art

High-precision vehicle position information is an important parameter in the Internet of Vehicles, and it becomes especially important in complex environments such as mountainous areas and tunnels. Obtaining high-precision vehicle position information in GPS blind areas is therefore of great significance.

At present, the commonly used methods for obtaining high-precision vehicle positions are GPS positioning, inertial navigation, and road-surface matching. GPS is the most widely used positioning technology, and existing intelligent transportation systems generally rely on it for vehicle positioning, but it is difficult for GPS to meet the application requirements of the future Internet of Vehicles, mainly for the following reasons: (1) Low positioning accuracy: ordinary civilian GPS is accurate to 5-25 meters, and even differential GPS has an error of about 1 meter. (2) GPS signal blind areas: when a vehicle travels on mountain highways, through tunnels, or in urban areas with dense buildings, the GPS signal is easily blocked, the vehicle enters a GPS blind area, and GPS positioning becomes unavailable. (3) Low output frequency.

INS (Inertial Navigation System) is a navigation and positioning technology developed in the early 20th century. Its basic principle is to use inertial components such as gyroscopes and accelerometers to sense the rotational angular velocity and acceleration of the carrier during motion according to the laws of mechanics in inertial space, and then, through servo-driven local-vertical tracking or coordinate-frame rotation transformations, to integrate these measurements in a chosen coordinate system, finally obtaining navigation parameters such as the carrier's relative position, velocity, and attitude. However, because of accumulated error, INS alone cannot maintain high-precision positioning for long periods.

Road-surface matching navigation has the advantages of autonomy, high precision, all-weather operation, and strong anti-interference capability, and with the continuous development of image sensor technology and of digital map library construction it has broad prospects. Its technical shortcomings are, however, also obvious: (1) the production cycle of the reference map is too long; (2) over large areas where road-surface features change little, the matching system cannot localize, making it unusable or greatly degrading navigation accuracy; (3) it is difficult to adapt to maneuvering targets.

Because of these adaptability and efficiency limitations of road-surface matching, most current fusion-based high-precision vehicle positioning methods use GPS and INS as the fusion information sources, but in GPS blind areas they cannot reach the expected performance because of accumulated INS error.

Summary of the Invention

In view of the defects that each of the above technologies has when used alone, the purpose of the present invention is to propose a high-precision vehicle positioning method that fuses multi-source information in GPS blind areas, which avoids the drawbacks of the prior art and thereby provides high-precision vehicle position information in GPS blind areas.

In order to accomplish the above task, the present invention adopts the following technical solution:

A high-precision vehicle positioning method fusing multi-source information in a GPS blind area comprises the following steps:

Step 1, INS positioning:

Step S10: obtain the vehicle's current acceleration a and angular velocity ω, take them as input data, and compute the vehicle's INS positioning coordinates (x1, y1) with the strapdown matrix algorithm.

Step 2, road-surface matching positioning:

Step S20: establish a reference coordinate system whose x-axis and y-axis are the edges of the first rectangular road-surface image acquired after system initialization, and record the vehicle's initial coordinates at that moment.

Step S21: acquire two consecutive rectangular road-surface images fn(x, y) and fn+1(x, y) whose pixels overlap in a common region.

Step S22: take the acquired road-surface images fn(x, y) and fn+1(x, y) as input data and use the SIFT matching algorithm to find, for points P1 and Q1 of image fn(x, y) inside the overlap region, their matching points P2 and Q2 on image fn+1(x, y).

Step S23: find two points P0 and Q0 on image fn+1(x, y) whose positions on fn+1(x, y) are the same as the positions of P1 and Q1 on fn(x, y).

Step S24: establish a coordinate system XOY whose x-axis and y-axis are the edges of image fn+1(x, y), and compute in XOY the linear offset (Δx, Δy) of fn+1(x, y) relative to fn(x, y) with Formula 1:

Δx = P0(x) - P2(x),  Δy = P0(y) - P2(y)   (Formula 1)

where P0(x) and P0(y) are the abscissa and ordinate of point P0 in coordinate system XOY, and P2(x) and P2(y) are the abscissa and ordinate of point P2 in coordinate system XOY.

Step S25: in the XOY coordinate system, compute the rotation angle θ between image fn(x, y) and image fn+1(x, y) with Formula 2:

θ = arctan[(Q2(x) - P2(x)) / (Q2(y) - P2(y))] - arctan[(Q0(x) - P0(x)) / (Q0(y) - P0(y))]   (Formula 2)

where Q0(x) and Q0(y) are the abscissa and ordinate of point Q0 in coordinate system XOY, and Q2(x) and Q2(y) are the abscissa and ordinate of point Q2 in coordinate system XOY.

Step S26: establish a coordinate system xoy whose x-axis and y-axis are the edges of image fn(x, y), and compute in xoy the actual offset (Δx', Δy') of fn+1(x, y) relative to fn(x, y) with Formula 3:

Δx' = Δx·cosθ - Δy·sinθ,  Δy' = Δx·sinθ + Δy·cosθ   (Formula 3)

Step S27: apply steps S21 to S26 in turn to every pair of consecutive rectangular road-surface images acquired while the vehicle is driving to obtain the vehicle's trajectory relative to the reference coordinate system; from the vehicle's initial coordinates and this trajectory, obtain the road-surface matching positioning coordinates (x2, y2).

Step 3, positioning coordinate fusion:

Step S30: fuse the INS positioning coordinates (x1, y1) and the road-surface matching positioning coordinates (x2, y2) with Formula 4 to obtain the fused coordinates (x0, y0):

(x0, y0) = a·(x1, y1) + b·(x2, y2)   (Formula 4)

where a and b are fusion coefficients obtained by regression experiments, with a taken as 0.382 and b as 0.618.

Step S31: estimate the state of the fused coordinates (x0, y0) with the Kalman filtering algorithm to obtain the vehicle's final positioning coordinates (x, y).

Step 4, anchor node correction:

Step S41: using cameras installed along the road every 5 km, capture a vehicle image f(x, y) as the vehicle passes.

Step S42: apply Gaussian filtering and light-uniformity correction to the vehicle image to obtain a processed image.

Step S43: subtract from the image obtained in step S42 the background image captured by the same camera at the same position when no vehicle is present, then apply histogram correction and an erosion operation to the difference image.

Step S44: scan the image obtained in step S43 for connected regions to obtain the connected region of the vehicle, and take the center of the connected region's minimum bounding rectangle as the vehicle's position in the image.

Step S45: convert the vehicle's position in the image into the vehicle's actual real-world coordinates, and replace the positioning coordinates (x, y) obtained in step 3 with these actual coordinates, so as to eliminate the accumulated error of the positioning coordinates (x, y).

A device for the high-precision vehicle positioning method fusing multi-source information in GPS blind areas comprises an accelerometer and a gyroscope installed on the vehicle, a central processing module, and a first camera installed on the vehicle with its lens parallel to the road surface. A fusion positioning module and a Zigbee wireless receiving module are connected to the central processing module; the accelerometer and gyroscope are connected to the fusion positioning module through an INS positioning processing module, and the first camera is connected to the fusion positioning module through a road-surface matching positioning module. A gantry is erected every 5 km along the road on which the vehicle travels; a second camera and an RFID receiver are installed on the gantry and connected to an anchor node correction module, which in turn is connected to a Zigbee wireless sending module that communicates wirelessly with the Zigbee wireless receiving module. The components perform the following functions:

The accelerometer, a YC-A150S-M acceleration sensor, acquires the vehicle's acceleration in real time;

The gyroscope, a CMR3100-D01 angular velocity sensor, acquires the vehicle's angular velocity in real time;

The INS positioning processing module computes the vehicle's INS positioning coordinates from the acceleration and angular velocity information using the strapdown matrix algorithm;

The first camera, an XC-103c Sony CCD video camera, captures images of the road surface in real time while the vehicle is driving;

The road-surface matching positioning processing module processes the road-surface images captured by the first camera to obtain the vehicle's road-surface matching positioning coordinates relative to the initial coordinates;

The fusion positioning module computes the fused coordinates from the INS positioning coordinates and the road-surface matching positioning coordinates, and uses the Kalman filtering algorithm to estimate the state of the fused coordinates, obtaining the vehicle's final positioning coordinates;

The second camera, an XC-103c Sony CCD camera, captures an image of the vehicle when it passes under the gantry on which it is installed;

The RFID receiver works with an IC card on the vehicle: when the vehicle passes the gantry, the RFID receiver triggers the second camera to capture the vehicle image;

The anchor node correction module processes the image captured by the second camera, computes the vehicle's position in the image, and converts it into real-world coordinates;

The central processing module uses the vehicle's real-world coordinates obtained by the anchor node correction module to correct the final positioning coordinates computed by the fusion positioning module, eliminating the accumulated error.

The invention fuses multi-source sensors such as a gyroscope, an accelerometer, CCD cameras, and RFID as information sources, uses the Kalman filtering algorithm for position information fusion, and uses an anchor node correction algorithm for position correction, finally achieving stable, efficient, high-precision vehicle positioning. Field experiments show that the high-precision vehicle positioning method based on multi-source information fusion in GPS blind areas effectively avoids the insufficient accuracy of any single positioning algorithm, improves on the adaptability and efficiency defects of road-surface matching, and overcomes the accumulated error of inertial navigation. The method positions well and is suitable for environments without GPS signals, such as building-dense urban roads, mountainous areas, and tunnels.

Brief Description of the Drawings

Figure 1 is the program flowchart of the method of the invention;

Figure 2 is the overall structural connection diagram of the device of the invention;

Figure 3 is a schematic diagram of the installation position of the second camera;

Figure 4 is the working flowchart of the road-surface matching positioning processing module;

Figure 5 is a schematic diagram of matching points in the SIFT matching algorithm;

Figure 6 compares the vehicle trajectory obtained by the method of the invention with the vehicle's actual trajectory.

The invention is described in further detail below with reference to the accompanying drawings.

Detailed Description

A high-precision vehicle positioning method fusing multi-source information in a GPS blind area, as shown in Figure 1, comprises the following steps:

First, the initial point is set manually, either as GPS coordinates or as local relative coordinates, to initialize the INS and road-surface matching subsystems. If GPS coordinates are used for initialization, the final output is in GPS coordinates; if local relative coordinates are used, the final output is in local relative coordinates.

Step 1, INS positioning:

Step S10: obtain the vehicle's current acceleration a and angular velocity ω and take them as input data. The acceleration is obtained with a YC-A150S-M acceleration sensor and the angular velocity with a CMR3100-D01 angular velocity sensor. The INS positioning processing module uses the acceleration and angular velocity information to compute the vehicle's INS positioning coordinates (x1, y1) with the strapdown matrix algorithm; the coordinates obtained here may be GPS coordinates or relative coordinates.
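The patent does not spell out the strapdown integration equations themselves; purely as a rough planar illustration, the sketch below dead-reckons a 2D position from accelerometer and gyroscope samples. The function name, sampling rate, and sample format are illustrative assumptions, not details from the patent.

```python
import math

def ins_dead_reckon(samples, x0=0.0, y0=0.0, heading0=0.0, dt=0.01):
    """Rough planar dead reckoning from (acceleration, yaw-rate) samples.

    samples: iterable of (a, omega) pairs, a in m/s^2 along the vehicle axis,
             omega in rad/s about the vertical axis (assumed sample format).
    Returns the estimated (x, y) after integrating all samples.
    """
    x, y, heading, speed = x0, y0, heading0, 0.0
    for a, omega in samples:
        heading += omega * dt                  # integrate yaw rate into heading
        speed += a * dt                        # integrate acceleration into speed
        x += speed * math.cos(heading) * dt    # project the motion onto the plane
        y += speed * math.sin(heading) * dt
    return x, y

# Example: 5 s of samples at 100 Hz, gentle acceleration on a slow left turn.
trace = [(0.5, 0.02)] * 500
print(ins_dead_reckon(trace))
```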

Step 2, road-surface matching positioning:

In the road-surface matching positioning algorithm, the road-surface matching positioning processing module takes as its information source the road-surface images captured by the first camera, which is installed on the vehicle with its lens parallel to the road surface, and uses the SIFT matching algorithm to obtain positioning information from the road-surface images. After SIFT matching is complete, the offsets and rotation angle between the matched points are computed, and the vehicle's running trajectory is derived from the offsets and angles of the correctly matched points, yielding the vehicle's real-time position. The first camera is an XC-103c Sony CCD video camera. The workflow is shown in Figure 4, and the specific steps are as follows:

Step S20: establish a reference coordinate system whose x-axis and y-axis are the edges of the first rectangular road-surface image acquired after system initialization, with the position of the image's first pixel as the coordinate origin. Because the processing derives the vehicle's trajectory from the relative positional offset between each pair of consecutive images, the first road-surface image captured by the first camera after system initialization, i.e. when the vehicle starts, is taken as the reference image and used to establish the reference coordinate system, so that the vehicle's trajectory relative to that reference frame can be obtained.

Step S21: acquire two consecutive rectangular road-surface images fn(x, y) and fn+1(x, y) whose pixels overlap. In the invention, images are captured at 5 frames per second to guarantee that, at normal vehicle speeds, two consecutive images captured by the first camera overlap in pixels: because the capture interval is short, the rear half of the previous image appears in the front half of the next image, so the two images share an overlap region, as required by the SIFT matching algorithm.

Step S22: take the acquired road-surface images fn(x, y) and fn+1(x, y) as input data and use the SIFT matching algorithm to find, for points P1 and Q1 of image fn(x, y) inside the overlap region, their matching points P2 and Q2 on image fn+1(x, y), as shown in Figure 5. Because fn(x, y) and fn+1(x, y) share an overlap region (the hatched area in the figure), the matching point P2 and the point P1 can be regarded as the same point in space, but P2 lies on image fn+1(x, y) while P1 lies on image fn(x, y).
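As one possible illustration of this matching step, the sketch below pairs keypoints between two consecutive road images using OpenCV's SIFT detector and a brute-force matcher with Lowe's ratio test. The file names and the ratio threshold are placeholders; the patent does not prescribe a particular SIFT implementation.

```python
import cv2

def match_road_images(path_prev, path_next, ratio=0.75):
    """Return matched (x, y) keypoint coordinates between two consecutive
    road-surface images, using SIFT features and a ratio-test matcher."""
    img_prev = cv2.imread(path_prev, cv2.IMREAD_GRAYSCALE)
    img_next = cv2.imread(path_next, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_prev, None)
    kp2, des2 = sift.detectAndCompute(img_next, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des1, des2, k=2)

    pts_prev, pts_next = [], []
    for m, n in candidates:
        if m.distance < ratio * n.distance:        # Lowe's ratio test
            pts_prev.append(kp1[m.queryIdx].pt)    # e.g. P1, Q1 on f_n
            pts_next.append(kp2[m.trainIdx].pt)    # e.g. P2, Q2 on f_{n+1}
    return pts_prev, pts_next

# pts_prev, pts_next = match_road_images("road_n.png", "road_n1.png")
```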

Step S23: find two points P0 and Q0 on image fn+1(x, y) whose positions on fn+1(x, y) are the same as the positions of P1 and Q1 on fn(x, y).

Step S24: the images captured in the invention are all rectangular. Establish a coordinate system XOY whose x-axis and y-axis are the edges of image fn+1(x, y), and compute in XOY the linear offset (Δx, Δy) of fn+1(x, y) relative to fn(x, y) with Formula 1:

Δx = P0(x) - P2(x),  Δy = P0(y) - P2(y)   (Formula 1)

where P0(x) and P0(y) are the abscissa and ordinate of point P0 in coordinate system XOY, and P2(x) and P2(y) are the abscissa and ordinate of point P2 in coordinate system XOY.

Step S25: in the XOY coordinate system, compute the rotation angle θ between image fn(x, y) and image fn+1(x, y) with Formula 2:

θ = arctan[(Q2(x) - P2(x)) / (Q2(y) - P2(y))] - arctan[(Q0(x) - P0(x)) / (Q0(y) - P0(y))]   (Formula 2)

where Q0(x) and Q0(y) are the abscissa and ordinate of point Q0 in coordinate system XOY, and Q2(x) and Q2(y) are the abscissa and ordinate of point Q2 in coordinate system XOY.

Step S26: establish a coordinate system xoy whose x-axis and y-axis are the edges of image fn(x, y), and compute in xoy the actual offset (Δx', Δy') of fn+1(x, y) relative to fn(x, y) with Formula 3:

Δx' = Δx·cosθ - Δy·sinθ,  Δy' = Δx·sinθ + Δy·cosθ   (Formula 3)

Step S27: apply steps S21 to S26 in turn to every pair of consecutive rectangular road-surface images acquired after initialization while the vehicle is driving to obtain the vehicle's trajectory relative to the reference coordinate system. The initial coordinates of the reference coordinate system at initialization are known, so they can be combined with the trajectory: if the system was initialized with GPS coordinates, the algorithm output is computed from the conversion between GPS coordinates and distance; if it was initialized with relative coordinates, the trajectory position is output directly. This yields the road-surface matching positioning coordinates (x2, y2).
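A compact numerical reading of Formulas 1-3 and of the accumulation in step S27 might look like the sketch below, assuming two matched pairs (P, Q) are available per image pair; all names and the example pixel coordinates are illustrative.

```python
import math

def frame_offset(P0, P2, Q0, Q2):
    """Formulas 1-3: linear offset, rotation angle, and rotated (actual)
    offset of image f_{n+1} relative to f_n from two matched point pairs."""
    dx = P0[0] - P2[0]                                      # Formula 1
    dy = P0[1] - P2[1]
    theta = (math.atan2(Q2[0] - P2[0], Q2[1] - P2[1])
             - math.atan2(Q0[0] - P0[0], Q0[1] - P0[1]))    # Formula 2
    dx_act = dx * math.cos(theta) - dy * math.sin(theta)    # Formula 3
    dy_act = dx * math.sin(theta) + dy * math.cos(theta)
    return dx_act, dy_act, theta

def accumulate_trajectory(offsets, x_init=0.0, y_init=0.0):
    """Step S27: sum the per-frame actual offsets onto the initial
    coordinates to obtain the road-surface matching position."""
    x, y = x_init, y_init
    for dx_act, dy_act, _ in offsets:
        x += dx_act
        y += dy_act
    return x, y

# Example with matched points (pixel coordinates) from two image pairs:
steps = [frame_offset((10, 40), (6, 37), (30, 80), (27, 78)),
         frame_offset((12, 42), (8, 40), (33, 82), (29, 81))]
print(accumulate_trajectory(steps))
```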

Step 3, positioning coordinate fusion:

Step S30: the fusion positioning module fuses the INS positioning coordinates (x1, y1) and the road-surface matching positioning coordinates (x2, y2) with Formula 4 to obtain the fused coordinates (x0, y0):

(x0, y0) = a·(x1, y1) + b·(x2, y2)   (Formula 4)

where a and b are fusion coefficients obtained by regression experiments; the fit is best when a is 0.382 and b is 0.618. The fitting experiment is as follows:

At an automobile proving ground, the INS positioning algorithm and the road-surface image positioning algorithm described above were each used to position the vehicle 30 times, with the vehicle driving 1 km each time, yielding 30 sets of INS positioning coordinates (x'1, y'1) and 30 sets of road-surface matching positioning coordinates (x'2, y'2). The vehicle's actual coordinates (x'0, y'0) are known, and these actual coordinates are the fused coordinates (x0, y0) that the fusion is intended to reproduce. The actual coordinates and the two positioning algorithms satisfy relation 4-1:

(x'0, y'0) = a·(x'1, y'1) + b·(x'2, y'2)   (Formula 4-1)

In this formula only the coefficients a and b are unknown. The 30 groups of data were substituted into the formula and a regression analysis was performed to find the coefficients a and b that minimize the error between all groups and the actual values (x'0, y'0); the regression finally gives the fusion coefficients a = 0.382 and b = 0.618.
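The patent does not give the regression procedure in detail; one plausible least-squares reading, sketched below with synthetic data standing in for the 30 proving-ground measurements, stacks the coordinate pairs and solves for the (a, b) that minimize the squared error against the known true coordinates. All data, names, and noise levels are illustrative assumptions.

```python
import numpy as np

def fit_fusion_coefficients(ins_xy, road_xy, true_xy):
    """Least-squares fit of (x0, y0) = a*(x1, y1) + b*(x2, y2) over all
    measurement groups; returns the fusion coefficients (a, b)."""
    ins = np.asarray(ins_xy, dtype=float).reshape(-1)    # stack x and y values
    road = np.asarray(road_xy, dtype=float).reshape(-1)
    true = np.asarray(true_xy, dtype=float).reshape(-1)
    A = np.column_stack([ins, road])                     # design matrix
    (a, b), *_ = np.linalg.lstsq(A, true, rcond=None)
    return a, b

# Synthetic stand-in for the 30 measurement groups (coordinates in meters):
rng = np.random.default_rng(0)
true_xy = rng.uniform(0, 1000, size=(30, 2))
ins_xy = true_xy + rng.normal(0, 8, size=(30, 2))    # INS drifts more
road_xy = true_xy + rng.normal(0, 3, size=(30, 2))   # matching is tighter
a, b = fit_fusion_coefficients(ins_xy, road_xy, true_xy)
fused = a * ins_xy + b * road_xy                     # apply Formula 4
print(round(a, 3), round(b, 3))
```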

Step S31: use the Kalman filtering algorithm to estimate the state of the fused coordinates (x0, y0). The Kalman filter estimates the coordinates at the next moment from the fused coordinates obtained at previous moments; the coordinates produced by the Kalman filter are taken as the vehicle's final positioning coordinates (x, y).
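The patent does not specify the filter's state model; a minimal sketch, assuming a constant-velocity state (x, y, vx, vy) with the fused coordinates as the measurement, could look like the following. The time step and noise covariances are illustrative values, not from the patent.

```python
import numpy as np

def kalman_track(fused_positions, dt=0.2, q=0.1, r=2.0):
    """Run a constant-velocity Kalman filter over a sequence of fused
    (x0, y0) measurements and return the filtered positions."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)     # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)     # only x, y are observed
    Q = q * np.eye(4)                             # process noise covariance
    R = r * np.eye(2)                             # measurement noise covariance
    x = np.zeros(4)
    P = np.eye(4) * 100.0                         # large initial uncertainty
    out = []
    for z in fused_positions:
        x = F @ x                                 # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                       # update with the fused fix
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return out

# print(kalman_track([(0, 0), (1.1, 0.4), (2.0, 0.9), (3.2, 1.5)]))
```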

Step 4, anchor node correction:

Step S41: a second camera is installed on the road every 5 km and works together with an RFID receiver; the vehicle carries an IC card. When the vehicle passes, the RFID receiver reads the IC card, learns that the vehicle has arrived, and triggers the second camera to capture the vehicle image f(x, y). The second camera is an XC-103c Sony CCD video camera; gantries are erected every 5 km along the road and the second camera is mounted on the gantry to capture images.

Step S42: the anchor node correction module applies Gaussian filtering and light-uniformity correction to the vehicle image to suppress external interference as much as possible and obtain the processed image.

Step S43: subtract from the image obtained in step S42 the background image captured by the same camera at the same position when no vehicle is present, then apply histogram correction and an erosion operation to the difference image. Histogram correction improves the contrast of the image and expands the dynamic range of the pixels; the erosion operation removes small bright-spot noise.

Step S44: scan the image obtained in step S43 for connected regions to obtain the connected region of the vehicle, and at the same time extract the geometric features of the connected region, including its boundary points, minimum bounding rectangle, area, perimeter, and centroid. The area parameter S(·) of the target region can serve as a measure: for a connected region R(x, y), S(·) is defined as the number of pixels in that region, i.e.

S(R(x, y)) = Σ(x, y)∈R(x, y) f(x, y)

where f(x, y) is the image currently being processed. The center of the connected region's minimum bounding rectangle is taken as the vehicle's position in the image.

Step S45: convert the vehicle's position in the image into the vehicle's actual real-world coordinates. Because the position of the second camera is fixed and its coordinates are known, the coordinates of any point in the photographs it takes are also known, so the vehicle's actual coordinates can be obtained from its position in the image. The central processing module replaces the positioning coordinates (x, y) obtained in step 3 with these actual coordinates to eliminate their accumulated error. That is, during normal driving the final positioning coordinates (x, y) obtained in step S31 are used as the vehicle's current coordinates, but to eliminate accumulated error the invention provides an anchor node correction module that performs a coordinate correction every 5 km; replacing the positioning coordinates (x, y) with the corrected actual vehicle coordinates removes the accumulated positioning error and maintains high positioning accuracy.
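As a rough OpenCV sketch of steps S42-S45, the code below runs background subtraction, histogram equalization, erosion, and connected-component analysis, and then maps the bounding-box center to road coordinates. The kernel sizes, threshold, and the pixel-to-world mapping are illustrative assumptions; the images are assumed to be 8-bit grayscale.

```python
import cv2
import numpy as np

def locate_vehicle(frame, background, px_to_world):
    """Background subtraction + erosion + connected components; returns the
    vehicle's world coordinates from the largest blob's bounding-box center.
    frame and background are 8-bit grayscale images from the same camera."""
    blur = cv2.GaussianBlur(frame, (5, 5), 0)            # step S42: smoothing
    diff = cv2.absdiff(blur, background)                 # step S43: differencing
    diff = cv2.equalizeHist(diff)                        # histogram correction
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    mask = cv2.erode(mask, np.ones((3, 3), np.uint8))    # remove bright-spot noise
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)  # step S44
    if n < 2:
        return None                                      # no vehicle blob found
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x, y, w, h = stats[biggest, :4]
    center = (x + w / 2.0, y + h / 2.0)                  # bounding-rectangle center
    return px_to_world(center)                           # step S45: to road coords

# Assumed planar mapping from image pixels to road coordinates (illustrative):
def px_to_world(pt, scale=0.05, origin=(1000.0, 250.0)):
    return origin[0] + pt[0] * scale, origin[1] + pt[1] * scale
```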

The invention also provides a device for implementing the above method; its structure is shown in Figure 2:

A device for the high-precision vehicle positioning method fusing multi-source information in GPS blind areas comprises an accelerometer and a gyroscope installed on the vehicle, a central processing module, and a first camera installed on the vehicle with its lens parallel to the road surface. A fusion positioning module and a Zigbee wireless receiving module are connected to the central processing module; the accelerometer and gyroscope are connected to the fusion positioning module through an INS positioning processing module, and the first camera is connected to the fusion positioning module through a road-surface matching positioning module. A gantry is erected every 5 km along the road on which the vehicle travels; a second camera and an RFID receiver are installed on the gantry and connected to an anchor node correction module, which in turn is connected to a Zigbee wireless sending module that communicates wirelessly with the Zigbee wireless receiving module. The components perform the following functions:

The accelerometer, a YC-A150S-M acceleration sensor, acquires the vehicle's acceleration in real time;

The gyroscope, a CMR3100-D01 angular velocity sensor, acquires the vehicle's angular velocity in real time;

The INS positioning processing module computes the vehicle's INS positioning coordinates from the acceleration and angular velocity information using the strapdown matrix algorithm;

The first camera, an XC-103c Sony CCD video camera, captures images of the road surface in real time while the vehicle is driving;

The road-surface matching positioning processing module processes the road-surface images captured by the first camera to obtain the vehicle's road-surface matching positioning coordinates relative to the initial coordinates;

The fusion positioning module computes the fused coordinates from the INS positioning coordinates and the road-surface matching positioning coordinates, and uses the Kalman filtering algorithm to estimate the state of the fused coordinates, obtaining the vehicle's final positioning coordinates;

The second camera, an XC-103c Sony CCD camera, captures an image of the vehicle when it passes under the gantry on which it is installed;

The RFID receiver works with an IC card on the vehicle: when the vehicle passes the gantry, the RFID receiver triggers the second camera to capture the vehicle image;

The anchor node correction module processes the image captured by the second camera, computes the vehicle's position in the image, and converts it into real-world coordinates;

The central processing module uses the vehicle's real-world coordinates obtained by the anchor node correction module to correct the final positioning coordinates computed by the fusion positioning module, eliminating the accumulated error.

Example:

The inventors carried out experiments with the method of the invention and with GPS positioning; the procedure was as follows:

The inventors tested at an automobile proving ground using the method described above: gantries carrying second cameras were erected every 5 km along the track, a gyroscope and an accelerometer were installed inside the vehicle to acquire angular rate and acceleration, and a first camera was installed on the outer section of the vehicle to photograph the road surface. The final test result is shown in Figure 6, in which the line marked with triangles is the trajectory obtained by the method and the line marked with circles is the vehicle's actual trajectory. As Figure 6 shows, the triangle and circle trajectories coincide almost completely, demonstrating that the positioning performance is very good and that the algorithm works well in practice.

Comparative example:

The inventors also tested a Zhengtu ZT410 GPS and a Garmin 3790T GPS at an automobile proving ground. The overall GPS signal at the test site was good, with more than 3 satellites connectable on average. Each test covered 1 km and was repeated 3 times, and the error between the GPS-derived vehicle trajectory and the vehicle's actual driving trajectory was analyzed, as shown in Table 1 below. The table shows that the method of the invention reaches GPS positioning accuracy, again demonstrating that the positioning performance is very good and that the algorithm works well in practice.

Table 1: Error test results

Claims (2)

Translated from Chinese
1. A high-precision vehicle positioning method fusing multi-source information in a GPS blind area, characterized by comprising the following steps:

Step 1, INS positioning:

Step S10: obtain the vehicle's current acceleration a and angular velocity ω, take them as input data, and compute the vehicle's INS positioning coordinates (x1, y1) with the strapdown matrix algorithm;

Step 2, road-surface matching positioning:

Step S20: establish a reference coordinate system whose x-axis and y-axis are the edges of the first rectangular road-surface image acquired after system initialization, and record the vehicle's initial coordinates at that moment;

Step S21: acquire two consecutive rectangular road-surface images fn(x, y) and fn+1(x, y) whose pixels overlap in a common region;

Step S22: take the acquired road-surface images fn(x, y) and fn+1(x, y) as input data and use the SIFT matching algorithm to find, for points P1 and Q1 of image fn(x, y) inside the overlap region, their matching points P2 and Q2 on image fn+1(x, y);

Step S23: find two points P0 and Q0 on image fn+1(x, y) whose positions on fn+1(x, y) are the same as the positions of P1 and Q1 on fn(x, y);

Step S24: establish a coordinate system XOY whose x-axis and y-axis are the edges of image fn+1(x, y), and compute in XOY the linear offset (Δx, Δy) of fn+1(x, y) relative to fn(x, y) with Formula 1:

Δx = P0(x) - P2(x),  Δy = P0(y) - P2(y)   (Formula 1)

where P0(x) and P0(y) are the abscissa and ordinate of point P0 in coordinate system XOY, and P2(x) and P2(y) are the abscissa and ordinate of point P2 in coordinate system XOY;

Step S25: in the XOY coordinate system, compute the rotation angle θ between image fn(x, y) and image fn+1(x, y) with Formula 2:

θ = arctan[(Q2(x) - P2(x)) / (Q2(y) - P2(y))] - arctan[(Q0(x) - P0(x)) / (Q0(y) - P0(y))]   (Formula 2)

where Q0(x) and Q0(y) are the abscissa and ordinate of point Q0 in coordinate system XOY, and Q2(x) and Q2(y) are the abscissa and ordinate of point Q2 in coordinate system XOY;

Step S26: establish a coordinate system xoy whose x-axis and y-axis are the edges of image fn(x, y), and compute in xoy the actual offset (Δx', Δy') of fn+1(x, y) relative to fn(x, y) with Formula 3:

Δx' = Δx·cosθ - Δy·sinθ,  Δy' = Δx·sinθ + Δy·cosθ   (Formula 3)

Step S27: apply steps S21 to S26 in turn to the consecutive rectangular road-surface images continuously acquired while the vehicle is driving to obtain the vehicle's trajectory relative to the reference coordinate system; from the vehicle's initial coordinates and this trajectory, obtain the road-surface matching positioning coordinates (x2, y2);

Step 3, positioning coordinate fusion:

Step S30: fuse the INS positioning coordinates (x1, y1) and the road-surface matching positioning coordinates (x2, y2) with Formula 4 to obtain the fused coordinates (x0, y0):

(x0, y0) = a·(x1, y1) + b·(x2, y2)   (Formula 4)

where a and b are fusion coefficients obtained by regression experiments, with a taken as 0.382 and b as 0.618;

Step S31: estimate the state of the fused coordinates (x0, y0) with the Kalman filtering algorithm to obtain the vehicle's final positioning coordinates (x, y);

Step 4, anchor node correction:

Step S41: using cameras installed along the road every 5 km, capture a vehicle image f(x, y) as the vehicle passes;

Step S42: apply Gaussian filtering and light-uniformity correction to the vehicle image to obtain a processed image;

Step S43: subtract from the image obtained in step S42 the background image captured by the same camera at the same position when no vehicle is present, then apply histogram correction and an erosion operation to the difference image;

Step S44: scan the image obtained in step S43 for connected regions to obtain the connected region of the vehicle, and take the center of the connected region's minimum bounding rectangle as the vehicle's position in the image;

Step S45: convert the vehicle's position in the image into the vehicle's actual real-world coordinates, and replace the positioning coordinates (x, y) obtained in step 3 with these actual coordinates, so as to eliminate the accumulated error of the positioning coordinates (x, y).

2. A device for implementing the high-precision vehicle positioning method fusing multi-source information in a GPS blind area according to claim 1, comprising an accelerometer and a gyroscope installed on the vehicle, characterized in that the device further comprises a central processing module and a first camera installed on the vehicle with its lens parallel to the road surface; a fusion positioning module and a Zigbee wireless receiving module are connected to the central processing module; the accelerometer and gyroscope are connected to the fusion positioning module through an INS positioning processing module, and the first camera is connected to the fusion positioning module through a road-surface matching positioning module; a gantry is erected every 5 km along the road on which the vehicle travels, a second camera and an RFID receiver are installed on the gantry and connected to an anchor node correction module, and the anchor node correction module is connected to a Zigbee wireless sending module that communicates wirelessly with the Zigbee wireless receiving module; the components perform the following functions:

the accelerometer, a YC-A150S-M acceleration sensor, acquires the vehicle's acceleration in real time;

the gyroscope, a CMR3100-D01 angular velocity sensor, acquires the vehicle's angular velocity in real time;

the INS positioning processing module computes the vehicle's INS positioning coordinates from the acceleration and angular velocity information using the strapdown matrix algorithm;

the first camera, an XC-103c Sony CCD video camera, captures images of the road surface in real time while the vehicle is driving;

the road-surface matching positioning processing module processes the road-surface images captured by the first camera to obtain the vehicle's road-surface matching positioning coordinates relative to the initial coordinates;

the fusion positioning module computes the fused coordinates from the INS positioning coordinates and the road-surface matching positioning coordinates, and uses the Kalman filtering algorithm to estimate the state of the fused coordinates, obtaining the vehicle's final positioning coordinates;

the second camera, an XC-103c Sony CCD camera, captures an image of the vehicle when it passes under the gantry on which it is installed;

the RFID receiver works with an IC card on the vehicle: when the vehicle passes the gantry, the RFID receiver triggers the second camera to capture the vehicle image;

the anchor node correction module processes the image captured by the second camera, computes the vehicle's position in the image, and converts it into real-world coordinates;

the central processing module uses the vehicle's real-world coordinates obtained by the anchor node correction module to correct the final positioning coordinates computed by the fusion positioning module, eliminating the accumulated error.
CN201310455813.5A (filed 2013-09-28, priority 2013-09-28): High-precision vehicle positioning method and device fusing multi-source information in GPS blind areas. Active. Granted as CN103499350B (en).

Priority Applications (1)

Application Number: CN201310455813.5A; Priority Date: 2013-09-28; Filing Date: 2013-09-28; Title: High-precision vehicle positioning method and device fusing multi-source information in GPS blind areas

Applications Claiming Priority (1)

Application Number: CN201310455813.5A; Priority Date: 2013-09-28; Filing Date: 2013-09-28; Title: High-precision vehicle positioning method and device fusing multi-source information in GPS blind areas

Publications (2)

Publication Number: CN103499350A (en); Publication Date: 2014-01-08
Publication Number: CN103499350B; Publication Date: 2016-01-27

Family

ID: 49864581

Family Applications (1)

Application Number: CN201310455813.5A; Title: High-precision vehicle positioning method and device fusing multi-source information in GPS blind areas; Priority Date: 2013-09-28; Filing Date: 2013-09-28; Status: Active

Country Status (1)

CN: CN103499350B (en)





Also Published As

Publication Number: CN103499350A (en); Publication Date: 2014-01-08

Similar Documents

PublicationPublication DateTitle
CN103499350B (en)Vehicle high-precision localization method and the device of multi-source information is merged under GPS blind area
CN105180933B (en)Mobile robot reckoning update the system and method based on the detection of straight trip crossing
CN114199240B (en) Fusion positioning system and method of two-dimensional code, lidar and IMU without GPS signal
Alonso et al.Accurate global localization using visual odometry and digital maps on urban environments
CN107728175A (en)The automatic driving vehicle navigation and positioning accuracy antidote merged based on GNSS and VO
CN103411621B (en)A kind of vision/INS Combinated navigation method of the optical flow field towards indoor mobile robot
Ng et al.Continuous-time radar-inertial odometry for automotive radars
CN201269758Y (en)Vehicle mounted full automatic detection recording system for traffic signs
CN108759823B (en) Localization and deviation correction method of low-speed autonomous vehicles on designated roads based on image matching
CN105865461B (en)A kind of car position system and method based on Multi-sensor Fusion algorithm
CN108168544B (en)Beidou enhanced inertial navigation efficient fusion lane-level positioning working method and system device
CN107796391A (en)A kind of strapdown inertial navigation system/visual odometry Combinated navigation method
CN107389064A (en)A kind of unmanned vehicle based on inertial navigation becomes channel control method
CN103983263A (en)Inertia/visual integrated navigation method adopting iterated extended Kalman filter and neural network
CN111426320A (en) A Vehicle Autonomous Navigation Method Based on Image Matching/Inertial Navigation/Odometer
CN105841708A (en)Vehicle navigation and positioning track matching method based on path tracing
CN103033836B (en)navigation pointing method of vehicle navigation pointing device
CN104316058B (en)Interacting multiple model adopted WSN-INS combined navigation method for mobile robot
Dai et al.Lidar–inertial integration for rail vehicle localization and mapping in tunnels
Kim et al.Hercules: Heterogeneous radar dataset in complex urban environment for multi-session radar slam
CN116412816A (en)Navigation positioning method, positioning device and positioning system based on data fusion
CN105137468A (en)Photoelectric type automobile continuous navigation data acquiring device and method in GPS blind area environment
CN102359788B (en)Series image target recursive identification method based on platform inertia attitude parameter
Huang et al.Roadside infrastructure assisted LiDAR/inertial-based mapping for intelligent vehicles in urban areas
Zhao et al.L-VIWO: Visual-inertial-wheel odometry based on lane lines

Legal Events

C06: Publication
PB01: Publication
C10: Entry into substantive examination
SE01: Entry into force of request for substantive examination
CB03: Change of inventor or designer information

Inventor after:Zhao Xiangmo

Inventor after:Xu Zhigang

Inventor after:Cheng Xin

Inventor after:Zhang Licheng

Inventor after:Zhou Jingmei

Inventor after:Wang Runmin

Inventor after:Yang Nan

Inventor after:Bai Guozhu

Inventor after:Ren Liang

Inventor before:Zhao Xiangmo

Inventor before:Xu Zhigang

Inventor before:Zhang Licheng

Inventor before:Cheng Xin

Inventor before:Bai Guozhu

Inventor before:Zhou Jingmei

Inventor before:Ren Liang

COR: Change of bibliographic data
C14: Grant of patent or utility model
GR01: Patent grant
