CN111856441B - Train positioning method based on vision and millimeter wave radar fusion - Google Patents

Train positioning method based on vision and millimeter wave radar fusion

Info

Publication number
CN111856441B
Authority
CN
China
Prior art keywords
image
train
key position
features
speed
Prior art date
Legal status
Active
Application number
CN202010517233.4A
Other languages
Chinese (zh)
Other versions
CN111856441A
Inventor
余贵珍
王章宇
周彬
徐少清
付子昂
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN202010517233.4A
Publication of CN111856441A
Application granted
Publication of CN111856441B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a train positioning method based on the fusion of vision and millimeter-wave radar. Image features are extracted visually, fusing global features with local features: the global features are extracted by deep learning, and the local features are extracted by line detection and line feature descriptors, so the image features are extracted more fully. When searching for a key position, global feature similarity and local feature similarity are measured separately, making key position detection more accurate. The train speed is measured by the millimeter-wave radar without depending on the vehicle signal system: it is obtained by inverse calculation from the obstacles detected by the radar. Because the invention positions the train with on-board sensors alone, no roadside equipment is required, which effectively reduces positioning cost, and positioning on the fused data effectively overcomes the difficulty that a train cannot be positioned in environments such as tunnels.

Description

Translated from Chinese
A train positioning method based on the fusion of vision and millimeter-wave radar

Technical Field

The present invention relates to the technical field of autonomous environment perception and positioning for driverless rail trains, and in particular to a train positioning method based on the fusion of vision and millimeter-wave radar.

Background Art

With the rapid growth of Chinese cities and the steady acceleration of urbanization, the urban population and per-capita vehicle ownership have risen sharply, and traffic congestion has become increasingly serious. Urban rail transit, with its large passenger capacity, high transport efficiency, and low energy consumption, has become the inevitable choice for relieving urban congestion.

A driverless train system does not depend on a driver during operation and can effectively improve operating efficiency and safety. Such a system requires accurate train positions in real time; accurate position information provides an important guarantee for vehicle scheduling and speed control.

At present, trains are positioned mainly by installing wayside equipment such as transponders along the track, a method with long deployment cycles and high deployment costs. In recent years, some researchers have positioned trains in real time with on-board sensors such as a vehicle-mounted BeiDou or GPS receiver. This works well when the satellite signal is good, but when the signal is blocked or missing it cannot meet practical application needs.

Summary of the Invention

In view of this, the present invention provides a train positioning method based on the fusion of vision and millimeter-wave radar, which fuses visual image data with millimeter-wave radar data to achieve autonomous train positioning on lines that include tunnels and similar environments.

The train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention comprises the following steps:

S1: Install a camera and a millimeter-wave radar sensor at the head of the train, and collect synchronized data over the entire line with them; the camera frame and the radar frame whose timestamps are closest to each other are taken as one pair of synchronized data;
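Conceptually, the timestamp pairing can be sketched as follows (a minimal Python example, assuming sorted timestamp lists; the function name and inputs are illustrative, not an interface from the patent):

```python
# A minimal sketch of timestamp-based synchronization: for each camera frame,
# select the radar frame whose timestamp is nearest, and keep the pair as one
# synchronized sample. Assumes both timestamp lists are sorted, non-empty,
# and expressed in the same clock (seconds).
import bisect

def pair_synchronized(cam_stamps, radar_stamps):
    pairs = []
    for t_cam in cam_stamps:
        idx = bisect.bisect_left(radar_stamps, t_cam)
        # Compare the two neighboring radar frames and keep the closer one.
        candidates = [i for i in (idx - 1, idx) if 0 <= i < len(radar_stamps)]
        best = min(candidates, key=lambda i: abs(radar_stamps[i] - t_cam))
        pairs.append((t_cam, radar_stamps[best]))
    return pairs
```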

S2: Select the key positions on the line, obtain key position images from the collected synchronized data, extract features from the key position images, and build a key position visual data feature library;

S3: While the train is running, capture images in real time with the camera and extract features from them;

S4: Measure the similarity between the features of the real-time image and the features of the images in the key position visual data feature library to judge whether the train has reached the key position corresponding to a library image. If so, calibrate the train position; if not, measure the train speed in real time with the millimeter-wave radar sensor and integrate it to predict the train's position between two consecutive key positions.

In a possible implementation of the train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention, step S2, selecting the key positions on the line, obtaining key position images from the collected synchronized data, extracting features from them, and building the key position visual data feature library, specifically includes:

S21: Select each station position on the line as a key position and obtain the key position images from the collected synchronized data;

S22: Extract global features from each acquired key position image frame with a convolutional neural network: scale every frame to the same size, reduce the feature map size with one convolution operation, extract features with an inverted residual network, and obtain a 1280-dimensional vector through an average pooling layer; then apply L2 normalization to the 1280-dimensional vector:

$$W_q = \frac{w_q}{\sqrt{\sum_{k=1}^{d} w_k^2}}, \qquad q = 1, 2, \ldots, d \tag{1}$$

where d = 1280 is the dimension of the vector, $(w_1, \ldots, w_d)$ is the pooled vector before normalization, $(W_1, W_2, \ldots, W_d)$ is the normalized vector representing the global feature of the key position image, and q = 1, 2, ..., d;
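Formula (1) simply divides the pooled vector by its L2 norm; a minimal sketch, assuming a NumPy array as input:

```python
# L2 normalization of the 1280-dimensional pooled feature, i.e. formula (1).
import numpy as np

def l2_normalize(w: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """w: raw pooled vector (w_1, ..., w_d); returns (W_1, ..., W_d)."""
    return w / max(np.sqrt(np.sum(w * w)), eps)  # eps guards an all-zero vector
```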

S23: Extract local features from each acquired key position image frame, namely line features and the corresponding line descriptors. Smooth each frame, obtain its gradient map with the Sobel operator, and take the maximum pixel value within each 3-pixel neighborhood along the horizontal and vertical directions; the position of each maximum is an anchor point. Connect all anchor points to obtain the edge map of the frame, and fit lines to the edge map by least squares to detect the lines. Describe each detected line with a local band descriptor, and represent the local features of the frame by all of its line descriptors;
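The anchor-extraction stage of this pipeline might look as follows (a simplified sketch using OpenCV for smoothing and gradients; the flat-area threshold is an added assumption, and the edge-linking, least-squares line fitting, and band-descriptor stages are only summarized in the comments):

```python
# Anchor extraction: smooth, compute the Sobel gradient map, then mark pixels
# that are maxima of their 3-pixel horizontal or vertical neighborhood.
# Downstream (not shown): connect anchors into an edge map, fit lines to the
# edge chains by least squares, and describe each line with a band descriptor.
import cv2
import numpy as np

def extract_anchors(gray: np.ndarray) -> np.ndarray:
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)         # Gaussian smoothing
    gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
    mag = cv2.magnitude(gx, gy)                          # gradient map

    pad = np.pad(mag, 1, mode="edge")
    center = pad[1:-1, 1:-1]
    horiz_max = (center >= pad[1:-1, :-2]) & (center >= pad[1:-1, 2:])
    vert_max = (center >= pad[:-2, 1:-1]) & (center >= pad[2:, 1:-1])
    # The mean-magnitude cut that suppresses flat regions is an assumption.
    return (horiz_max | vert_max) & (center > mag.mean())
```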

S24: Save the extracted global and local features of every key position image frame; associate the vector extracted at each station with its station number to build the global feature library of all stations; and build the local feature library from all extracted line descriptors with a bag-of-visual-words model.
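The bag-of-visual-words construction could be sketched as follows (assuming the line descriptors of each image are stacked as float vectors and using k-means for the vocabulary; the vocabulary size is an assumed parameter):

```python
# Build a visual vocabulary over all line descriptors, then represent each
# image as a normalized histogram of visual words.
import numpy as np
from sklearn.cluster import KMeans

def build_bow_library(descriptors_per_image, vocab_size=256):
    """descriptors_per_image: list of (n_lines_i, desc_dim) arrays."""
    vocab = KMeans(n_clusters=vocab_size, n_init=10)
    vocab.fit(np.vstack(descriptors_per_image))
    histograms = []
    for desc in descriptors_per_image:
        words = vocab.predict(desc)                     # quantize each line
        hist = np.bincount(words, minlength=vocab_size).astype(np.float32)
        histograms.append(hist / max(hist.sum(), 1.0))  # normalized histogram
    return vocab, np.stack(histograms)
```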

In a possible implementation of the train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention, step S3, capturing images in real time with the camera while the train is running and extracting features from them, specifically includes:

S31: Extract global features from the real-time images with a convolutional neural network: scale every real-time image to the same size, reduce the feature map size with one convolution operation, extract features with an inverted residual network, and obtain a 1280-dimensional vector through an average pooling layer; then apply L2 normalization to the 1280-dimensional vector:

$$W'_q = \frac{w'_q}{\sqrt{\sum_{k=1}^{d} (w'_k)^2}}, \qquad q = 1, 2, \ldots, d \tag{2}$$

where d = 1280 is the dimension of the vector, $(W'_1, W'_2, \ldots, W'_d)$ is the normalized vector representing the global feature of the real-time image, and q = 1, 2, ..., d;

S32: Extract local features from the real-time image, namely line features and the corresponding line descriptors. Smooth the image, obtain its gradient map with the Sobel operator, and take the maximum pixel value within each 3-pixel neighborhood along the horizontal and vertical directions; the position of each maximum is an anchor point. Connect all anchor points to obtain the edge map, and fit lines to the edge map by least squares to detect the lines. Describe each detected line with a local band descriptor, and represent the local features of the real-time image by all of its line descriptors.

In a possible implementation of the train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention, step S4, measuring the similarity between the features of the real-time image and the features of the images in the key position visual data feature library, judging whether the train has reached the corresponding key position, calibrating the train position if so, and otherwise measuring the train speed in real time with the millimeter-wave radar sensor and integrating it to predict the train's position between two consecutive key positions, specifically includes:

S41: Measure, in turn, the similarity between the global feature of the real-time image and the global features of all images in the global feature library, computed as follows:

$$D(i,j) = \left\| W_g^i - W_g^j \right\|_2 \tag{3}$$

where i denotes the i-th image acquired in real time, $W_g^i$ denotes the global feature of image i, $W_g^j$ denotes the global feature of the j-th image in the global feature library, $D(i,j)$ denotes the similarity between $W_g^i$ and $W_g^j$, and $\|\cdot\|_2$ denotes the L2 norm;
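A minimal sketch of the library query in step S41, assuming the global feature library is stored as one L2-normalized row per station image:

```python
# Evaluate formula (3) against every library entry and return the best match.
import numpy as np

def match_global(feat_i: np.ndarray, library: np.ndarray):
    """feat_i: (1280,) query vector; library: (num_images, 1280) matrix."""
    dists = np.linalg.norm(library - feat_i, axis=1)  # D(i, j) for every j
    j = int(np.argmin(dists))
    return j, float(dists[j])  # candidate station and its distance
```

A detection fires when the returned distance is below the first set threshold, after which the local feature check of steps S43 and S44 confirms it.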

S42: Judge whether the similarity $D(i,j)$ between the global feature $W_g^i$ of image i and the global feature $W_g^j$ of the j-th image in the global feature library is less than a first set threshold. If not, measure the train speed in real time with the millimeter-wave radar sensor and integrate it to predict the train's position between two consecutive key positions; if so, execute steps S43 and S44;

S43: Measure, in turn, the similarity between the local feature $w_i$ of image i and the local features of all images in the local feature library, computed as follows:

$$s(w_i - w_j) = \frac{1}{N} \sum_{k=1}^{N} \left\| w_i^k - w_j^k \right\|_2 \tag{4}$$

where $s(w_i - w_j)$ denotes the similarity between the local feature $w_i$ of image i and the local feature $w_j$ of the j-th image in the local feature library, $w_j$ denotes the local feature of the j-th library image, $w_i^k$ denotes the local feature corresponding to the k-th line in image i, $w_j^k$ denotes the local feature corresponding to the k-th line in the j-th library image, and N denotes the number of lines in image i;
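Formula (4) can be sketched as follows, under the simplifying assumption that the k-th descriptor of the query image is compared with the k-th descriptor of the library image (the patent does not spell out how lines are paired):

```python
# Mean L2 distance between corresponding line descriptors, i.e. formula (4).
import numpy as np

def local_similarity(w_i: np.ndarray, w_j: np.ndarray) -> float:
    """w_i, w_j: (N, desc_dim) stacks of line descriptors for two images."""
    n = min(len(w_i), len(w_j))   # guard against unequal line counts
    if n == 0:
        return float("inf")       # no lines: treat as maximally dissimilar
    return float(np.mean(np.linalg.norm(w_i[:n] - w_j[:n], axis=1)))
```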

S44: Judge whether the similarity between the local feature $w_i$ of image i and the local feature $w_j$ of the j-th image in the local feature library is less than a second set threshold. If so, calibrate the train position; if not, measure the train speed in real time with the millimeter-wave radar sensor and integrate it to predict the train's position between two consecutive key positions.

In a possible implementation of the train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention, measuring the train speed in real time with the millimeter-wave radar sensor in step S4, integrating it, and predicting the train's position between two consecutive key positions specifically includes:

Obtain the real-time data of the forward obstacles detected by the millimeter-wave radar sensor:

$$R = \left\{ (x_p,\, y_p,\, v_p) \;\middle|\; p = 1, 2, \ldots, n \right\} \tag{5}$$

where $x_p$ denotes the lateral distance of the p-th forward obstacle, $y_p$ its longitudinal distance, $v_p$ its velocity relative to the train, and n the number of forward obstacles;

Extract the real-time train speed: tally the velocities $v_1, v_2, \ldots, v_m$ of all forward obstacles relative to the train, count the number of forward obstacles at each velocity, and take the velocity with the largest count as the velocity of the stationary obstacles relative to the train:

$$\operatorname{num}(v_s) = \max\{\operatorname{num}(v_1), \operatorname{num}(v_2), \ldots, \operatorname{num}(v_m)\} \tag{6}$$

where $v_s$ denotes the velocity of the stationary obstacles relative to the train, $\operatorname{num}(v_s)$ the number of forward obstacles at velocity $v_s$, $\operatorname{num}(v_1)$ the number at $v_1$, $\operatorname{num}(v_2)$ the number at $v_2$, $\operatorname{num}(v_m)$ the number at $v_m$, and m the number of distinct velocity values in the current radar frame, with m ≤ n; the train speed is then $-v_s$;
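A minimal sketch of this majority-velocity rule; the 0.1 m/s quantization is an added assumption that makes equal-value counting robust to sensor noise:

```python
# Formula (6) as code: histogram the relative velocities reported by the
# radar, take the most frequent value v_s as the velocity of the static
# background, and negate it to recover the train speed.
import numpy as np

def train_speed_from_radar(rel_velocities: np.ndarray) -> float:
    """rel_velocities: (n,) obstacle velocities relative to the train."""
    binned = np.round(rel_velocities / 0.1).astype(int)
    values, counts = np.unique(binned, return_counts=True)
    v_s = values[np.argmax(counts)] * 0.1  # dominant (static) velocity
    return -float(v_s)                     # the train speed is -v_s
```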

Integrate the train speed to estimate the train position:

$$p_t = p_{r-1} + \int v(t)\,\mathrm{d}t, \qquad r = 1, 2, \ldots, l \tag{7}$$

where $p_t$ denotes the position of the train at time t, $p_{r-1}$ denotes the key position most recently passed before time t, l denotes the total number of key positions, $v(t)$ denotes the real-time train speed, and $\int v(t)\,\mathrm{d}t$ denotes the integral of the speed from the previous key position up to time t.
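A minimal sketch of formula (7), assuming speed samples with timestamps collected since the last detected key position and trapezoidal integration:

```python
# Accumulate the speed integral from the last key position p_{r-1}.
def estimate_position(p_prev: float, speeds, stamps) -> float:
    """p_prev: track offset of the last key position (m);
    speeds, stamps: samples of v(t) in m/s and t in s since that position."""
    s = 0.0
    for k in range(1, len(speeds)):
        dt = stamps[k] - stamps[k - 1]
        s += 0.5 * (speeds[k] + speeds[k - 1]) * dt  # trapezoid rule
    return p_prev + s
```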

The train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention collects image data of the entire line with on-board sensors, selects key position images, extracts their features, and builds a feature library; the features of the image captured at the current moment are extracted and queried against the library to judge whether the current train position is a key position or an inter-station position, and in the latter case the position is estimated from the train speed. Image features are extracted visually, fusing global features with local features: the global features are extracted by deep learning and the local features by line detection and line feature descriptors, so the image features are extracted more fully. When searching for a key position, global feature similarity and local feature similarity are measured separately, making key position detection more accurate. The train speed is measured by the millimeter-wave radar without depending on the vehicle signal system: it is obtained by inverse calculation from the obstacles detected by the radar. Because the invention positions the train with on-board sensors alone, no roadside equipment is required, which effectively reduces positioning cost, and positioning on the fused data effectively overcomes the difficulty that a train cannot be positioned in environments such as tunnels.

Brief Description of the Drawings

FIG. 1 is a flow chart of the train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention;

FIG. 2 is a diagram of the train positioning architecture based on the fusion of vision and millimeter-wave radar in Embodiment 1 of the present invention;

FIG. 3 is an example diagram of image feature extraction fusing global and local features in Embodiment 1 of the present invention;

FIG. 4 is an example diagram of global feature extraction in Embodiment 1 of the present invention;

FIG. 5 is an example diagram of local feature extraction in Embodiment 1 of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are illustrative only and are not intended to limit the present invention.

The train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention, as shown in FIG. 1, comprises the following steps:

S1: Install a camera and a millimeter-wave radar sensor at the head of the train, and collect synchronized data over the entire line with them; the camera frame and the radar frame whose timestamps are closest to each other are taken as one pair of synchronized data;

Specifically, the camera must be mounted so that it images the area ahead of the train, and its position is fixed after installation; the millimeter-wave radar sensor must likewise face the track area ahead of the train. Synchronized acquisition requires collecting synchronized camera and radar data: from the timestamps of the acquired data, the camera frame and the radar frame whose timestamps are closest are selected as one pair of synchronized data;

S2: Select the key positions on the line, obtain key position images from the collected synchronized data, extract features from the key position images, and build a key position visual data feature library;

Specifically, a key position image is an image of a region of the line with distinctive features. Key position feature extraction covers both the global and the local features of the image: the global features are extracted by deep learning, and the local features by detecting the lines in the image and describing them. Building the key position visual data feature library means saving the extracted features for later queries;

S3: While the train is running, capture images in real time with the camera and extract features from them;

Specifically, feature extraction for the real-time images likewise covers global and local features; it is analogous to the feature extraction in step S2 and is not repeated here;

S4: Measure the similarity between the features of the real-time image and the features of the images in the key position visual data feature library to judge whether the train has reached the key position corresponding to a library image. If so, calibrate the train position; if not, measure the train speed in real time with the millimeter-wave radar sensor and integrate it to predict the train's position between two consecutive key positions;

Specifically, the train speed is measured with the millimeter-wave radar sensor mainly by detecting forward obstacles and obtaining their velocities relative to the train; from these, the velocity of the stationary obstacles relative to the train is extracted, and the train speed is recovered by negation. Integrating the train speed yields the accumulated distance, which, combined with the last key position, gives the current position of the train.

A specific embodiment of the above train positioning method based on the fusion of vision and millimeter-wave radar is described in detail below.

Embodiment 1:

FIG. 2 shows the architecture of train positioning based on the fusion of vision and millimeter-wave radar, whose purpose is real-time positioning of the train. A subway train is selected and positioned on its line, which contains 16 stations. The specific implementation comprises the following:

Step 1: Install the camera and the millimeter-wave radar sensor

An industrial camera and a millimeter-wave radar sensor are installed at the windshield of the subway train, facing the direction of travel, so that environmental data ahead of the train can be collected.

Step 2: Collect synchronized data

The installed camera and millimeter-wave radar sensor collect synchronized data: video of the line to be positioned and the corresponding radar data. Synchronized data are collected over the entire subway line; during acquisition, the camera frame and the radar frame whose timestamps are closest are selected as one pair of synchronized data.

Step 3: Build the key position visual data feature library

The key position visual data feature library is built from the collected synchronized data. Before building it, the key positions on the line must be selected; this embodiment selects the positions where the train stops at each station (i.e., the platform positions) as the key positions and saves the image captured at each stop, thereby obtaining the key position images of all stations on the line. Key position features are extracted from these images and the library is built, as shown in FIG. 3; the extracted features comprise global and local features, and the procedure is as follows:

(1) Extract global features from each acquired key position image frame with a convolutional neural network; the extraction process is shown in FIG. 4.

First, scale each key position image frame to a size of 224×224;

Then, reduce the feature map size of each frame with one convolution operation;

Next, extract features with seven inverted residual blocks;

Finally, obtain a 1280-dimensional vector through an average pooling layer; this vector represents the global feature of the key position image.
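The backbone described here (one convolution stem, seven inverted-residual stages, and average pooling to a 1280-dimensional vector) matches the MobileNetV2 architecture; a sketch using torchvision's implementation, under that assumption:

```python
# Global feature extraction with a MobileNetV2-style backbone: conv stem and
# inverted residual blocks, global average pooling to 1280 dimensions, then
# the L2 normalization of formula (1).
import torch
from torchvision import models

backbone = models.mobilenet_v2(weights=None).features  # stem + inverted residuals
backbone.eval()

def global_feature(image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, 224, 224) normalized tensor; returns a (1280,) vector."""
    with torch.no_grad():
        fmap = backbone(image)                  # (1, 1280, 7, 7) feature map
        vec = fmap.mean(dim=(2, 3)).squeeze(0)  # global average pooling
    return vec / vec.norm().clamp_min(1e-12)    # L2 normalization
```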

(2) Because the resulting 1280-dimensional vector is unevenly distributed, L2 normalization can be applied to obtain a more uniformly distributed vector:

$$W_q = \frac{w_q}{\sqrt{\sum_{k=1}^{d} w_k^2}}, \qquad q = 1, 2, \ldots, d \tag{1}$$

where d = 1280 is the dimension of the vector, $(w_1, \ldots, w_d)$ is the pooled vector before normalization, $(W_1, W_2, \ldots, W_d)$ is the normalized vector representing the global feature of the key position image, and q = 1, 2, ..., d.

(3) Extract local features from each acquired key position image frame, namely line features and the corresponding line descriptors.

The line feature extraction process, shown in FIG. 5, is as follows:

First, smooth each key position image frame; this embodiment smooths the key position image shown in FIG. 5(a) with a 5×5 Gaussian kernel, giving the smoothed image in FIG. 5(b);

Then, obtain the gradient map of each frame; this embodiment uses the Sobel operator, giving the gradient map in FIG. 5(c);

Next, obtain the anchor point image of each frame: in FIG. 5(c), take the maximum pixel value within each 3-pixel neighborhood along the horizontal and vertical directions; the position of each maximum is an anchor point, and the anchor map is shown in FIG. 5(d);

After that, obtain the edge map of each frame by connecting all anchor points in FIG. 5(d), as shown in FIG. 5(e);

Finally, detect lines by fitting the edge map of FIG. 5(e) with least squares; the detected lines are shown in FIG. 5(f).

The detected lines are then described: this embodiment describes each detected line with a local band descriptor and represents the local features of each key position image frame by all of its line descriptors.

(4) Save the extracted global and local features of every key position image frame; associate the vector extracted at each station with its station number to build the global feature library of all stations; and build the local feature library from all extracted line descriptors with a bag-of-visual-words model.

Step 4: Key position detection

While the train runs on the line, the camera captures forward-facing images in real time, and the global and local features of every frame are extracted. After each frame is processed, its global features are matched against the global feature library and its local features against the local feature library, to judge whether the train has reached a key position.

(1) Extract global features from the real-time images with a convolutional neural network: scale each real-time image to the same size, reduce its feature map size with one convolution operation, extract features with the inverted residual network, and obtain a 1280-dimensional vector through the average pooling layer; then apply L2 normalization:

$$W'_q = \frac{w'_q}{\sqrt{\sum_{k=1}^{d} (w'_k)^2}}, \qquad q = 1, 2, \ldots, d \tag{2}$$

where d = 1280 is the dimension of the vector, $(W'_1, W'_2, \ldots, W'_d)$ is the normalized vector representing the global feature of the real-time image, and q = 1, 2, ..., d. This global feature extraction is analogous to that applied to the key position images in Step 3, items (1) and (2), and is not repeated here.

(2) Extract local features from the real-time image, namely line features and the corresponding line descriptors: smooth the image, obtain its gradient map with the Sobel operator, take the maximum pixel value within each 3-pixel neighborhood along the horizontal and vertical directions as an anchor point, connect all anchor points to obtain the edge map, fit lines to the edge map with least squares, describe each detected line with a local band descriptor, and represent the local features of the real-time image by all of its line descriptors. This local feature extraction is analogous to that applied to the key position images in Step 3, item (3), and is not repeated here.

(3) Measure, in turn, the similarity between the global feature of the real-time image and the global features of all images in the global feature library, computed as follows:

$$D(i,j) = \left\| W_g^i - W_g^j \right\|_2 \tag{3}$$

where i denotes the i-th image acquired in real time, $W_g^i$ denotes the global feature of image i, $W_g^j$ denotes the global feature of the j-th image in the global feature library, $D(i,j)$ denotes the similarity between $W_g^i$ and $W_g^j$, and $\|\cdot\|_2$ denotes the L2 norm;

If the similarity $D(i,j)$ between the global feature $W_g^i$ of image i and the global feature $W_g^j$ of the j-th image in the global feature library is less than the first set threshold, the current train position may be a key position, and the local feature similarity must then be measured.

(4) Measure, in turn, the similarity between the local feature $w_i$ of image i and the local features of all images in the local feature library, computed as follows:

$$s(w_i - w_j) = \frac{1}{N} \sum_{k=1}^{N} \left\| w_i^k - w_j^k \right\|_2 \tag{4}$$

where $s(w_i - w_j)$ denotes the similarity between the local feature $w_i$ of image i and the local feature $w_j$ of the j-th image in the local feature library, $w_j$ denotes the local feature of the j-th library image, $w_i^k$ denotes the local feature corresponding to the k-th line in image i, $w_j^k$ denotes the local feature corresponding to the k-th line in the j-th library image, and N denotes the number of lines in image i;

If the similarity between the local feature $w_i$ of image i and the local feature $w_j$ of the j-th image in the local feature library is less than the second set threshold, the current train position is determined to be that key position.

Step 5: Inter-station position estimation

If the train has not reached a key position, its position within the interval between key positions must be estimated, as follows:

(1) Obtain the real-time data of the forward obstacles detected by the millimeter-wave radar sensor. The radar data are:

$$R = \left\{ (x_p,\, y_p,\, v_p) \;\middle|\; p = 1, 2, \ldots, n \right\} \tag{5}$$

where $x_p$ denotes the lateral distance of the p-th forward obstacle, $y_p$ its longitudinal distance, $v_p$ its velocity relative to the train, and n the number of forward obstacles;

(2) Extract the real-time train speed: tally the velocities $v_1, v_2, \ldots, v_m$ of all forward obstacles relative to the train, count the number of forward obstacles at each velocity, and take the velocity with the largest count as the velocity of the stationary obstacles relative to the train:

$$\operatorname{num}(v_s) = \max\{\operatorname{num}(v_1), \operatorname{num}(v_2), \ldots, \operatorname{num}(v_m)\} \tag{6}$$

where $v_s$ denotes the velocity of the stationary obstacles relative to the train, $\operatorname{num}(v_s)$ the number of forward obstacles at velocity $v_s$, $\operatorname{num}(v_1)$ the number at $v_1$, $\operatorname{num}(v_2)$ the number at $v_2$, $\operatorname{num}(v_m)$ the number at $v_m$, and m the number of distinct velocity values in the current radar frame, with m ≤ n. Because most obstacles along the line are stationary, most objects measured by the radar are static, and the velocity the radar reports is the relative velocity between train and obstacle; hence the velocity value shared by most obstacles is the velocity of the train relative to the static background. With the velocity of the stationary obstacles relative to the train being $v_s$, the train speed is $-v_s$;

(3) Train position estimation: integrate the train speed to estimate the train position:

$$p_t = p_{r-1} + \int v(t)\,\mathrm{d}t, \qquad r = 1, 2, \ldots, l \tag{7}$$

where $p_t$ denotes the position of the train at time t, $p_{r-1}$ denotes the key position most recently passed before time t, l denotes the total number of key positions, $v(t)$ denotes the real-time train speed, and $\int v(t)\,\mathrm{d}t$ denotes the integral of the speed from the previous key position up to time t.

The train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention collects image data of the entire line with on-board sensors, selects key position images, extracts their features, and builds a feature library; the features of the image captured at the current moment are extracted and queried against the library to judge whether the current train position is a key position or an inter-station position, and in the latter case the position is estimated from the train speed. Image features are extracted visually, fusing global features with local features: the global features are extracted by deep learning and the local features by line detection and line feature descriptors, so the image features are extracted more fully. When searching for a key position, global feature similarity and local feature similarity are measured separately, making key position detection more accurate. The train speed is measured by the millimeter-wave radar without depending on the vehicle signal system: it is obtained by inverse calculation from the obstacles detected by the radar. Because the invention positions the train with on-board sensors alone, no roadside equipment is required, which effectively reduces positioning cost, and positioning on the fused data effectively overcomes the difficulty that a train cannot be positioned in environments such as tunnels.

Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (4)

Translated from Chinese
1. A train positioning method based on the fusion of vision and millimeter-wave radar, characterized in that it comprises the following steps:
S1: installing a camera and a millimeter-wave radar sensor at the head of the train, and collecting synchronized data over the entire line with them, wherein the camera frame and the radar frame whose timestamps are closest to each other are taken as one pair of synchronized data;
S2: selecting the key positions on the line, obtaining key position images from the collected synchronized data, extracting features from the key position images, and building a key position visual data feature library;
S3: while the train is running, capturing images in real time with the camera and extracting features from them;
S4: measuring the similarity between the features of the real-time image and the features of the images in the key position visual data feature library to judge whether the train has reached the key position corresponding to a library image; if so, calibrating the train position; if not, measuring the train speed in real time with the millimeter-wave radar sensor and integrating it to predict the train's position between two consecutive key positions;
wherein step S2, selecting the key positions on the line, obtaining key position images from the collected synchronized data, extracting features from them, and building the key position visual data feature library, specifically includes:
S21: selecting each station position on the line as a key position and obtaining the key position images from the collected synchronized data;
S22: extracting global features from each acquired key position image frame with a convolutional neural network: scaling every frame to the same size, reducing the feature map size with one convolution operation, extracting features with an inverted residual network, and obtaining a 1280-dimensional vector through an average pooling layer; then applying L2 normalization to the 1280-dimensional vector:
$$W_q = \frac{w_q}{\sqrt{\sum_{k=1}^{d} w_k^2}}, \qquad q = 1, 2, \ldots, d \tag{1}$$
where d = 1280 is the dimension of the vector, $(w_1, \ldots, w_d)$ is the pooled vector before normalization, $(W_1, W_2, \ldots, W_d)$ is the normalized vector representing the global feature of the key position image, and q = 1, 2, ..., d;
S23: extracting local features from each acquired key position image frame, namely line features and the corresponding line descriptors: smoothing each frame, obtaining its gradient map with the Sobel operator, taking the maximum pixel value within each 3-pixel neighborhood along the horizontal and vertical directions, the position of each maximum being an anchor point, connecting all anchor points to obtain the edge map of the frame, and fitting lines to the edge map by least squares to detect the lines; describing each detected line with a local band descriptor, and representing the local features of the frame by all of its line descriptors;
S24: saving the extracted global and local features of every key position image frame; associating the vector extracted at each station with its station number to build the global feature library of all stations; and building the local feature library from all extracted line descriptors with a bag-of-visual-words model.

2. The train positioning method based on the fusion of vision and millimeter-wave radar according to claim 1, characterized in that step S3, capturing images in real time with the camera while the train is running and extracting features from them, specifically includes:
S31: extracting global features from the real-time images with a convolutional neural network: scaling every real-time image to the same size, reducing the feature map size with one convolution operation, extracting features with an inverted residual network, and obtaining a 1280-dimensional vector through an average pooling layer; then applying L2 normalization to the 1280-dimensional vector:
$$W'_q = \frac{w'_q}{\sqrt{\sum_{k=1}^{d} (w'_k)^2}}, \qquad q = 1, 2, \ldots, d \tag{2}$$
where d = 1280 is the dimension of the vector, $(W'_1, W'_2, \ldots, W'_d)$ is the normalized vector representing the global feature of the real-time image, and q = 1, 2, ..., d;
S32: extracting local features from the real-time image, namely line features and the corresponding line descriptors: smoothing the image, obtaining its gradient map with the Sobel operator, taking the maximum pixel value within each 3-pixel neighborhood along the horizontal and vertical directions, the position of each maximum being an anchor point, connecting all anchor points to obtain the edge map, and fitting lines to the edge map by least squares to detect the lines; describing each detected line with a local band descriptor, and representing the local features of the real-time image by all of its line descriptors.
3. The train positioning method based on the fusion of vision and millimeter-wave radar according to claim 1, characterized in that step S4, measuring the similarity between the features of the real-time image and the features of the images in the key position visual data feature library, judging whether the train has reached the key position corresponding to a library image, calibrating the train position if so, and otherwise measuring the train speed in real time with the millimeter-wave radar sensor and integrating it to predict the train's position between two consecutive key positions, specifically includes:
S41: measuring, in turn, the similarity between the global feature of the real-time image and the global features of all images in the global feature library, computed as follows:
$$D(i,j) = \left\| W_g^i - W_g^j \right\|_2 \tag{3}$$
where i denotes the i-th image acquired in real time, $W_g^i$ denotes the global feature of image i, $W_g^j$ denotes the global feature of the j-th image in the global feature library, $D(i,j)$ denotes the similarity between $W_g^i$ and $W_g^j$, and $\|\cdot\|_2$ denotes the L2 norm;
S42: judging whether the similarity $D(i,j)$ between the global feature $W_g^i$ of image i and the global feature $W_g^j$ of the j-th image in the global feature library is less than a first set threshold; if not, measuring the train speed in real time with the millimeter-wave radar sensor and integrating it to predict the train's position between two consecutive key positions; if so, executing steps S43 and S44;
S43: measuring, in turn, the similarity between the local feature $w_i$ of image i and the local features of all images in the local feature library, computed as follows:
$$s(w_i - w_j) = \frac{1}{N} \sum_{k=1}^{N} \left\| w_i^k - w_j^k \right\|_2 \tag{4}$$
where $s(w_i - w_j)$ denotes the similarity between the local feature $w_i$ of image i and the local feature $w_j$ of the j-th image in the local feature library, $w_j$ denotes the local feature of the j-th library image, $w_i^k$ denotes the local feature corresponding to the k-th line in image i, $w_j^k$ denotes the local feature corresponding to the k-th line in the j-th library image, and N denotes the number of lines in image i;
S44: judging whether the similarity s(w_i - w_j) between the local feature w_i of image i and the local feature w_j of the j-th image in the local feature library is less than a second set threshold; if so, calibrating the train position; if not, measuring the speed of the train in real time with the millimeter-wave radar sensor and integrating the measured speed to predict the position of the train within the interval between two key positions.
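As a rough illustration of S41 and S42, the sketch below compares the 1280-dimensional global feature of the live frame against a feature library by L2 distance and gates the result with the first threshold. The threshold value, array layout, and function name are assumptions, not values from the patent.

# Sketch of S41/S42: global feature matching by L2 distance.
import numpy as np

THRESHOLD_GLOBAL = 0.4  # hypothetical "first set threshold"

def match_global(w_i, library):
    # library: (num_key_positions, 1280) array of stored global features W_j;
    # w_i: (1280,) regularized global feature of the live frame.
    d = np.linalg.norm(library - w_i, axis=1)  # D(i, j) for every j
    j = int(np.argmin(d))
    # Candidate key position only if the best distance clears the threshold.
    return (j, float(d[j])) if d[j] < THRESHOLD_GLOBAL else (None, float(d[j]))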
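And a companion sketch of S43 and S44, under the same caveats: per-line descriptors are compared pairwise (the k-th line of the live frame against the k-th stored line, as the claim states) and the mean distance is gated by the second threshold to confirm the key position.

# Sketch of S43/S44: local (line descriptor) matching with a second threshold.
import numpy as np

THRESHOLD_LOCAL = 0.5  # hypothetical "second set threshold"

def match_local(lines_i, lines_j):
    # lines_i: (N, D) line descriptors of the live frame;
    # lines_j: (M, D) descriptors of the candidate key-position image.
    n = min(len(lines_i), len(lines_j))
    if n == 0:
        return False
    s = float(np.mean(np.linalg.norm(lines_i[:n] - lines_j[:n], axis=1)))
    return s < THRESHOLD_LOCAL  # True: key position confirmed, calibrate position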
4. The train positioning method based on the fusion of vision and millimeter-wave radar according to claim 1, wherein, in step S4, measuring the speed of the train in real time with the millimeter-wave radar sensor and integrating the measured speed to predict the position of the train within the interval between two key positions specifically includes:

acquiring real-time data of the forward obstacles detected by the millimeter-wave radar sensor:
{(x_1, y_1, v_1), (x_2, y_2, v_2), ..., (x_n, y_n, v_n)} (5)
where x_p denotes the lateral distance of the p-th forward obstacle, y_p denotes the longitudinal distance of the p-th forward obstacle, v_p denotes the speed of the p-th forward obstacle relative to the train, and n denotes the number of forward obstacles;

extracting the real-time speed of the train: tallying the relative speeds v_1, v_2, ..., v_m of all forward obstacles, counting the number of forward obstacles at each speed, and taking the speed with the largest count as the speed of stationary obstacles relative to the train:

num(v_s) = max{num(v_1), num(v_2), ..., num(v_m)} (6)

where v_s denotes the speed of stationary obstacles relative to the train, num(v_s) denotes the number of forward obstacles at speed v_s, num(v_1) denotes the number of forward obstacles at speed v_1, num(v_2) denotes the number of forward obstacles at speed v_2, num(v_m) denotes the number of forward obstacles at speed v_m, and m denotes the number of distinct speed values in the current frame of millimeter-wave radar data, with m ≤ n; the speed of the train is -v_s;

integrating the speed of the train to estimate the train position:
P_t = p_{r-1} + ∫ v(t) d(t) (7)

where P_t denotes the position of the train at time t, p_{r-1} denotes the key position most recently passed before time t, l denotes the total number of key positions, v(t) denotes the real-time speed of the train, and ∫ v(t) d(t) denotes the integral of the speed from the previous key position up to time t.
CN202010517233.4A | 2020-06-09 | 2020-06-09 | Train positioning method based on vision and millimeter wave radar fusion | Active | CN111856441B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010517233.4A CN111856441B (en) | 2020-06-09 | 2020-06-09 | Train positioning method based on vision and millimeter wave radar fusion

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010517233.4A CN111856441B (en) | 2020-06-09 | 2020-06-09 | Train positioning method based on vision and millimeter wave radar fusion

Publications (2)

Publication Number | Publication Date
CN111856441A (en) | 2020-10-30
CN111856441B (en) | 2023-04-25

Family

ID=72987309

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010517233.4A | CN111856441B (en) Train positioning method based on vision and millimeter wave radar fusion (Active) | 2020-06-09 | 2020-06-09

Country Status (1)

Country | Link
CN (1) | CN111856441B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112528763B (en)* | 2020-11-24 | 2024-06-21 | 浙江华锐捷技术有限公司 | Target detection method, electronic equipment and computer storage medium
CN113189583B (en)* | 2021-04-26 | 2022-07-01 | 天津大学 | Time-space synchronized millimeter-wave radar and visual information fusion method
CN116433883A (en)* | 2021-12-28 | 2023-07-14 | 上海泽高电子工程技术股份有限公司 | Rail vehicle positioning method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2018080609A2 (en)* | 2016-07-29 | 2018-05-03 | Remote Sensing Solutions, Inc. | Mobile radar for visualizing topography

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107545239A (en)* | 2017-07-06 | 2018-01-05 | 南京理工大学 | Cloned-plate vehicle detection method based on matching license plate recognition with vehicle features
CN109031304A (en)* | 2018-06-06 | 2018-12-18 | 上海国际汽车城(集团)有限公司 | Vehicle positioning method in tunnels based on vision and millimeter-wave radar map features
CN108983219A (en)* | 2018-08-17 | 2018-12-11 | 北京航空航天大学 | Method and system for fusing image information and radar information of traffic scenes
CN109947097A (en)* | 2019-03-06 | 2019-06-28 | 东南大学 | Robot positioning method and navigation application based on vision and laser fusion
CN110135485A (en)* | 2019-05-05 | 2019-08-16 | 浙江大学 | Object recognition and positioning method and system based on fusion of a monocular camera and millimeter-wave radar
CN110398731A (en)* | 2019-07-11 | 2019-11-01 | 北京埃福瑞科技有限公司 | Train speed measuring system and method
CN110415297A (en)* | 2019-07-12 | 2019-11-05 | 北京三快在线科技有限公司 | Positioning method, device, and unmanned equipment
CN110587597A (en)* | 2019-08-01 | 2019-12-20 | 深圳市银星智能科技股份有限公司 | SLAM loop-closure detection method and system based on lidar

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automotive radar and camera fusion using Generative Adversarial Networks; Vladimir Lekic et al.; Computer Vision and Image Understanding; 2019-07-31; Vol. 184; pp. 1-8 *
Lane-level positioning method based on vision and millimeter-wave radar; 赵翔 et al.; 上海交通大学学报 (Journal of Shanghai Jiao Tong University); 2018-01-31; pp. 33-38 *

Also Published As

Publication number | Publication date
CN111856441A (en) | 2020-10-30

Similar Documents

Publication | Title
CN111554088B (en) | Multifunctional V2X intelligent roadside base station system
CN109084786B (en) | Map data processing method
CN108256413B (en) | Passable area detection method and device, storage medium and electronic equipment
CN111856441B (en) | Train positioning method based on vision and millimeter wave radar fusion
Angel et al. | Methods of analyzing traffic imagery collected from aerial platforms
CN111275960A (en) | Traffic road condition analysis method, system and camera
KR20180046798A (en) | Method and apparatus for real time traffic information provision
CN106997688A (en) | Parking position detection method based on multi-sensor information fusion
KR101735557B1 (en) | System and method for collecting traffic information using real-time object detection
KR102245580B1 (en) | Control server for estimating traffic density using ADAS probe data
WO2022041706A1 (en) | Positioning method, positioning system, and vehicle
EP3550538B1 (en) | Information processing apparatus, information processing method, and program
CN117612127B (en) | Scene generation method and device, storage medium and electronic equipment
CN111914691A (en) | Rail transit vehicle positioning method and system
CN104392612A (en) | Urban traffic state monitoring method based on novel detection vehicles
CN111183464B (en) | System and method for estimating saturation flow at signalized intersections based on vehicle trajectory data
CN104422426A (en) | Method and apparatus for providing vehicle navigation information within elevated road regions
CN115980754A (en) | Vehicle detection and tracking method fusing sensor information
US11753018B2 (en) | Lane-type and roadway hypotheses determinations in a road model
CN111619589B (en) | Automatic driving control method for complex environments
US20210048819A1 (en) | Apparatus and method for determining junction
CN119169590B (en) | Perception model evaluation method and device, storage medium and electronic device
CN114216469B (en) | Method for updating high-precision map, intelligent base station and storage medium
CN114882702A (en) | Road congestion movement detection and early warning system and method based on light-vision fusion
CN110210324A (en) | Rapid road-target detection and early-warning method and system

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
