Technical Field

The present invention relates to the technical field of autonomous environment perception and positioning for driverless rail trains, and in particular to a train positioning method based on the fusion of vision and millimeter-wave radar.

Background Art

With the rapid growth of China's cities and the steady acceleration of urbanization, the urban population and per-capita motor vehicle ownership have risen sharply, and traffic congestion has become increasingly serious. Urban rail transit, with its large passenger capacity, high transport efficiency, and low energy consumption, has become an inevitable choice for relieving urban traffic congestion.

A driverless train system does not depend on a driver during operation and can effectively improve both operating efficiency and safety. Such a system requires real-time, accurate train position information, which provides an important guarantee for vehicle scheduling and speed control.

At present, trains are positioned mainly by installing wayside equipment, such as transponders, along the track. This approach suffers from long deployment periods and high deployment costs. In recent years, some researchers have used on-board sensors, such as on-board BeiDou or GPS receivers, for real-time train positioning. This works well when the satellite signal is good; however, when the satellite signal is blocked or lost, it cannot meet practical application requirements.
Summary of the Invention

In view of this, the present invention provides a train positioning method based on the fusion of vision and millimeter-wave radar, which fuses visual image data with millimeter-wave radar data to achieve autonomous train positioning on lines that include environments such as tunnels.

The train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention comprises the following steps:

S1: Install a camera and a millimeter-wave radar sensor at the head of the train, and collect synchronized data over the entire line with them; here, the camera frame and the radar frame whose timestamps are closest to each other are taken as a pair of synchronized data.

S2: Select key positions along the line, obtain key-position images from the collected synchronized data, extract features from the key-position images, and build a key-position visual feature library.

S3: While the train is running, capture images in real time with the camera and extract features from them.

S4: Measure the similarity between the features of each real-time image and the features of the images in the key-position visual feature library, and judge whether the train has reached the key position corresponding to a library image. If so, calibrate the train position; if not, measure the train speed in real time with the millimeter-wave radar sensor and integrate the measured speed to predict the train's position within the interval between two key positions.
In a possible implementation of the above method, step S2 — selecting key positions along the line, obtaining key-position images from the collected synchronized data, extracting features from the key-position images, and building the key-position visual feature library — specifically includes:

S21: Select each station location along the line as a key position, and obtain key-position images from the collected synchronized data;

S22: Perform global feature extraction on each acquired key-position image frame by a convolutional neural network: scale all key-position images to the same size, apply one convolution operation to reduce the feature-map size of each frame, extract features with inverted residual blocks, and obtain a 1280-dimensional vector through an average pooling layer; then apply L2 normalization to this vector:

$$W_q = \frac{w_q}{\sqrt{\sum_{t=1}^{d} w_t^2}}, \quad q = 1, 2, \ldots, d \tag{1}$$

where $d = 1280$ is the dimension of the vector, $(w_1, w_2, \ldots, w_d)$ is the raw pooled vector, and $(W_1, W_2, \ldots, W_d)$ is the normalized vector representing the global feature of the key-position image;

S23: Perform local feature extraction on each acquired key-position image frame, comprising line features and the corresponding line descriptors: smooth each key-position image; obtain its gradient map with the Sobel operator; take the maximum pixel value within each 3-pixel neighborhood in the horizontal and vertical directions, the location of each maximum being an anchor point; connect all anchor points to obtain the edge map of the frame; fit lines to the edge map by least squares to detect the lines; describe each detected line with a local band descriptor; and represent the local features of each frame by the set of all line descriptors in that frame;

S24: Save the extracted global and local features of every key-position image frame; associate the vector extracted at each station with its station number to build the global feature library for all stations; and build the local feature library from all extracted line descriptors using a bag-of-visual-words model.
In a possible implementation of the above method, step S3 — capturing images in real time with the camera while the train is running and extracting features from them — specifically includes:

S31: Perform global feature extraction on each real-time image by a convolutional neural network: scale the real-time images to the same size, apply one convolution operation to reduce the feature-map size of each image, extract features with inverted residual blocks, and obtain a 1280-dimensional vector through an average pooling layer; then apply L2 normalization to this vector:

$$W'_q = \frac{w'_q}{\sqrt{\sum_{t=1}^{d} (w'_t)^2}}, \quad q = 1, 2, \ldots, d \tag{2}$$

where $d = 1280$ is the dimension of the vector and $(W'_1, W'_2, \ldots, W'_d)$ is the normalized vector representing the global feature of the real-time image;

S32: Perform local feature extraction on each real-time image, comprising line features and the corresponding line descriptors: smooth the image; obtain its gradient map with the Sobel operator; take the maximum pixel value within each 3-pixel neighborhood in the horizontal and vertical directions, the location of each maximum being an anchor point; connect all anchor points to obtain the edge map; fit lines to the edge map by least squares to detect the lines; describe each detected line with a local band descriptor; and represent the local features of the real-time image by the set of all its line descriptors.
In a possible implementation of the above method, step S4 — measuring the similarity between the features of the real-time image and the features of the library images, judging whether the train has reached the corresponding key position, calibrating the train position if so, and otherwise measuring the train speed in real time with the millimeter-wave radar sensor and integrating it to predict the train's position within the interval between two key positions — specifically includes:

S41: Measure, in turn, the similarity between the global feature of the real-time image and the global features of all images in the constructed global feature library:

$$D(i,j) = \left\| \mathbf{W}_i - \mathbf{W}_j \right\|_2 \tag{3}$$

where $i$ denotes the $i$-th real-time image, $\mathbf{W}_i$ its global feature, $\mathbf{W}_j$ the global feature of the $j$-th image in the global feature library, $D(i,j)$ the similarity between the two, and $\|\cdot\|_2$ the L2 norm;

S42: Judge whether $D(i,j)$ is less than a first set threshold. If not, measure the train speed in real time with the millimeter-wave radar sensor and integrate it to predict the train's position within the interval between two key positions; if so, execute steps S43 and S44;
S43: Measure, in turn, the similarity between the local feature $w_i$ of image $i$ and the local features of all images in the constructed local feature library:

$$s(w_i, w_j) = \frac{1}{N} \sum_{k=1}^{N} \left\| w_i^k - w_j^k \right\|_2 \tag{4}$$

where $s(w_i, w_j)$ is the similarity between the local feature $w_i$ of image $i$ and the local feature $w_j$ of the $j$-th library image, $w_i^k$ is the descriptor of the $k$-th line in image $i$, $w_j^k$ is the descriptor of the $k$-th line in the $j$-th library image, and $N$ is the number of lines in image $i$;

S44: Judge whether $s(w_i, w_j)$ is less than a second set threshold. If so, calibrate the train position; if not, measure the train speed in real time with the millimeter-wave radar sensor and integrate it to predict the train's position within the interval between two key positions.

In a possible implementation of the above method, measuring the train speed in real time with the millimeter-wave radar sensor in step S4 and integrating the measured speed to predict the train's position within the interval between two key positions specifically includes:
Obtain the real-time forward-obstacle data detected by the millimeter-wave radar sensor:

$$\{(x_p, y_p, v_p) \mid p = 1, 2, \ldots, n\} \tag{5}$$

where $x_p$ is the lateral distance of the $p$-th forward obstacle, $y_p$ its longitudinal distance, $v_p$ its velocity relative to the train, and $n$ the number of forward obstacles;

Extract the real-time train speed: tally the distinct relative velocities $v_1, v_2, \ldots, v_m$ of all forward obstacles, count the number of obstacles at each velocity, and take the velocity with the largest count as the velocity of stationary obstacles relative to the train:

$$\mathrm{num}(v_s) = \max\{\mathrm{num}(v_1), \mathrm{num}(v_2), \ldots, \mathrm{num}(v_m)\} \tag{6}$$

where $v_s$ is the velocity of stationary obstacles relative to the train, $\mathrm{num}(v_s)$ is the number of forward obstacles at velocity $v_s$ (and likewise for $v_1, v_2, \ldots, v_m$), and $m$ is the number of distinct velocity values in the current radar frame, with $m \le n$; the train speed is then $-v_s$;
Integrate the train speed to estimate the train position:

$$P(t) = p_{r-1} + \int v(t)\,\mathrm{d}t \tag{7}$$

where $P(t)$ is the train's position at time $t$, $p_{r-1}$ ($r-1 \in \{1, \ldots, l\}$) is the last key position passed before time $t$, $l$ is the total number of key positions, $v(t)$ is the real-time train speed, and $\int v(t)\,\mathrm{d}t$ is the integral of the speed from the last key position up to time $t$.
In the train positioning method provided by the present invention, on-board sensors collect image data over the entire line; key-position images are selected and their features extracted to build a feature library; the features of the image captured at the current moment are extracted and queried against the library to judge whether the train is at a key position or within an interval; if within an interval, the interval position is estimated from the train speed. Image features are extracted visually by fusing global and local features: the global features are extracted by deep learning, and the local features by line detection and line descriptors, making the feature extraction more complete. When searching for key positions, global and local feature similarities are measured separately, making key-position detection more accurate. The train speed is measured by the millimeter-wave radar independently of the vehicle signaling system, derived inversely from the obstacles the radar detects. Because the method relies on on-board sensors rather than wayside equipment, it can effectively reduce positioning cost, and by fusing the two data sources it overcomes the difficulty that trains cannot be positioned in environments such as tunnels.
Brief Description of the Drawings

FIG. 1 is a flowchart of the train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention;

FIG. 2 is an architecture diagram of train positioning based on the fusion of vision and millimeter-wave radar in Embodiment 1 of the present invention;

FIG. 3 is an example diagram of image feature extraction fusing global and local features in Embodiment 1 of the present invention;

FIG. 4 is an example diagram of global image feature extraction in Embodiment 1 of the present invention;

FIG. 5 is an example diagram of local image feature extraction in Embodiment 1 of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are merely illustrative and are not intended to limit the present invention.
The train positioning method based on the fusion of vision and millimeter-wave radar provided by the present invention, as shown in FIG. 1, comprises the following steps:

S1: Install a camera and a millimeter-wave radar sensor at the head of the train, and collect synchronized data over the entire line with them; here, the camera frame and the radar frame whose timestamps are closest to each other are taken as a pair of synchronized data.

Specifically, the camera must be mounted so that it views the area ahead of the train, with its position fixed after installation; the millimeter-wave radar sensor must likewise face the track area ahead of the train. Synchronized acquisition pairs camera and radar data by selecting, from the acquisition timestamps, the camera frame and the radar frame whose timestamps are closest to each other.
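As an illustration of this nearest-timestamp pairing, a minimal sketch follows; the list-of-(timestamp, data) layout is an assumption for illustration, not the output format of any particular sensor driver.

```python
from bisect import bisect_left

def pair_nearest(camera_frames, radar_frames):
    """Pair each camera frame with the radar frame whose timestamp is nearest.

    Both arguments are non-empty lists of (timestamp_seconds, data) tuples
    sorted by timestamp; this layout is a hypothetical example.
    """
    radar_ts = [t for t, _ in radar_frames]
    pairs = []
    for cam_t, cam_data in camera_frames:
        i = bisect_left(radar_ts, cam_t)
        # Candidates: the radar frames immediately before and after cam_t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(radar_ts)]
        j = min(candidates, key=lambda k: abs(radar_ts[k] - cam_t))
        pairs.append((cam_data, radar_frames[j][1]))
    return pairs
```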
S2: Select key positions along the line, obtain key-position images from the collected synchronized data, extract features from the key-position images, and build a key-position visual feature library.

Specifically, a key-position image is an image of a region of the line with distinctive features. Key-position feature extraction covers both global and local features: the global features are extracted by deep learning, and the local features by detecting lines in the image and describing the detected lines. Building the key-position visual feature library means saving the extracted features for later queries.

S3: While the train is running, capture images in real time with the camera and extract features from them.

Specifically, feature extraction on the real-time images likewise covers global and local features, similar to the feature extraction in step S2, and is not repeated here.

S4: Measure the similarity between the features of each real-time image and the features of the images in the key-position visual feature library, and judge whether the train has reached the key position corresponding to a library image. If so, calibrate the train position; if not, measure the train speed in real time with the millimeter-wave radar sensor and integrate the measured speed to predict the train's position within the interval between two key positions.

Specifically, the train speed is obtained from the millimeter-wave radar by detecting forward obstacles and measuring their velocities relative to the train; from these, the velocity of stationary obstacles relative to the train is extracted, and the train speed follows by inversion. Integrating the train speed yields the accumulated distance traveled, which, combined with the last key position, gives the train's current position.
The specific implementation of the above train positioning method based on the fusion of vision and millimeter-wave radar is described in detail below through a concrete embodiment.

Embodiment 1:

FIG. 2 shows the architecture of train positioning based on the fusion of vision and millimeter-wave radar, whose purpose is real-time train positioning. A metro train is selected and positioned on its line, which contains 16 stations. The specific implementation comprises the following:

Step 1: Install the camera and millimeter-wave radar sensor

An industrial camera and a millimeter-wave radar sensor are installed at the windshield of the metro train, with their fields of view facing the direction of travel, enabling acquisition of environment data ahead of the train.

Step 2: Collect synchronized data

The installed camera and millimeter-wave radar sensor perform synchronized data acquisition; the collected data are the video data and millimeter-wave radar data of the line to be positioned. Synchronized data are collected over the entire metro line, and during acquisition the camera frame and the radar frame with the closest timestamps are selected as a pair of synchronized data based on the acquisition timestamps.
Step 3: Build the key-position visual feature library

The collected synchronized data are used to build the key-position visual feature library. Before building it, the key positions along the line must be selected. In this embodiment, the positions where the train stops at each station (i.e., the platform positions) are selected as key positions, and the images captured when the train stops at each station are saved, yielding key-position images for every station on the line. Key-position image features are extracted from these images and the feature library is built, as shown in FIG. 3; the extracted features include global features and local features, and the process is as follows:

(1) Global feature extraction is performed on each acquired key-position image frame by a convolutional neural network; the process is shown in FIG. 4.

First, each key-position image frame is scaled to 224*224;

Then, one convolution operation is applied to reduce the feature-map size of each frame;

Next, seven inverted residual blocks are used for feature extraction;

After that, a 1280-dimensional vector is obtained through an average pooling layer; this vector represents the global feature of the key-position image.
(2) Since the resulting 1280-dimensional vector is unevenly distributed, L2 normalization is applied to obtain a more evenly distributed vector:

$$W_q = \frac{w_q}{\sqrt{\sum_{t=1}^{d} w_t^2}}, \quad q = 1, 2, \ldots, d \tag{1}$$

where $d = 1280$ is the dimension of the vector, $(w_1, w_2, \ldots, w_d)$ is the raw pooled vector, and $(W_1, W_2, \ldots, W_d)$ is the normalized vector representing the global feature of the key-position image.
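The structure just described — one initial convolution, a stack of inverted residual blocks, and average pooling down to a 1280-dimensional vector — matches a MobileNetV2-style backbone. The sketch below therefore uses the torchvision MobileNetV2 as a stand-in; this choice, and the use of ImageNet-pretrained weights, are assumptions rather than the patent's exact network.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Assumption: MobileNetV2 stands in for the patent's
# "convolution + inverted residual blocks + average pooling" network.
backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT).features
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),  # scale every frame to the same size
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def global_feature(pil_image):
    """Return an L2-normalized 1280-dimensional global descriptor, Eq. (1)."""
    x = preprocess(pil_image).unsqueeze(0)      # (1, 3, 224, 224)
    with torch.no_grad():
        fmap = backbone(x)                      # (1, 1280, 7, 7)
        vec = fmap.mean(dim=(2, 3)).squeeze(0)  # average pooling -> (1280,)
    return vec / vec.norm(p=2)                  # L2 normalization
```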
(3) Local feature extraction is performed on each acquired key-position image frame, comprising line features and the corresponding line descriptors.

The line feature extraction process is shown in FIG. 5:

First, each key-position image frame is smoothed. This embodiment applies a 5*5 Gaussian kernel to the key-position image shown in FIG. 5(a); the smoothed image is shown in FIG. 5(b);

Then, the gradient map of each frame is obtained. This embodiment uses the Sobel operator; the resulting gradient map is shown in FIG. 5(c);

Next, the anchor-point image of each frame is obtained. In FIG. 5(c), the maximum pixel value within each 3-pixel neighborhood in the horizontal and vertical directions is found, and the location of each maximum is an anchor point; the anchor-point map is shown in FIG. 5(d);

After that, the edge map of each frame is obtained by connecting all the anchor points of FIG. 5(d), as shown in FIG. 5(e);

Finally, line detection is performed: least-squares line fitting is applied to the edge map of FIG. 5(e), detecting the lines shown in FIG. 5(f).

The detected lines are then described. This embodiment applies a local band descriptor to each detected line and represents the local features of each key-position image frame by the set of all line descriptors in that frame.
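A simplified sketch of this line-detection pipeline is given below, assuming OpenCV and NumPy. It keeps the same stages (5*5 Gaussian smoothing, Sobel gradients, 3-neighborhood anchor extraction, least-squares line fitting) but replaces the anchor-linking step with connected components and omits the band descriptor, so it illustrates the idea rather than reproducing the patent's exact detector; the gradient threshold and minimum chain length are illustrative values.

```python
import cv2
import numpy as np

def detect_lines(gray, grad_thresh=30.0, min_pts=20):
    """Detect line segments in a grayscale image (simplified sketch)."""
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)  # 5*5 Gaussian kernel
    gx = cv2.Sobel(smooth, cv2.CV_32F, 1, 0)    # Sobel gradients
    gy = cv2.Sobel(smooth, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)                 # gradient map

    # Anchors: pixels whose gradient magnitude is maximal within their
    # 3-pixel horizontal and vertical neighborhoods.
    h_max = (mag >= np.roll(mag, 1, axis=1)) & (mag >= np.roll(mag, -1, axis=1))
    v_max = (mag >= np.roll(mag, 1, axis=0)) & (mag >= np.roll(mag, -1, axis=0))
    anchors = np.zeros(mag.shape, dtype=np.uint8)
    anchors[h_max & v_max & (mag > grad_thresh)] = 255

    # Group anchor pixels into edge chains (a stand-in for anchor linking)
    # and fit a line to each chain by least squares (cv2.DIST_L2).
    n_labels, labels = cv2.connectedComponents(anchors)
    lines = []
    for lbl in range(1, n_labels):
        pts = np.column_stack(np.where(labels == lbl))[:, ::-1].astype(np.float32)
        if len(pts) < min_pts:                  # skip short chains
            continue
        vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        lines.append((float(x0), float(y0), float(vx), float(vy)))
    return lines
```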
(4) The extracted global and local features of every key-position image frame are saved; the vector extracted at each station is associated with its station number to build the global feature library for all stations; and the local feature library is built from all extracted line descriptors using a bag-of-visual-words model.
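To make the bag-of-visual-words construction concrete, here is a minimal sketch using k-means clustering from scikit-learn; the vocabulary size and the use of k-means are assumptions, since the patent does not specify how the visual words are formed.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_bow_library(descriptor_sets, vocab_size=256):
    """Build a bag-of-visual-words model over line descriptors.

    descriptor_sets: one (n_lines_i, descriptor_dim) array per image.
    vocab_size is a hypothetical choice.
    """
    vocab = KMeans(n_clusters=vocab_size, n_init=10).fit(np.vstack(descriptor_sets))

    def histogram(desc):
        # Quantize each line descriptor to its nearest visual word and
        # L1-normalize the resulting word histogram.
        words = vocab.predict(desc)
        hist = np.bincount(words, minlength=vocab_size).astype(np.float64)
        return hist / max(hist.sum(), 1.0)

    return vocab, [histogram(d) for d in descriptor_sets]
```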
Step 4: Key-position detection

While the train runs on the line, the camera captures forward-facing image data in real time, and features — both global and local — are extracted from each frame. After extraction, the frame's global feature is compared for similarity against the global feature library and its local features against the local feature library, to judge whether the train has reached a key position.
(1) Global feature extraction is performed on each real-time image by a convolutional neural network: the real-time images are scaled to the same size, one convolution operation reduces the feature-map size, inverted residual blocks extract the features, and an average pooling layer yields a 1280-dimensional vector, to which L2 normalization is applied:

$$W'_q = \frac{w'_q}{\sqrt{\sum_{t=1}^{d} (w'_t)^2}}, \quad q = 1, 2, \ldots, d \tag{2}$$

where $d = 1280$ is the dimension of the vector and $(W'_1, W'_2, \ldots, W'_d)$ is the normalized vector representing the global feature of the real-time image. This global feature extraction proceeds in the same way as for the key-position images in Step 3, items (1) and (2), and is not repeated here.

(2) Local feature extraction is performed on each real-time image, comprising line features and the corresponding line descriptors: the image is smoothed; its gradient map is obtained with the Sobel operator; the maximum pixel value within each 3-pixel neighborhood in the horizontal and vertical directions defines an anchor point; connecting all anchor points yields the edge map; least-squares line fitting on the edge map detects the lines; each detected line is described with a local band descriptor; and the set of all line descriptors represents the local features of the real-time image. This proceeds in the same way as for the key-position images in Step 3, item (3), and is not repeated here.
(3) The similarity between the global feature of the real-time image and the global features of all images in the constructed global feature library is measured in turn:

$$D(i,j) = \left\| \mathbf{W}_i - \mathbf{W}_j \right\|_2 \tag{3}$$

where $i$ denotes the $i$-th real-time image, $\mathbf{W}_i$ its global feature, $\mathbf{W}_j$ the global feature of the $j$-th library image, $D(i,j)$ the similarity between the two, and $\|\cdot\|_2$ the L2 norm.

If the similarity $D(i,j)$ between the global feature of image $i$ and that of the $j$-th library image is less than the first set threshold, the train's current position may be a key position, and the local feature similarity must be measured as a further check.
(4) The similarity between the local feature $w_i$ of image $i$ and the local features of all images in the constructed local feature library is measured in turn:

$$s(w_i, w_j) = \frac{1}{N} \sum_{k=1}^{N} \left\| w_i^k - w_j^k \right\|_2 \tag{4}$$

where $s(w_i, w_j)$ is the similarity between the local feature $w_i$ of image $i$ and the local feature $w_j$ of the $j$-th library image, $w_i^k$ is the descriptor of the $k$-th line in image $i$, $w_j^k$ is the descriptor of the $k$-th line in the $j$-th library image, and $N$ is the number of lines in image $i$.

If the similarity $s(w_i, w_j)$ between the local feature of image $i$ and that of the $j$-th library image is less than the second set threshold, the train's current position is determined to be a key position.
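Putting the two tests together, the cascaded decision of items (3) and (4) can be sketched as follows; the threshold values, and the index-by-index descriptor comparison taken from Eq. (4), are illustrative assumptions.

```python
import numpy as np

def match_key_position(global_feat, line_descs, global_lib, local_lib,
                       tau_global=0.8, tau_local=0.5):
    """Cascaded key-position test: global L2 distance (Eq. 3) first,
    then mean line-descriptor distance (Eq. 4). Thresholds are illustrative."""
    for j, (g_j, descs_j) in enumerate(zip(global_lib, local_lib)):
        if np.linalg.norm(global_feat - g_j) >= tau_global:
            continue                  # global check failed for station j
        n = min(len(line_descs), len(descs_j))
        if n == 0:
            continue
        # Local check: average distance between corresponding descriptors.
        s = np.mean([np.linalg.norm(line_descs[k] - descs_j[k]) for k in range(n)])
        if s < tau_local:
            return j                  # station index: calibrate position here
    return None                       # no match: fall back to radar dead reckoning
```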
Step 5: Interval position estimation

If the train has not reached a key position, its position within the interval between key positions must be estimated. The interval position estimation proceeds as follows:
(1) Obtain the real-time forward-obstacle data detected by the millimeter-wave radar sensor. The millimeter-wave radar data are:

$$\{(x_p, y_p, v_p) \mid p = 1, 2, \ldots, n\} \tag{5}$$

where $x_p$ is the lateral distance of the $p$-th forward obstacle, $y_p$ its longitudinal distance, $v_p$ its velocity relative to the train, and $n$ the number of forward obstacles;
(2) Extract the real-time train speed. The distinct relative velocities $v_1, v_2, \ldots, v_m$ of all forward obstacles are tallied, the number of obstacles at each velocity is counted, and the velocity with the largest count is taken as the velocity of stationary obstacles relative to the train:

$$\mathrm{num}(v_s) = \max\{\mathrm{num}(v_1), \mathrm{num}(v_2), \ldots, \mathrm{num}(v_m)\} \tag{6}$$

where $v_s$ is the velocity of stationary obstacles relative to the train, $\mathrm{num}(v_s)$ is the number of forward obstacles at velocity $v_s$ (and likewise for $v_1, v_2, \ldots, v_m$), and $m$ is the number of distinct velocity values in the current radar frame, with $m \le n$. Since most obstacles along a train line are stationary, most targets measured by the radar are stationary objects, and the velocity the radar reports is the relative velocity between the train and each obstacle; the velocity value shared by the majority of obstacles is therefore the train's velocity relative to the stationary world. With the stationary obstacles' velocity relative to the train being $v_s$, the train speed is $-v_s$;
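A minimal sketch of this majority-velocity extraction follows; binning the relative velocities at an assumed 0.1 m/s resolution stands in for the radar's native velocity quantization.

```python
from collections import Counter

def train_speed_from_radar(radar_targets, resolution=0.1):
    """Estimate train speed from one radar frame.

    radar_targets: iterable of (x_p, y_p, v_p) tuples, v_p being each
    target's velocity relative to the train. The most frequent velocity
    bin is taken as the stationary-world velocity v_s per Eq. (6), and
    the train speed is -v_s. The 0.1 m/s bin width is an assumption.
    """
    bins = Counter(round(v / resolution) * resolution for _, _, v in radar_targets)
    v_s, _count = bins.most_common(1)[0]
    return -v_s
```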
(3) Estimate the train position. The train speed is integrated to estimate the train position:

$$P(t) = p_{r-1} + \int v(t)\,\mathrm{d}t \tag{7}$$

where $P(t)$ is the train's position at time $t$, $p_{r-1}$ ($r-1 \in \{1, \ldots, l\}$) is the last key position passed before time $t$, $l$ is the total number of key positions, $v(t)$ is the real-time train speed, and $\int v(t)\,\mathrm{d}t$ is the integral of the speed from the last key position up to time $t$.
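As a sketch, the integral in Eq. (7) can be approximated from discrete speed samples with the trapezoidal rule; the (time, speed) sample format is an assumption.

```python
def interval_position(p_prev, speed_samples):
    """Dead-reckon the train position between two key positions, Eq. (7).

    p_prev: line position (in meters) of the last detected key position.
    speed_samples: list of (t, v) pairs recorded since that key position,
    ordered by time; the integral is approximated trapezoidally.
    """
    dist = 0.0
    for (t0, v0), (t1, v1) in zip(speed_samples, speed_samples[1:]):
        dist += 0.5 * (v0 + v1) * (t1 - t0)  # trapezoidal integration step
    return p_prev + dist
```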
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if such modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to encompass them as well.