


Technical Field
The invention relates to a pedestrian target tracking method, and in particular to a pedestrian target tracking method based on multiple laser radars (lidars). The method is suitable for tracking multiple pedestrians in various public places across motion modes such as standing, walking, and jogging.
Background Art
In crowded public places such as subways, stations, and department stores, statistics such as pedestrian flow and movement direction help engineering designers lay out entrances and exits more sensibly and draw up sound emergency plans, and therefore carry great commercial value. Obtaining these statistics automatically and in real time requires accurate tracking of every pedestrian target in the scene. Current target tracking technology is based mainly on visual sensors such as cameras, tracking moving targets in the image with image processing algorithms. However, image-based tracking is sensitive to changes in ambient illumination and cannot meet the demands of all-weather operation. Moreover, a single camera covers a limited field of view; although multiple cameras can provide large-area coverage, cross-camera target tracking is difficult and still at the experimental stage. In contrast, a laser radar acquires the target's range directly and is unaffected by illumination changes, and its price has been falling as its applications spread.
Summary of the Invention
The purpose of the present invention is to provide a pedestrian target tracking method based on multiple laser radars that offers a large tracking range, high accuracy, and a fast algorithm, and that accommodates the various motion modes of pedestrians, including standing, walking, and jogging, detecting pedestrian positions effectively and tracking their trajectories in real time.
The technical solution adopted by the present invention comprises the following steps:
(1) Radar placement and system calibration;
(2) Data acquisition and fusion;
(3) Clustering of moving points and pairing of footstep points;
(4) Target tracking based on a human motion model and Kalman filtering.
Step (1), radar placement and system calibration: multiple radars are placed at different positions in the scene. The positions are chosen so that (a) the target is not easily occluded anywhere in the scene, and (b) the radars are distributed evenly along the scene's edge to raise the probability of detection. The radar scanning plane is 23-25 cm above the ground, a height at which the radar can in most cases detect and obtain range data for both of a pedestrian's feet. The radars are calibrated as follows: several rod-shaped objects are placed in the scene, and the coordinates of the i-th rod as seen by each of the N radars are denoted:
Taking the coordinate frame of the first radar as the reference, the rotation matrix Rn and translation Tn from the n-th radar's coordinates to the first radar's coordinates on the two-dimensional plane are:
which gives the transfer equation for the i-th point from the n-th radar's coordinates to the first radar's coordinates:
When i > 3 the rotation matrix and translation are uniquely determined, and the calibration converts data obtained in the different radar coordinate systems into a single radar's coordinates. Here x denotes a point's abscissa and y its ordinate.
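The coordinate expressions, the matrices Rn and Tn, and the transfer equation referred to above are not preserved in this copy of the text. A standard reconstruction consistent with the surrounding definitions is given below; the rotation-angle parameterisation of Rn is an assumption:

```latex
% Landmark i as seen by radar n (n = 1, ..., N):
p_i^{(n)} = \begin{pmatrix} x_i^{(n)} \\ y_i^{(n)} \end{pmatrix}

% Rotation and translation from radar-n to radar-1 coordinates:
R_n = \begin{pmatrix} \cos\theta_n & -\sin\theta_n \\ \sin\theta_n & \cos\theta_n \end{pmatrix},
\qquad
T_n = \begin{pmatrix} t_{x,n} \\ t_{y,n} \end{pmatrix}

% Transfer equation for point i:
p_i^{(1)} = R_n \, p_i^{(n)} + T_n
```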
Step (2), data acquisition and fusion: the data collected by the radars are transmitted through a network hub to one host computer, and the acquisition time is recorded; acquisition across the radars is triggered by a common external synchronisation signal. The background of each radar is first extracted automatically from more than 50 collected frames using a histogram method: the range readings in each beam direction are accumulated into a histogram, and the most frequent value is taken as that direction's background. The background is stored, and each range reading acquired at a given instant is compared with it; if the difference is below a set threshold, the point is treated as background and removed, and the remaining points are the foreground, i.e. moving, points of the scene. The foreground points obtained by the radars at the same instant are transformed with the calibration parameters into a single global coordinate system, yielding the fused set of moving data points for that instant.
Step (3), clustering of moving points and pairing of footstep points: among the fused moving points, those whose mutual distances are below a first distance threshold are clustered, and the centre of each cluster is taken as a candidate footstep position. The candidates are then matched in time and space to decide whether two of them belong to the same pedestrian. Two cases arise: (a) a match is searched for near the two footstep positions predicted by the Kalman filter for an existing target; if two candidate footstep points in the current frame each lie within a second distance threshold of the two predictions, they are taken as the observations of that existing target; (b) if a candidate finds no match among the existing targets, it belongs either to a new target or to noise; the current frame is then searched within a third distance threshold for another unmatched candidate, and if one is found, the two candidates are joined as the two footstep points of a new target.
Step (4), target tracking based on a human motion model and Kalman filtering: a motion model is established for the pedestrian, and Kalman filtering based on this model performs the tracking:
1) Establishing the pedestrian model
Studies show that a pedestrian's state is one of standing, walking, or jogging. While walking or jogging a pedestrian always has one supporting foot and one swinging foot, the two swapping roles periodically; the difference is that in walking the supporting foot never leaves the ground and stays stationary, whereas in jogging the supporting foot leaves the ground and travels some distance within the cycle. The three states are modelled as follows:
Standing mode: with no physiological force driving them, both feet have zero velocity and zero acceleration:
force = 0, v_{L,k} = 0, v_{R,k} = 0, a_{L,k} = 0, a_{R,k} = 0, (1)
Walking and jogging modes: the supporting foot has a constant velocity, and the swinging foot is driven by a physiological force of constant magnitude, positive in the first half cycle and negative in the second, which gives the moving foot its acceleration.
With T the time interval between two frames, during the first half cycle with the right foot as the supporting foot:
force = F, v_{L,k} = v_{L,k-1} + A·T, v_{R,k} = V, a_{L,k} = A, a_{R,k} = 0, (2)
During the second half cycle with the right foot as the supporting foot:
force = -F, v_{L,k} = v_{L,k-1} - A·T, v_{R,k} = V, a_{L,k} = -A, a_{R,k} = 0, (3)
During the first half cycle with the left foot as the supporting foot:
force = F, v_{L,k} = V, v_{R,k} = v_{R,k-1} + A·T, a_{L,k} = 0, a_{R,k} = A, (4)
During the second half cycle with the left foot as the supporting foot:
force = -F, v_{L,k} = V, v_{R,k} = v_{R,k-1} - A·T, a_{L,k} = 0, a_{R,k} = -A, (5)
Here force denotes the physiological driving force, F the force the pedestrian actually produces, v a foot's velocity, V the velocity produced under the force F, a acceleration, and A the acceleration produced under the force F; the subscripts L and R denote the left and right foot, and k the k-th time instant.
When V = 0 the pedestrian is in the walking state; when V > 0 the pedestrian is in the jogging state.
2) Kalman filter tracking
Taking the pedestrian's position P and velocity V as the state variables and the acceleration A as the system input, the state equation of the system is:
where S denotes the state variable, i the pedestrian index, and Ω the state-model error;
With Φ the state transition matrix and Ψ the matrix converting acceleration into velocity and displacement increments, and with T = 0.026 s at a radar scan rate of 37.5 Hz, we have:
The system observation equation is:
where M is the observation vector, H the matrix mapping S to M, and ζ the observation error;
Equations (10) and (11) are the state and observation equations of the Kalman filter. Standard Kalman filtering is applied to produce a continuous, real-time optimal estimate of the state vector, giving each pedestrian target's estimated position and variance at every sampling instant.
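Equations (10) and (11) themselves are not preserved in this copy. Written out from the definitions above (S the state, Φ, Ψ, H the matrices just defined, Ω and ζ the errors), they would take the standard linear form:

```latex
% State equation (10) for pedestrian i at time k:
S_{i,k} = \Phi \, S_{i,k-1} + \Psi \, A_{i,k-1} + \Omega_{k}

% Observation equation (11):
M_{i,k} = H \, S_{i,k} + \zeta_{k}
```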
3) Updating velocity and acceleration
One cycle is defined as the interval from the moment the swinging foot begins to swing until it becomes the supporting foot. Within a cycle, the velocities of both feet are updated automatically by the Kalman filter, while the swinging foot's acceleration is updated once per cycle: the average acceleration over the previous cycle is used as the acceleration input for the next, its direction always pointing from the swinging foot towards the supporting foot. With s the distance the swinging foot travels in a cycle and t the cycle duration, the acceleration update formula is:
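The update formula itself is not preserved here. One plausible reconstruction, assuming the swing foot starts and ends each cycle at rest and follows the symmetric accelerate/decelerate profile of equations (2)-(5), is:

```latex
% Accelerate at A for t/2, then decelerate at -A for t/2,
% starting and ending at rest, covering distance s:
s = 2 \cdot \tfrac{1}{2} A \left(\tfrac{t}{2}\right)^{2} = \frac{A t^{2}}{4}
\quad\Longrightarrow\quad
A = \frac{4 s}{t^{2}}
```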
The beneficial effects of the invention are as follows:
The invention uses laser radars as sensors, overcoming the susceptibility to illumination changes that afflicts image-sensor tracking methods and thereby greatly improving reliability. It offers a large tracking range, a fast algorithm, and a large number of simultaneously tracked targets; it accommodates the various motion modes of pedestrians, including standing, walking, and jogging; and it detects pedestrian positions and tracks their trajectories effectively in real time. The invention is applicable wherever the main moving targets in the scene are pedestrians.
Description of the Drawings
Figure 1 is a schematic diagram of the radar system layout.
Figure 2 is a schematic diagram of the pedestrian motion model.
Figure 3 is a flow chart of the pedestrian target tracking algorithm.
Detailed Description
The invention is further described below with reference to the accompanying drawings and an implementation example.
The embodiment of the present invention mainly comprises the following steps:
(1) Radar placement and system calibration;
(2) Data acquisition and fusion;
(3) Clustering of moving points and pairing of footstep points;
(4) Target tracking based on a human motion model and Kalman filtering.
Step (1), radar placement and system calibration: multiple radars are placed at different positions in the scene, as shown in Figure 1, and the data they collect are transmitted through a network hub to the same computer for processing. The positions are chosen so that (a) the target is not easily occluded anywhere in the scene, and (b) the radars are distributed evenly along the scene's edge to raise the probability of detection. The radar scanning plane is 23-25 cm above the ground, a height at which the radar can in most cases detect and obtain range data for both of a pedestrian's feet; any higher and the laser is easily blocked by objects pedestrians carry, any lower and footstep range data cannot be obtained reliably while running. The radars are calibrated as follows: several rod-shaped objects are placed in the scene, and the coordinates of the i-th rod as seen by each of the N radars are denoted:
Taking the coordinate frame of the first radar as the reference, the rotation matrix Rn and translation Tn from the n-th radar's coordinates to the first radar's coordinates on the two-dimensional plane are:
which gives the transfer equation for the i-th point from the n-th radar's coordinates to the first radar's coordinates:
When i > 3 the rotation matrix and translation are uniquely determined, and the calibration converts data obtained in the different radar coordinate systems into a single radar's coordinates. Here x denotes a point's abscissa and y its ordinate.
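As a concrete illustration of this calibration step, the following is a minimal sketch in Python of recovering the rotation angle and translation from matched landmark coordinates. The function names are illustrative and the closed-form least-squares angle solution is a standard result, not taken from the patent text:

```python
from math import atan2, cos, sin

def calibrate_radar(pts_n, pts_1):
    """Least-squares 2D rigid transform (rotation angle theta, translation t)
    mapping landmark coordinates seen by radar n onto those seen by the
    reference radar 1.  pts_n, pts_1: equal-length lists of (x, y) pairs."""
    m = len(pts_n)
    # Centroids of both point sets.
    cnx = sum(p[0] for p in pts_n) / m
    cny = sum(p[1] for p in pts_n) / m
    c1x = sum(p[0] for p in pts_1) / m
    c1y = sum(p[1] for p in pts_1) / m
    sxx = sxy = 0.0
    for (a, b), (c, d) in zip(pts_n, pts_1):
        a, b, c, d = a - cnx, b - cny, c - c1x, d - c1y
        sxx += a * c + b * d        # sum of dot products
        sxy += a * d - b * c        # sum of 2D cross products
    theta = atan2(sxy, sxx)         # optimal rotation angle
    tx = c1x - (cos(theta) * cnx - sin(theta) * cny)
    ty = c1y - (sin(theta) * cnx + cos(theta) * cny)
    return theta, (tx, ty)

def to_reference(theta, t, p):
    """Map one point from radar-n coordinates into the reference frame."""
    x, y = p
    return (cos(theta) * x - sin(theta) * y + t[0],
            sin(theta) * x + cos(theta) * y + t[1])
```

With noise-free landmarks the transform is recovered exactly; with noisy ones this is the least-squares fit, which is why the text asks for more than the minimum number of rods.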
Step (2), data acquisition and fusion: the data collected by the radars are transmitted through a network hub to one host computer and the acquisition time is recorded, as shown in Figure 1; acquisition across the radars is triggered by a common external synchronisation signal. At the start of the experiment, the background of each radar is first extracted automatically from more than 50 collected frames using a histogram method: the range readings in each beam direction are accumulated into a histogram, and the most frequent value is taken as that direction's background. The background is stored and the experiment continues; each range reading acquired thereafter is compared with the background, and if the difference is below a set threshold the point is treated as background and removed, the remaining points being the foreground, i.e. moving, points of the scene. The foreground points obtained by the radars at the same instant are transformed with the calibration parameters R and T into a single global coordinate system, yielding the fused set of moving data points for that instant.
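The histogram background extraction and foreground test just described can be sketched as follows. This is a simplified, hypothetical implementation: the bin width, threshold values, and per-beam data layout are assumptions, not specified by the patent:

```python
from collections import Counter

def extract_background(frames, bin_cm=5):
    """Per-beam background range via histogram mode.  `frames` is a list of
    scans; each scan is a list of ranges (cm), one per beam direction.  The
    most frequent (quantised) range in each direction becomes background."""
    n_beams = len(frames[0])
    background = []
    for beam in range(n_beams):
        hist = Counter(round(f[beam] / bin_cm) for f in frames)
        background.append(hist.most_common(1)[0][0] * bin_cm)
    return background

def foreground_points(scan, background, thresh_cm=10):
    """Beams whose range differs from the stored background by less than
    the threshold are removed as background; the rest are kept as
    foreground (moving) points, returned as (beam_index, range) pairs."""
    return [(i, r) for i, (r, bg) in enumerate(zip(scan, background))
            if abs(r - bg) >= thresh_cm]
```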
Step (3), clustering of moving points and pairing of footstep points: among the fused moving points, those whose mutual distances are below a first distance threshold are clustered, and the centre of each cluster is taken as a candidate footstep position; as shown in Figure 1, the radar usually returns more than one point on a foot, and clustering finds the centre point that represents the scanned footstep. The candidates are then matched in time and space: temporal matching searches the previous frame for candidates around each candidate in the current frame, while spatial matching searches the current frame itself for candidates that can be paired, in order to decide whether two candidate footstep points belong to the same pedestrian. Two cases arise: (a) a match is searched for near the two footstep positions predicted by the Kalman filter for an existing target; if two candidate footstep points in the current frame each lie within a second distance threshold of the two predictions, they are taken as the observations of that existing target; (b) if a candidate finds no match among the existing targets, it belongs either to a new target or to noise; the current frame is then searched within a third distance threshold for another unmatched candidate, and if one is found, the two candidates are joined as the two footstep points of a new target.
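The clustering by the first distance threshold and the pairing of leftover candidates by the third threshold can be sketched as below. This is a single-linkage clustering with a greedy pairing pass, offered as one plausible reading of the step; the function names and the greedy strategy are assumptions:

```python
from math import hypot

def cluster_centers(points, d1):
    """Single-linkage clustering of foreground points: a point closer than
    d1 to any member of a cluster joins it (bridging points merge clusters).
    Each cluster's centroid is one candidate footstep position."""
    clusters = []
    for p in points:
        hits = [c for c in clusters
                if any(hypot(p[0] - q[0], p[1] - q[1]) < d1 for q in c)]
        if hits:
            merged = hits[0]
            merged.append(p)
            for c in hits[1:]:          # p bridges clusters: merge them
                merged.extend(c)
                clusters.remove(c)
        else:
            clusters.append([p])
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters]

def pair_new_feet(candidates, d3):
    """Greedily pair leftover candidate footsteps closer than d3: each
    matched pair is hypothesised to be the two feet of one new pedestrian."""
    unused, pairs = list(candidates), []
    while unused:
        a = unused.pop(0)
        for b in unused:
            if hypot(a[0] - b[0], a[1] - b[1]) < d3:
                unused.remove(b)
                pairs.append((a, b))
                break
    return pairs
```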
Step (4), target tracking based on a human motion model and Kalman filtering: a motion model is established for the pedestrian, and Kalman filtering based on this model performs the tracking:
1) Establishing the pedestrian model
Studies show that a pedestrian's state is one of standing, walking, jogging, or running, where jogging denotes running at less than 3 m/s; running faster than 3 m/s is far more complex to model and is not considered here. While walking or jogging a pedestrian always has one supporting foot and one swinging foot, the two swapping roles periodically; the difference is that in walking the supporting foot never leaves the ground and stays stationary, whereas in jogging the supporting foot leaves the ground and travels some distance within the cycle. The three retained states, standing, walking, and jogging, are modelled as follows:
Standing mode: with no physiological force driving them, both feet have zero velocity and zero acceleration:
force = 0, v_{L,k} = 0, v_{R,k} = 0, a_{L,k} = 0, a_{R,k} = 0, (1)
Walking and jogging modes: the supporting foot has a constant velocity, and the swinging foot is driven by a physiological force of constant magnitude, positive in the first half cycle (the swinging foot approaching the supporting foot) and negative in the second half cycle (the swinging foot moving away from it), which gives the moving foot its acceleration.
With T the time interval between two frames, during the first half cycle with the right foot as the supporting foot:
force = F, v_{L,k} = v_{L,k-1} + A·T, v_{R,k} = V, a_{L,k} = A, a_{R,k} = 0, (2)
During the second half cycle with the right foot as the supporting foot:
force = -F, v_{L,k} = v_{L,k-1} - A·T, v_{R,k} = V, a_{L,k} = -A, a_{R,k} = 0, (3)
During the first half cycle with the left foot as the supporting foot:
force = F, v_{L,k} = V, v_{R,k} = v_{R,k-1} + A·T, a_{L,k} = 0, a_{R,k} = A, (4)
During the second half cycle with the left foot as the supporting foot:
force = -F, v_{L,k} = V, v_{R,k} = v_{R,k-1} - A·T, a_{L,k} = 0, a_{R,k} = -A, (5)
Here force denotes the physiological driving force, F the force the pedestrian actually produces, v a foot's velocity, V the velocity produced under the force F, a acceleration, and A the acceleration produced under the force F; the subscripts L and R denote the left and right foot, and k the k-th time instant.
When V = 0 the pedestrian is in the walking state; when V > 0 the pedestrian is in the jogging state. As shown in Figure 2, during motion the magnitude of the acceleration varies with a clear, constant period, its direction always points from the swinging foot towards the supporting foot, and the swinging foot's speed increases or decreases uniformly.
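The per-phase model of equations (1)-(5) can be written as a small state function. This is a sketch only; the function name, argument layout, and the sign of the velocity increment in the second half cycle (taken to match the acceleration ±A) are assumptions:

```python
def foot_dynamics(mode, phase, support, v_prev_swing, F, V, A, T):
    """Per-frame foot velocities and accelerations implied by equations
    (1)-(5).  mode: 'stand' or 'move'; phase: 'first' or 'second' half
    cycle; support: 'L' or 'R'; v_prev_swing: swing-foot velocity at the
    previous frame.  V = 0 corresponds to walking, V > 0 to jogging."""
    if mode == 'stand':                        # equation (1)
        return dict(force=0.0, vL=0.0, vR=0.0, aL=0.0, aR=0.0)
    sgn = 1.0 if phase == 'first' else -1.0    # force reverses mid-cycle
    v_swing = v_prev_swing + sgn * A * T       # equations (2)-(5)
    if support == 'R':                         # left foot swings
        return dict(force=sgn * F, vL=v_swing, vR=V, aL=sgn * A, aR=0.0)
    return dict(force=sgn * F, vL=V, vR=v_swing, aL=0.0, aR=sgn * A)
```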
2) Kalman filter tracking
Taking the pedestrian's position P and velocity V as the state variables and the acceleration A as the system input, the state equation of the system is:
where S denotes the state variable, i the pedestrian index, and Ω the state-model error;
With Φ the state transition matrix and Ψ the matrix converting acceleration into velocity and displacement increments, and with T = 0.026 s at a radar scan rate of 37.5 Hz, we have:
The system observation equation is:
where M is the observation vector, H the matrix mapping S to M, and ζ the observation error;
Equations (10) and (11) are the state and observation equations of the Kalman filter. Standard Kalman filtering is applied to produce a continuous, real-time optimal estimate of the state vector, giving each pedestrian target's estimated position and variance at every sampling instant.
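One predict/update cycle of the standard filter described above can be sketched as follows, assuming a per-foot state [px, py, vx, vy], position-only observations, and a constant-velocity-plus-acceleration-input model consistent with the Φ and Ψ defined in the text. The noise covariances Q and R are free parameters not given by the patent:

```python
import numpy as np

T = 1 / 37.5  # frame interval (s) at a 37.5 Hz scan rate

# State S = [px, py, vx, vy]; input A = [ax, ay]; observation M = [px, py]
Phi = np.array([[1, 0, T, 0],
                [0, 1, 0, T],
                [0, 0, 1, 0],
                [0, 0, 0, 1]], float)          # state transition
Psi = np.array([[T * T / 2, 0],
                [0, T * T / 2],
                [T, 0],
                [0, T]], float)                # acceleration -> state
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)            # observe position only

def kf_step(S, P, accel, z, Q, R):
    """One predict/update cycle of the standard Kalman filter for a single
    foot.  S: state estimate, P: covariance, accel: model input, z: the
    observed footstep position, or None when matching failed (in which
    case the prediction alone is returned, as the flow chart requires)."""
    S = Phi @ S + Psi @ accel                  # predict state
    P = Phi @ P @ Phi.T + Q                    # predict covariance
    if z is not None:
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        S = S + K @ (z - H @ S)                # correct with observation
        P = (np.eye(4) - K @ H) @ P
    return S, P
```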
3) Updating velocity and acceleration
One cycle is defined as the interval from the moment the swinging foot begins to swing until it becomes the supporting foot. Within a cycle, the velocities of both feet are updated automatically by the Kalman filter, while the swinging foot's acceleration is updated once per cycle: the average acceleration over the previous cycle is used as the acceleration input for the next, its direction always pointing from the swinging foot towards the supporting foot. With s the distance the swinging foot travels in a cycle and t the cycle duration, the acceleration update formula is:
As shown in Figure 3, after each frame of data has been fused and clustered, model prediction is performed for the existing tracked targets (held in program variables). First, the current frame is searched, within a threshold around each tracked target's predicted position, for footstep candidates to serve as observations. If the search succeeds, i.e. an observation for the target is found in this frame, filtering is performed and the filtered result is taken as the target's current position, with the velocity updated, and the acceleration too if a cycle has just ended. If the search fails, i.e. no footstep candidate is found as an observation, the model prediction is taken as the target's current position, the target is marked as lost, and a counter is incremented; once a target has been lost for 20 consecutive frames, tracking is deemed to have failed or the target to have left the scene, and its tracking stops. A footstep candidate not recognised as the observation of an existing target may be a new pedestrian or noise; since its authenticity cannot be decided immediately, it is followed with a simple tracker for a set number of frames. Because the radar scan rate is high, the distance moved between adjacent frames is small, so searching each frame within a suitable threshold around the candidate suffices to find its successor; if none is found, the candidate was most likely noise, and if tracking succeeds, the candidate is confirmed as a real pedestrian and handed over to the Kalman filter tracker. The above steps repeat until all tracked targets have left the scene and the experiment ends.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2011101071373A (CN102253391B) | 2011-04-19 | 2011-04-19 | A Pedestrian Target Tracking Method Based on Multi-LiDAR |
| Publication Number | Publication Date |
|---|---|
| CN102253391A | 2011-11-23 |
| CN102253391B | 2012-11-28 |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103325115A (en)* | 2013-06-14 | 2013-09-25 | 上海交通大学 | Pedestrian counting monitoring method based on head top camera |
| CN104063740A (en)* | 2013-03-21 | 2014-09-24 | 日电(中国)有限公司 | Office entity group identification system, method and apparatus |
| CN104915628A (en)* | 2014-03-14 | 2015-09-16 | 株式会社理光 | Pedestrian movement prediction method and device by carrying out scene modeling based on vehicle-mounted camera |
| CN105182283A (en)* | 2015-08-17 | 2015-12-23 | 周口师范学院 | Passive radar fixed target time domain direction-finding method |
| CN105652895A (en)* | 2014-11-12 | 2016-06-08 | 沈阳新松机器人自动化股份有限公司 | Mobile robot human body tracking system and tracking method based on laser sensor |
| CN105866782A (en)* | 2016-04-04 | 2016-08-17 | 上海大学 | Moving target detection system based on laser radar and moving target detection method thereof |
| US9436877B2 (en) | 2013-04-19 | 2016-09-06 | Polaris Sensor Technologies, Inc. | Pedestrian right of way monitoring and reporting system and method |
| CN106468772A (en)* | 2016-09-23 | 2017-03-01 | 南京特艺科技有限公司 | A kind of multistation radar human body tracing method based on range Doppler measurement |
| CN107664762A (en)* | 2016-07-29 | 2018-02-06 | 佳能株式会社 | Message processing device, the method and storage medium for detecting existing people around it |
| CN108345004A (en)* | 2018-02-09 | 2018-07-31 | 弗徕威智能机器人科技(上海)有限公司 | A kind of human body follower method of mobile robot |
| CN108596117A (en)* | 2018-04-28 | 2018-09-28 | 河北工业大学 | A kind of scene monitoring method based on scanning laser range finder array |
| CN109581302A (en)* | 2018-12-12 | 2019-04-05 | 北京润科通用技术有限公司 | A kind of trailer-mounted radar data tracking method and system |
| WO2019091448A1 (en)* | 2017-11-10 | 2019-05-16 | 长城汽车股份有限公司 | Method and device for tracking movable target |
| CN109870692A (en)* | 2019-04-16 | 2019-06-11 | 浙江力邦合信智能制动系统股份有限公司 | A kind of radar viewing system and data processing method |
| WO2020015748A1 (en)* | 2018-07-20 | 2020-01-23 | Suteng Innovation Technology Co., Ltd. | Systems and methods for lidar detection |
| CN111048208A (en)* | 2019-12-28 | 2020-04-21 | 哈尔滨工业大学(威海) | A LiDAR-based walking health detection method for the elderly living alone indoors |
| CN111476822A (en)* | 2020-04-08 | 2020-07-31 | 浙江大学 | Laser radar target detection and motion tracking method based on scene flow |
| CN112084372A (en)* | 2020-09-14 | 2020-12-15 | 北京数衍科技有限公司 | Pedestrian track updating method and device |
| CN112099620A (en)* | 2020-08-11 | 2020-12-18 | 中国人民解放军军事科学院国防科技创新研究院 | Combat collaboration system and method for soldier and team combat |
| CN112684430A (en)* | 2020-12-23 | 2021-04-20 | 哈尔滨工业大学(威海) | Indoor old person walking health detection method and system, storage medium and terminal |
| CN112926514A (en)* | 2021-03-26 | 2021-06-08 | 哈尔滨工业大学(威海) | Multi-target detection and tracking method, system, storage medium and application |
| CN114185059A (en)* | 2021-11-08 | 2022-03-15 | 哈尔滨工业大学(威海) | Multi-radar fusion-based multi-person tracking system, method, medium and terminal |
| CN114545437A (en)* | 2022-01-27 | 2022-05-27 | 华南师范大学 | Human intrusion detection method and security system based on lidar |
| CN114609642A (en)* | 2020-12-08 | 2022-06-10 | 山东新松工业软件研究院股份有限公司 | Target tracking method and system based on multi-line laser |
| CN115761959A (en)* | 2022-11-01 | 2023-03-07 | 中船重工(武汉)凌久高科有限公司 | An intelligent anti-tailgating method for access control system based on lidar |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100104199A1 (en)* | 2008-04-24 | 2010-04-29 | Gm Global Technology Operations, Inc. | Method for detecting a clear path of travel for a vehicle enhanced by object detection |
| CN101872068A (en)* | 2009-04-02 | 2010-10-27 | 通用汽车环球科技运作公司 | Daytime pedestrian detection on a full-windscreen head-up display |
| Title |
|---|
| Yu Jinxia et al.: "Moving Target Detection and Tracking for Mobile Robots Based on Laser Radar", Chinese Journal of Electron Devices (《电子器件》)* |
| Wang Rongben et al.: "Recent Advances in Intelligent Vehicle Safety-Assisted Driving Technology", Journal of Highway and Transportation Research and Development (《公路交通科技》)* |
| Xu Yanwu et al.: "New Advances in Pedestrian Detection Systems and Prospects for Key Technologies", Acta Electronica Sinica (《电子学报》)* |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104063740A (en)* | 2013-03-21 | 2014-09-24 | 日电(中国)有限公司 | Office entity group identification system, method and apparatus |
| CN104063740B (en)* | 2013-03-21 | 2017-11-17 | 日电(中国)有限公司 | Office's group of entities identifying system, method and device |
| US9436877B2 (en) | 2013-04-19 | 2016-09-06 | Polaris Sensor Technologies, Inc. | Pedestrian right of way monitoring and reporting system and method |
| CN103325115B (en)* | 2013-06-14 | 2016-08-10 | 上海交通大学 | Pedestrian counting monitoring method based on overhead camera |
| CN103325115A (en)* | 2013-06-14 | 2013-09-25 | 上海交通大学 | Pedestrian counting monitoring method based on overhead camera |
| CN104915628A (en)* | 2014-03-14 | 2015-09-16 | 株式会社理光 | Pedestrian movement prediction method and device by carrying out scene modeling based on vehicle-mounted camera |
| CN104915628B (en)* | 2014-03-14 | 2018-09-25 | 株式会社理光 | The method and apparatus that scene modeling based on in-vehicle camera carries out movement pedestrian's prediction |
| CN105652895A (en)* | 2014-11-12 | 2016-06-08 | 沈阳新松机器人自动化股份有限公司 | Mobile robot human body tracking system and tracking method based on laser sensor |
| CN105182283A (en)* | 2015-08-17 | 2015-12-23 | 周口师范学院 | Passive radar fixed target time domain direction-finding method |
| CN105182283B (en)* | 2015-08-17 | 2017-08-25 | 周口师范学院 | Passive radar fixed-target time-domain direction-finding method |
| CN105866782B (en)* | 2016-04-04 | 2018-08-17 | 上海大学 | A kind of moving object detection system and method based on laser radar |
| CN105866782A (en)* | 2016-04-04 | 2016-08-17 | 上海大学 | Moving target detection system based on laser radar and moving target detection method thereof |
| CN107664762B (en)* | 2016-07-29 | 2022-05-10 | 佳能株式会社 | Information processing apparatus, method of detecting presence of person around the same, and storage medium |
| CN107664762A (en)* | 2016-07-29 | 2018-02-06 | 佳能株式会社 | Information processing apparatus, method for detecting people present around it, and storage medium |
| CN106468772B (en)* | 2016-09-23 | 2018-10-23 | 南京特艺科技有限公司 | Multi-station radar human body tracking method based on range-Doppler measurement |
| CN106468772A (en)* | 2016-09-23 | 2017-03-01 | 南京特艺科技有限公司 | Multi-station radar human body tracking method based on range-Doppler measurement |
| WO2019091448A1 (en)* | 2017-11-10 | 2019-05-16 | 长城汽车股份有限公司 | Method and device for tracking movable target |
| US12181560B2 (en) | 2017-11-10 | 2024-12-31 | Great Wall Motor Company Limited | Method and device for tracking a movable target |
| CN108345004A (en)* | 2018-02-09 | 2018-07-31 | 弗徕威智能机器人科技(上海)有限公司 | Human body following method for a mobile robot |
| CN108596117A (en)* | 2018-04-28 | 2018-09-28 | 河北工业大学 | A kind of scene monitoring method based on scanning laser range finder array |
| WO2020015748A1 (en)* | 2018-07-20 | 2020-01-23 | Suteng Innovation Technology Co., Ltd. | Systems and methods for lidar detection |
| US11768272B2 (en)* | 2018-07-20 | 2023-09-26 | Suteng Innovation Technology Co., Ltd. | Systems and methods for LiDAR detection |
| US20200355796A1 (en)* | 2018-07-20 | 2020-11-12 | Suteng Innovation Technology Co., Ltd. | Systems and methods for lidar detection |
| CN109581302A (en)* | 2018-12-12 | 2019-04-05 | 北京润科通用技术有限公司 | Vehicle-mounted radar data tracking method and system |
| CN109870692A (en)* | 2019-04-16 | 2019-06-11 | 浙江力邦合信智能制动系统股份有限公司 | A kind of radar viewing system and data processing method |
| CN109870692B (en)* | 2019-04-16 | 2023-10-20 | 浙江力邦合信智能制动系统股份有限公司 | Radar surround-view system and data processing method |
| CN111048208A (en)* | 2019-12-28 | 2020-04-21 | 哈尔滨工业大学(威海) | A LiDAR-based walking health detection method for the elderly living alone indoors |
| CN111476822A (en)* | 2020-04-08 | 2020-07-31 | 浙江大学 | Laser radar target detection and motion tracking method based on scene flow |
| CN112099620A (en)* | 2020-08-11 | 2020-12-18 | 中国人民解放军军事科学院国防科技创新研究院 | Combat collaboration system and method for soldier and team combat |
| CN112084372A (en)* | 2020-09-14 | 2020-12-15 | 北京数衍科技有限公司 | Pedestrian track updating method and device |
| CN112084372B (en)* | 2020-09-14 | 2024-01-26 | 北京数衍科技有限公司 | Pedestrian track updating method and device |
| CN114609642A (en)* | 2020-12-08 | 2022-06-10 | 山东新松工业软件研究院股份有限公司 | Target tracking method and system based on multi-line laser |
| CN112684430A (en)* | 2020-12-23 | 2021-04-20 | 哈尔滨工业大学(威海) | Indoor old person walking health detection method and system, storage medium and terminal |
| CN112926514A (en)* | 2021-03-26 | 2021-06-08 | 哈尔滨工业大学(威海) | Multi-target detection and tracking method, system, storage medium and application |
| CN114185059A (en)* | 2021-11-08 | 2022-03-15 | 哈尔滨工业大学(威海) | Multi-radar fusion-based multi-person tracking system, method, medium and terminal |
| CN114545437A (en)* | 2022-01-27 | 2022-05-27 | 华南师范大学 | Human intrusion detection method and security system based on lidar |
| CN115761959A (en)* | 2022-11-01 | 2023-03-07 | 中船重工(武汉)凌久高科有限公司 | An intelligent anti-tailgating method for access control system based on lidar |
| Publication number | Publication date |
|---|---|
| CN102253391B (en) | 2012-11-28 |
| Publication | Publication Date | Title |
|---|---|---|
| CN102253391B (en) | 2012-11-28 | A Pedestrian Target Tracking Method Based on Multi-LiDAR |
| US20240303924A1 (en) | | Visual-inertial positional awareness for autonomous and non-autonomous tracking |
| CN110335337B (en) | | An end-to-end semi-supervised generative adversarial network-based approach to visual odometry |
| US10366508B1 (en) | | Visual-inertial positional awareness for autonomous and non-autonomous device |
| US10410328B1 (en) | | Visual-inertial positional awareness for autonomous and non-autonomous device |
| CN115937810A (en) | | A sensor fusion method based on binocular camera guidance |
| CN111781608B (en) | | Moving target detection method and system based on FMCW laser radar |
| CN109934848B (en) | | A method for precise positioning of moving objects based on deep learning |
| CN112025729B (en) | | Multifunctional intelligent medical service robot system based on ROS |
| KR102053802B1 (en) | | Method of locating a sensor and related apparatus |
| CN112634451A (en) | | Outdoor large-scene three-dimensional mapping method integrating multiple sensors |
| CN103279791B (en) | | Pedestrian computing method based on multiple features |
| CN109684921A (en) | | Road edge identification and tracking method based on three-dimensional laser radar |
| CN105760846B (en) | | Target detection and localization method and system based on depth data |
| WO2016205951A1 (en) | | A system and a method for tracking mobile objects using cameras and tag devices |
| CN116352722A (en) | | Multi-sensor fused mine inspection rescue robot and control method thereof |
| CN112541938A (en) | | Pedestrian speed measuring method, system, medium and computing device |
| CN112833892A (en) | | Semantic mapping method based on track alignment |
| CN106447698A (en) | | Multi-pedestrian tracking method and system based on distance sensor |
| Meissner et al. | | Real-time detection and tracking of pedestrians at intersections using a network of laserscanners |
| CN106408593A (en) | | Video-based vehicle tracking method and device |
| Xia et al. | | [Retracted] Gesture Tracking and Recognition Algorithm for Dynamic Human Motion Using Multimodal Deep Learning |
| KR102824305B1 (en) | | Method and System for change detection and automatic updating of road marking in HD map through IPM image and HD map fitting |
| CN109961476 (en) | | Underground parking lot positioning method based on vision |
| CN109194927A (en) | | Vehicle-mounted target-tracking pan-tilt camera apparatus based on deep learning |
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | C14 | Grant of patent or utility model | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2012-11-28; Termination date: 2018-04-19 |
| | CF01 | Termination of patent right due to non-payment of annual fee | |