TECHNICAL FIELD
The invention belongs to the technical field of image processing and, more particularly, relates to a target detection and tracking method in an occlusion environment.
BACKGROUND
Infrared moving-target tracking is widely used in military and civilian fields such as intelligent surveillance and security. Real tracking scenes are disturbed by many factors; among these, occlusion caused by a complex environment easily introduces large tracking errors and can lead to loss of the tracked target.
For the problem of target occlusion, the usual remedy is to predict the position of the target from the historical state of its motion trajectory after the target is occluded; when the target gradually emerges from the occlusion, the algorithm returns to the normal tracking process. For example, trajectory prediction based on Kalman filtering estimates the motion state of the target at the next moment from the known position, velocity and acceleration of the moving target, relying on the inertia of the motion. The particle filter algorithm, based on Bayesian theory, converts the tracking problem into an optimization problem and gradually approaches the maximum of the posterior probability by iterating over a large number of discrete, randomly sampled particles; when the target is occluded, it can estimate the target position from the unoccluded particles. However, trajectory prediction is strongly affected by the reliability of the target's historical information, and the prediction quality at the moment of occlusion determines whether the target can be located accurately in subsequent frames. On the other hand, under long-term partial occlusion, relying on a single target feature easily produces large errors and loses the target; the target model depends on re-detection of the target, and long occlusion means the target template is not updated in time, so that when the target gradually reappears the template no longer matches it and tracking fails.
SUMMARY OF THE INVENTION
In view of the defects of the prior art, the purpose of the present invention is to provide a target detection and tracking method in an occluded environment, aiming to solve the technical problem in the prior art that tracking fails because the tracking template is contaminated when the target is occluded by interfering objects in complex scenes.
The present invention provides a target detection and tracking method in an occlusion environment, comprising:
(1) acquiring a tracking video sequence, and extracting the foreground region of the current frame image of the tracking video sequence within a search region S;
(2) judging whether the current frame image is the first frame image; if not, proceeding to step (3); if so, obtaining from the first frame image the filter detection coefficient α of the target and the grayscales of the foreground region O and the background region B = S-O within the search region S, and proceeding to step (5);
(3) determining a first position of the target by performing correlation-filter target detection on the current frame image with the filter detection coefficient α; computing, from the grayscales of the foreground region O and the background region B = S-O, the probability that each pixel in the search region S belongs to the target, thereby determining a second position of the target; and obtaining the final position of the target from the first position and the second position;
(4) judging whether to update the filter detection coefficient α; if so, updating the filter detection coefficient α and simultaneously updating the grayscales of the foreground region O and the background region B = S-O, then proceeding to step (5); if not, updating only the grayscales of the foreground region O and the background region B = S-O and proceeding to step (5);
(5) judging whether the current frame image is the last frame image; if so, ending; if not, returning to step (1).
Further, extracting the foreground region of the current frame image in step (1) specifically comprises:
(1.1) obtaining the optical flow magnitude of each pixel p(i, j) from the two adjacent frames In and In-1 (for a flow vector (u, v), the magnitude is √(u² + v²));
(1.2) forming the optical flow matrix V1 from the optical flow magnitudes of all pixels in the search region S;
(1.3) binarizing the optical flow matrix V1 to obtain an intermediate matrix V2, wherein foreground points take the value 1 and background points the value 0;
(1.4) applying a morphological opening to the intermediate matrix V2 to remove noise points and a closing to fill holes, obtaining the binary matrix V3: V3 = (V2 ∘ Se) · Se, wherein ∘ denotes the opening operation, · denotes the closing operation, and Se is the structuring-element matrix;
(1.5) performing superpixel segmentation on the search region S of the current frame In to obtain a superpixel set U = {uk}, k = 1, 2, …, n, where n is the number of superpixels;
(1.6) counting, from the binary matrix V3, the total number of pixels Nsum and the number of foreground pixels Nk contained in each superpixel uk; if the ratio Nk/Nsum is greater than δτ (δτ = 0.4), uk is a foreground superpixel, and the set of all foreground superpixels constitutes the foreground region O.
Further, the search region S is:
S = {(i, j) | x0 - ρ*w0 ≤ i ≤ x0 + ρ*w0, y0 - ρ*h0 ≤ j ≤ y0 + ρ*h0},
wherein (i, j) are the coordinates of a pixel within the search region S, (x0, y0) is the target center position obtained from the previous frame, w0 is the target-box width in the previous frame, h0 is the target-box height in the previous frame, and ρ is the window expansion coefficient, whose value depends on the target's speed, platform stability and the like, and is generally 1.6.
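The following minimal Python sketch (hypothetical helper name; clipping against the image boundary is omitted) illustrates how the search region S can be computed:

```python
def search_region(prev_pos, prev_size, rho=1.6):
    """Pixel-coordinate bounds of the search region S.

    prev_pos  -- (x0, y0): target center found in the previous frame
    prev_size -- (w0, h0): target-box width/height in the previous frame
    rho       -- window expansion coefficient (generally 1.6)
    """
    (x0, y0), (w0, h0) = prev_pos, prev_size
    i_min, i_max = int(x0 - rho * w0), int(x0 + rho * w0)
    j_min, j_max = int(y0 - rho * h0), int(y0 + rho * h0)
    return i_min, i_max, j_min, j_max
```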
Further, obtaining the filter detection coefficient α in step (2) specifically comprises:
calculating the histogram-of-oriented-gradients (FHOG) features within the search region S, and cyclically shifting the FHOG feature matrix in the Fourier domain to obtain the training samples X;
calculating the ridge regression function in kernel space, i.e. minimizing over w the objective ||Φ(X)w - y||² + λ||w||²,
wherein Φ(X) is a nonlinear mapping function that maps the samples X into a kernel space where they are linearly separable; y is the sample label; and λ is the regularization coefficient, given the value 1e-3;
solving the ridge regression function to obtain the representation of the filter detection coefficient α in the Fourier domain, α̂ = ŷ/(k̂XX + λ),
wherein the symbol ^ denotes the Fourier-domain form of an expression and kXX = Φ(X)Φ(X)T is the kernel matrix of the kernel space; α̂ is transformed back to the time domain to obtain the filter detection coefficient α.
Further, obtaining the final position of the target from the first position and the second position in step (3) specifically comprises:
(3.1) obtaining the probability that each pixel in the search region S is a target pixel, and obtaining the correlation-filter response matrix f within the search region S;
(3.2) forming the probability matrix P from the probabilities Pij that each pixel in the search region S is a target pixel;
(3.3) fusing the probability matrix P with the correlation-filter response matrix f to obtain the final response;
(3.4) obtaining the final position of the target from the response.
Further, the final position of the target is the location of the maximum of the final response, (x, y) = argmax(i,j) Re(i, j), wherein Re is the final response, Re = (1-θ)*f + θ*P, and θ is the fusion adjustment coefficient, generally 0.35.
Further, updating the filter detection coefficient α in step (4) specifically comprises:
obtaining the peak-to-sidelobe ratio PSR of the maximum response value fmax from the correlation-filter response matrix f within the search region S;
given a threshold γ = 4.8: when the peak-to-sidelobe ratio PSR is greater than γ, obtaining the filter detection coefficient α_new within the search region S of the current frame In, and updating the filter detection coefficient α according to α = (1-inf)*α + inf*α_new;
wherein inf is the learning rate for updating the detection-template coefficients, generally 0.02.
Further, the peak-to-sidelobe ratio is PSR = (fmax - μR)/σR, wherein fmax is the maximum response value in the correlation-filter response matrix; the 5×5 neighborhood of the maximum-response point is taken as the response window T, and μR and σR are, respectively, the mean and standard deviation of the response values over the region R = S-T outside the window T.
Further, updating the grayscales of the foreground region O and the background region B = S-O in step (4) specifically comprises:
counting the grayscale histogram HO_new of the foreground region O and the grayscale histogram HB_new of the background region B;
updating the grayscales of the foreground and background regions according to HO = (1-β)*HO + β*HO_new and HB = (1-β)*HB + β*HB_new, respectively;
wherein β is the histogram update learning rate, generally 0.04.
Compared with the prior art, which by default treats the entire target box as foreground and extracts features from it, the present invention uses an optical flow algorithm and superpixel segmentation to fully extract the foreground region, extracting the target features more precisely and minimizing the interference of background points when the grayscale histogram of the foreground region is computed. Furthermore, when updating the target tracking template, the present invention updates the correlation-filter template and the grayscale histogram independently: the update frequency of the correlation-filter template is controlled by the peak-to-sidelobe ratio, while the grayscale histogram is updated every frame. This prevents the correlation-filter template from being contaminated by the background while alleviating the poor template matching that would result from halting updates, improves the accuracy of the tracking algorithm, and reduces the limitations that the complexity and uncertainty of real scenes impose on target tracking.
DESCRIPTION OF THE DRAWINGS
Fig. 1 is a schematic flowchart of a target detection and tracking method in an occluded environment provided by an example of the present invention;
Fig. 2 shows the motion foreground extracted by the optical flow algorithm in an example of the present invention, wherein (a) is the original image, (b) is the foreground region based on optical flow detection, and (c) is the comparison between the foreground region and the original image;
Fig. 3 shows the motion foreground extracted after the improvement of the optical flow algorithm in an example of the present invention, wherein (a) is the original image, (b) is the foreground region based on optical flow and superpixel segmentation, and (c) is the comparison between the foreground region and the original image;
Fig. 4 shows a comparative experiment between the example of the present invention and the KCF tracking method under continuous long-term occlusion, wherein (a) shows the target before entering the occlusion, (b) the target entering the occlusion, (c) the target under continuous occlusion, and (d) the target leaving the occlusion;
Fig. 5 shows the experimental comparison between the example of the present invention and the KCF tracking method under severe occlusion, wherein (a) shows the target before entering the occlusion, (b) the target entering the occlusion, (c) the target beginning to leave the occlusion, and (d) the target completely leaving the occlusion;
Fig. 6 shows the tracking success rate of the proposed algorithm as a function of the fusion adjustment coefficient;
Fig. 7 shows tracking-performance comparison curves between the proposed algorithm and other classical algorithms in occlusion scenes, wherein (a) is the tracking precision curve and (b) is the tracking success-rate curve.
DETAILED DESCRIPTION
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it.
The invention provides an anti-occlusion target tracking method for infrared images, which uses grayscale features together with histogram-of-oriented-gradients features to jointly determine the target location, and controls the update frequency of the detection template, thereby solving the tracking problem under partial and severe occlusion and improving the accuracy and robustness of target tracking.
The target detection and tracking method provided by the present invention is applicable to target tracking in infrared images as well as in visible-light images; for visible-light images, the statistics of the grayscale histogram can simply be replaced by the statistics of a color histogram.
In the embodiment of the present invention, an infrared image is taken as an example and detailed as follows.
To achieve the above purpose, the present invention provides a target detection and tracking method, comprising:
S1: inputting the current frame In of the tracking video sequence, and obtaining the search region S = {(i, j) | x0 - ρ*w0 ≤ i ≤ x0 + ρ*w0, y0 - ρ*h0 ≤ j ≤ y0 + ρ*h0} from the target center position pos(x0, y0) and the target size (w0, h0) obtained in the previous frame In-1, where ρ is the window expansion coefficient, whose value depends on the target's speed, platform stability and the like, and is generally 1.6; extracting the foreground region O and the background region B within the region S;
S2: if the image In is the first frame I1 of the tracking video sequence, calculating the correlation-filter detection template coefficient α within the search region S according to the KCF (High-Speed Tracking with Kernelized Correlation Filters) algorithm, counting the grayscale histograms HO and HB of the foreground region O and the background region B = S-O within the region S, and returning to step S1;
S3: based on the target location in image In-1, performing correlation-filter detection in the search region S of image In to obtain the correlation-filter response matrix f;
S4: calculating, according to the Bayes formula, the probability Pij that each pixel p(i, j) in the search region S is a target pixel, the Pij of all pixels in S forming the probability matrix P, and fusing the probability matrix P with the response matrix f to determine the target position;
S5: updating the grayscale histograms HO and HB, calculating the peak-to-sidelobe ratio PSR of the maximum response value in the response matrix f, and setting a threshold γ to control the update frequency of the filter coefficient α.
Because the present invention makes full use of the grayscale information of the unoccluded part of the target and effectively controls the update frequency of the target template during tracking, it can keep tracking the target stably even under long-term partial occlusion.
Step S1 comprises:
S1.1: for the input image In, computing the optical flow between the two adjacent frames In and In-1 with the Lucas-Kanade method to obtain the optical flow magnitude of each pixel p(i, j);
S1.2: computing the optical flow magnitude of every pixel in the search region to obtain the optical flow matrix V1;
S1.3: binarizing the optical flow matrix V1 with the Otsu algorithm to obtain the intermediate matrix V2, in which foreground points take the value 1 and background points the value 0;
S1.4: applying a morphological opening to the intermediate matrix V2 to remove noise points and a closing to fill holes, obtaining the final binary matrix V3:
V3 = (V2 ∘ Se) · Se
wherein ∘ is the opening operation, · is the closing operation, and Se is the structuring-element matrix;
S1.5: applying the SLIC algorithm to perform superpixel segmentation on the search region S of the input image In, obtaining the superpixel set U = {uk}, k = 1, 2, …, n, where n is the number of superpixels;
S1.6: counting, from the binary matrix V3, the total number of pixels Nsum and the number of foreground pixels Nk contained in each superpixel uk; if the ratio Nk/Nsum is greater than δτ (δτ = 0.4), uk is a foreground superpixel, and the set of all foreground superpixels constitutes the foreground region O (see the code sketch after the following paragraph).
When extracting target features, tracking algorithms in the prior art by default treat the whole target box as foreground and extract features from it. In general, however, the target box contains a large amount of background in addition to the foreground, so the extracted features cannot describe the target accurately; in particular, when the grayscale statistics of the foreground region are computed, a large number of mixed-in background points distort the foreground grayscale histogram, affecting the computation of the probability Pij in the subsequent steps and hence the localization of the target. The present invention uses the optical flow algorithm and superpixel segmentation to fully extract the foreground region and effectively rejects background points, minimizing their interference in the grayscale-histogram statistics of the foreground region, so that the grayscale features describe the target accurately and the precision of target localization is improved.
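The pipeline of steps S1.1-S1.6 can be sketched as follows. This is a non-authoritative illustration: OpenCV's dense Farneback flow stands in for the Lucas-Kanade implementation, a 3×3 structuring element is assumed (the value of the patent's Se matrix is not reproduced in this text), and scikit-image's SLIC replaces the clustering loop of step S1.5:

```python
import cv2
import numpy as np
from skimage.segmentation import slic

def extract_foreground(prev_gray, cur_gray, n_superpixels=200, delta_tau=0.4):
    """Foreground sketch: flow magnitude -> Otsu -> morphology -> superpixel vote."""
    # S1.1-S1.2: dense optical flow magnitude (Farneback stands in for Lucas-Kanade)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    v1 = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)

    # S1.3: Otsu binarization (foreground = 1, background = 0)
    mag_u8 = cv2.normalize(v1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, v2 = cv2.threshold(mag_u8, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # S1.4: opening removes noise points, closing fills holes (3x3 Se assumed)
    se = np.ones((3, 3), np.uint8)
    v3 = cv2.morphologyEx(v2, cv2.MORPH_OPEN, se)
    v3 = cv2.morphologyEx(v3, cv2.MORPH_CLOSE, se)

    # S1.5: SLIC superpixels on the grayscale region
    # (use multichannel=False instead of channel_axis on older scikit-image)
    labels = slic(cur_gray, n_segments=n_superpixels, compactness=0.1,
                  channel_axis=None)

    # S1.6: a superpixel is foreground if > delta_tau of its pixels are flow-foreground
    mask = np.zeros_like(v3)
    for k in np.unique(labels):
        sp = labels == k
        if v3[sp].sum() / sp.sum() > delta_tau:
            mask[sp] = 1
    return mask  # binary foreground region O
```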
Further, step S4 comprises:
S4.1: calculating, according to the Bayes formula, the probability that each pixel in the region S is a target-region pixel:
Pij = HO(bij) / (HO(bij) + HB(bij))
wherein p(i, j) ∈ O denotes a target pixel and p(i, j) ∈ B a background pixel; bij = (p(i, j) - gmin)/(gmax - gmin)*bin, where bin is the number of grayscale bins of the histogram, taken as 32; gmin is the minimum and gmax the maximum grayscale within the region S;
S4.2: forming the probability matrix P from the probabilities Pij of step S4.1 that each pixel in the search region S is a target pixel, and fusing the probability matrix P with the correlation-filter response matrix f of step S3 to obtain the final response Re:
Re = (1-θ)*f + θ*P
wherein θ is the fusion adjustment coefficient, generally 0.35; the target center position is the coordinate of the maximum of Re, (x, y) = argmax(i,j) Re(i, j).
Further, step S5 comprises:
S5.1: calculating, from the correlation-filter response matrix f of step S3, the peak-to-sidelobe ratio of the maximum response value fmax in the region S:
PSR = (fmax - μR)/σR
wherein fmax is the maximum response value in the correlation-filter response matrix; the 5×5 neighborhood of the maximum-response point is taken as the response window T, and μR and σR are the mean and standard deviation of the response values over the region R = S-T outside the response window;
S5.2: given the threshold γ, generally 4.8: if the PSR value is greater than γ, calculating the filter detection coefficient α_new within the region S of the current frame In according to the KCF algorithm, and updating the template coefficient α as follows:
α = (1-inf)*α + inf*α_new
wherein inf is the learning rate for updating the detection-template coefficients, generally 0.02;
S5.3: from the foreground region O extracted in step S1, counting the foreground grayscale histogram HO_new and the background grayscale histogram HB_new;
S5.4: updating the grayscale histogram HO of the foreground region and the grayscale histogram HB of the background region respectively as follows:
HO = (1-β)*HO + β*HO_new
HB = (1-β)*HB + β*HB_new
wherein β is the histogram update learning rate, generally 0.04.
In the prior art, when the target is occluded for a long time, the update frequency of the target template is hard to set. The target template depends on re-detection of the target: updating too frequently lets the template be contaminated by the background so that the target can no longer be detected accurately, while updating too slowly leaves the template stale, so that when the target gradually emerges from the occlusion the template no longer matches it and tracking fails. The update strategy of the present invention is to update the correlation-filter template and the grayscale histogram independently. When updating the correlation-filter template, the PSR value is used to judge the current occlusion state of the target; if the PSR value is below 4.8, updating is suspended until the PSR value again exceeds the threshold, which effectively prevents the correlation-filter template from being contaminated by interfering objects. For the grayscale features, on the other hand, the grayscale histogram is updated every frame at a fixed learning rate based on the extracted foreground region, alleviating the poor template matching caused by suspending the correlation-filter updates, so that once the target leaves the occlusion its location can be found quickly and accurately, guaranteeing the tracking accuracy after an occluded target reappears.
In general, compared with the prior art, the above technical solutions conceived by the present invention can achieve the following beneficial effects:
The present invention effectively extracts the target region using foreground detection based on optical flow and superpixel segmentation, extracts grayscale information from the target and background regions to compute a pixel-level probability confidence map that assists the correlation-filter detection result, and controls the update frequency of the target tracking template by updating the correlation-filter template and the grayscale histogram independently. It thereby achieves accurate tracking of moving targets under long-term background occlusion, improves the robustness and accuracy of the algorithm, effectively mitigates the tracking drift and even target loss that occur under occlusion, and reduces the limitations that the complexity and uncertainty of real scenes impose on target tracking.
The target detection and tracking method in an occlusion environment provided by the present invention is now described in detail with reference to the accompanying drawings and a specific example.
Fig. 1 is a schematic flowchart of an anti-occlusion target tracking method proposed by an example of the present invention; the method comprises:
S1: inputting the current frame In of the tracking video sequence, and obtaining the rectangular search region S = {(i, j) | x0 - ρ*w0 ≤ i ≤ x0 + ρ*w0, y0 - ρ*h0 ≤ j ≤ y0 + ρ*h0} from the target center position pos(x0, y0) and the target size (w0, h0) obtained in the previous frame In-1, where ρ is the window expansion coefficient, whose value depends on the target's speed, platform stability and the like, and is generally 1.6; extracting the foreground region O and the background region B within the region S;
S2: if the image In is the first frame I1 of the tracking video sequence, calculating the correlation-filter detection template coefficient α within S according to the KCF (High-Speed Tracking with Kernelized Correlation Filters) algorithm, counting the grayscale histograms HO and HB of the foreground region O and the background region B = S-O within the region S, and returning to step S1;
S3: based on the target location in image In-1, performing correlation-filter detection in the search region S of image In to obtain the correlation-filter response matrix f;
S4: calculating, according to the Bayes formula, the probability Pij that each pixel p(i, j) in the region S is a target pixel, the Pij of all pixels in S forming the probability matrix P, and fusing the probability matrix P with the response matrix f to determine the target position;
S5: updating the grayscale histograms HO and HB, calculating the peak-to-sidelobe ratio PSR of the maximum response value in the response matrix f, and setting a threshold γ to control the update frequency of the correlation-filter template coefficient α.
Specifically, the above step S1 comprises the following sub-steps:
S1.1: for the input image In, computing the optical flow between the two adjacent frames In and In-1 with the Lucas-Kanade method to obtain the optical flow magnitude of each pixel p(i, j);
S1.2: computing the optical flow magnitude of every pixel in the search region to obtain the matrix V1;
S1.3: binarizing the matrix V1 with the Otsu algorithm to obtain the intermediate matrix V2, in which foreground points take the value 1 and background points the value 0;
S1.4: applying a morphological opening to the intermediate matrix V2 to remove noise points and a closing to fill holes, obtaining the final binary matrix V3:
V3 = (V2 ∘ Se) · Se
wherein ∘ is the opening operation, · is the closing operation, and Se is the structuring-element matrix.
In this example, after the optical flow field is computed, the foreground region is further refined by the superpixel segmentation algorithm, for the following reason:
In a sequence shot by a moving camera, the direction and speed of the camera motion differ from frame to frame, so the background and the foreground in the image have different motion properties. Because the background motion is caused by the movement of the camera, the optical flow vectors of different background pixels are similar to one another; the flow produced by the camera motion constitutes the background flow. Ideally the optical flow fields of foreground and background differ markedly, and the foreground can be extracted from this difference. In real scenes, however, some background pixels around the target object may have optical flow vectors similar to the foreground, which raises the misclassification rate between foreground and background; as shown in Fig. 2, the extracted foreground region then contains a large number of background points, degrading subsequent computations, so the foreground points need further selection.
S1.5: applying the SLIC algorithm to perform superpixel segmentation on the region S of the input image In, obtaining the superpixel set U = {uk}, k = 1, 2, …, n, where n is the number of superpixels:
S1.5.1: assuming the region S contains Ns pixels in total and the desired number of superpixels is M, each superpixel has size Ns/M; M cluster centers C = {cl}, l = 1, 2, …, M, are initially placed at random, the distance between adjacent centers cl being D = √(Ns/M);
S1.5.2: moving each cluster center cl to the position with the smallest gradient within its 3×3 neighborhood;
S1.5.3: for each cluster center cl, traversing every pixel within its 2D×2D neighborhood and computing the grayscale distance and spatial distance between each pixel and cl;
S1.5.4: since each pixel may be traversed by several centers cl, assigning it to the center cl with the smallest distance;
S1.5.5: repeating steps S1.5.1-S1.5.4 until the cluster center of each pixel no longer changes, yielding the superpixel set U = {uk}, k = 1, 2, …, n, where n is the number of superpixels. A code sketch of this clustering loop is given below.
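The following compact grayscale sketch illustrates the clustering loop; it is an illustration rather than the patent's exact implementation, and the spatial-weighting knob alpha is a hypothetical parameter not specified in the text:

```python
import numpy as np

def slic_gray(img, m=200, alpha=0.5, n_iter=10):
    """Minimal grayscale SLIC sketch following steps S1.5.1-S1.5.5."""
    h, w = img.shape
    d = int(np.sqrt(h * w / m))                      # step D between adjacent centers
    ys, xs = np.meshgrid(np.arange(d // 2, h, d),
                         np.arange(d // 2, w, d), indexing="ij")
    centers = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)

    # S1.5.2: move each center to the lowest-gradient pixel of its 3x3 neighborhood
    gy, gx = np.gradient(img)
    grad = gy ** 2 + gx ** 2
    for c in centers:
        y, x = int(c[0]), int(c[1])
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        win = grad[y0:y1, x0:x1]
        dy, dx = np.unravel_index(np.argmin(win), win.shape)
        c[0], c[1] = y0 + dy, x0 + dx

    labels = -np.ones((h, w), dtype=int)
    for _ in range(n_iter):
        best = np.full((h, w), np.inf)
        # S1.5.3-S1.5.4: assign each pixel in a 2D x 2D window to the nearest center
        for k, (cy, cx) in enumerate(centers):
            y0, y1 = max(int(cy) - d, 0), min(int(cy) + d, h)
            x0, x1 = max(int(cx) - d, 0), min(int(cx) + d, w)
            yy, xx = np.meshgrid(np.arange(y0, y1), np.arange(x0, x1), indexing="ij")
            dist = (img[y0:y1, x0:x1] - img[int(cy), int(cx)]) ** 2 \
                 + alpha * (((yy - cy) ** 2 + (xx - cx) ** 2) / d ** 2)
            closer = dist < best[y0:y1, x0:x1]
            best[y0:y1, x0:x1][closer] = dist[closer]
            labels[y0:y1, x0:x1][closer] = k
        # update each center to the mean position of its pixels
        for k in range(len(centers)):
            pts = np.argwhere(labels == k)
            if len(pts):
                centers[k] = pts.mean(axis=0)
    return labels
```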
S1.6: counting, from the binary matrix V3, the total number of pixels Nsum and the number of foreground pixels Nk contained in each superpixel uk; if the ratio Nk/Nsum is greater than δτ (δτ = 0.4), uk is a foreground superpixel, and the set of all foreground superpixels constitutes the foreground region O.
As shown in Fig. 3, this method extracts the foreground region rather accurately.
Further, step S2 specifically comprises the following sub-steps:
S2.1: calculating Felzenszwalb's histogram-of-oriented-gradients (FHOG) features within the search region S; the feature extraction yields an m×n×31 matrix, i.e. 31 orientation channels. The matrix of each channel is cyclically shifted in the Fourier domain to form the training samples X, and the distribution of the sample labels follows a Gaussian function centered on the target, y(i, j) = exp(-((i-i0)² + (j-j0)²)/(2σ²)) with σ = σf·√(h·w)/c_sz,
wherein sz is the size [h, w] of the target region, σf is taken as 0.0625, and c_sz is the size of the cells used in the FHOG feature extraction, whose value is 4;
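Under the assumptions stated above (σ = σf·√(h·w)/c_sz and a Gaussian centered on the target), the labels could be generated as follows:

```python
import numpy as np

def gaussian_labels(h, w, sigma_f=0.0625, cell=4):
    """Gaussian regression labels for correlation-filter training (a sketch)."""
    sigma = sigma_f * np.sqrt(h * w) / cell
    ys = np.arange(h) - h // 2           # label grid centered on the target
    xs = np.arange(w) - w // 2
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    y = np.exp(-(yy ** 2 + xx ** 2) / (2 * sigma ** 2))
    # shift the peak to the (0, 0) corner, as correlation-filter code usually expects
    return np.roll(np.roll(y, -(h // 2), axis=0), -(w // 2), axis=1)
```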
S2.2: solving the ridge regression function in kernel space from the training samples X, i.e. minimizing over w the objective ||Φ(X)w - y||² + λ||w||²;
letting w = Σi αi Φ(xi), the problem is rewritten in terms of the dual coefficients α,
wherein Φ(X) is the mapping of the samples into the kernel space, in which the mapped samples are linearly separable; y is the sample label; λ is the regularization coefficient, given the value 1e-3.
When the kernel method is used to solve the regression function, the specific form of the nonlinear mapping Φ(X) need not be known; only the kernel-space matrix k = Φ(X)Φ(X)T is characterized. In this example a Gaussian kernel function is used, and the representation of the correlation-filter model coefficient α in the Fourier domain is computed as α̂ = ŷ/(k̂XX + λ),
wherein the symbol ^ denotes the Fourier-domain form of an expression.
Further, step S3 comprises:
computing the correlation-filter response over the search region:
f(z) = F^-1(k̂XZ ⊙ α̂)
wherein the symbol ^ denotes the Fourier-domain form of an expression, F^-1 is the inverse Fourier transform, X is the training sample obtained from the previous frame In-1, Z is the sample to be detected in image In, k̂XZ is the kernel matrix between the test and training samples, Φ is the nonlinear mapping function, which maps samples into a new space where they are linearly separable and whose specific form generally need not be known, α is the correlation-filter coefficient computed in the previous frame, and ⊙ is the Hadamard product. The sample z that maximizes the response value f(z) is selected as the detected target region.
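Steps S2.2 and S3 together amount to the standard kernelized-correlation-filter train/detect cycle. The sketch below assumes single-channel features and a hypothetical Gaussian-kernel bandwidth sigma, which the text does not specify:

```python
import numpy as np

def gaussian_correlation(x, z, sigma=0.5):
    """Kernel correlation k^{xz} for single-channel features via the FFT (a sketch)."""
    c = np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))).real
    d = (x ** 2).sum() + (z ** 2).sum() - 2.0 * c
    return np.exp(-np.maximum(d, 0) / (sigma ** 2 * x.size))

def train_alpha(x, y, lam=1e-3):
    """alpha_hat = y_hat / (k_hat^{XX} + lambda) in the Fourier domain."""
    kxx = gaussian_correlation(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(kxx) + lam)

def detect(alpha_hat, x, z):
    """Response map f(z) = F^-1(k_hat^{XZ} ⊙ alpha_hat); its argmax gives
    the displacement of the target (the 'first position')."""
    kxz = gaussian_correlation(x, z)
    return np.fft.ifft2(np.fft.fft2(kxz) * alpha_hat).real
```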
Further, step S4 comprises:
S4.1: calculating, according to the Bayes formula, the probability that each pixel in the region S is a target pixel:
Pij = HO(bij) / (HO(bij) + HB(bij))
wherein p(i, j) ∈ O denotes a target pixel and p(i, j) ∈ B a background pixel; bij = (p(i, j) - gmin)/(gmax - gmin)*bin, where bin is the number of grayscale bins of the histogram, generally 32; gmin is the minimum and gmax the maximum grayscale within the region S.
In this example, the reason for computing the probability that the pixel p(i, j) belongs to the target region with the above formula is derived as follows:
By the Bayes formula, the probability that each pixel is a target pixel is P(p(i, j) ∈ O | bij) = P(bij | p ∈ O)·P(p ∈ O) / [P(bij | p ∈ O)·P(p ∈ O) + P(bij | p ∈ B)·P(p ∈ B)],
where P(bij | p(i, j) ∈ O) ≈ HO(bij)/|O|, P(bij | p(i, j) ∈ B) ≈ HB(bij)/|B|, and P(p(i, j) ∈ O) ≈ |O|/(|O| + |B|); the expression then simplifies to Pij ≈ HO(bij)/(HO(bij) + HB(bij)).
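A sketch of this pixel-probability computation, assuming the grayscale search region and the binary foreground mask O from step S1 are available (the small epsilon guards are additions for numerical safety):

```python
import numpy as np

def pixel_probability(region, fg_mask, nbins=32):
    """Pixel-level target probability P from fore/background histograms (sketch).

    region  -- grayscale search region S
    fg_mask -- binary foreground region O from the flow/superpixel step
    """
    gmin, gmax = region.min(), region.max()
    bins = np.clip(((region - gmin) / (gmax - gmin + 1e-12) * nbins).astype(int),
                   0, nbins - 1)                     # b_ij for every pixel
    ho = np.bincount(bins[fg_mask == 1], minlength=nbins).astype(float)
    hb = np.bincount(bins[fg_mask == 0], minlength=nbins).astype(float)
    # P_ij = HO(b_ij) / (HO(b_ij) + HB(b_ij)), guarded against empty bins
    return ho[bins] / (ho[bins] + hb[bins] + 1e-12)
```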
S4.2: forming the probability matrix P from the probabilities Pij of step S4.1 that each pixel in the region S is a target pixel, and fusing the probability matrix P with the correlation-filter response matrix f of step S3 to obtain the final response Re:
Re = (1-θ)*f + θ*P
wherein θ is the fusion adjustment coefficient, generally 0.35; the target center position is the coordinate of the maximum of Re, (x, y) = argmax(i,j) Re(i, j).
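The fusion and localization step then reduces to a weighted sum and an argmax, assuming f and P are of equal size and comparable scale:

```python
import numpy as np

def fuse_and_locate(f, p, theta=0.35):
    """Fuse the correlation-filter response f with the probability map P
    and return the target center (a sketch)."""
    re = (1.0 - theta) * f + theta * p      # Re = (1 - theta)*f + theta*P
    y, x = np.unravel_index(np.argmax(re), re.shape)
    return (x, y), re
```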
When choosing the fusion adjustment coefficient, we ran many trials and compared the tracking success rate of the algorithm for different values of θ so as to obtain the coefficient that gives the best performance; Fig. 6 shows the tracking success rate of this algorithm as a function of the fusion adjustment coefficient. First, the overlap score (OS) is defined: let the bounding box obtained by the tracking method be rt and the manually annotated ground-truth box be ra; then OS = |rt ∩ ra| / |rt ∪ ra|, where |·| denotes the number of pixels in a region. Given a threshold of 0.5, a frame is counted as successful when its OS exceeds 0.5, and the tracking success rate is the percentage of successful frames among all frames of the video. In Fig. 6 the abscissa is the value of θ and the ordinate the tracking success rate; the success rate is highest when θ is about 0.35.
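For reference, the overlap score used in this evaluation can be computed for axis-aligned boxes as follows (a sketch; the (x, y, w, h) box convention is an assumption):

```python
def overlap_score(rt, ra):
    """OS = |rt ∩ ra| / |rt ∪ ra| for axis-aligned boxes (x, y, w, h)."""
    x1, y1 = max(rt[0], ra[0]), max(rt[1], ra[1])
    x2 = min(rt[0] + rt[2], ra[0] + ra[2])
    y2 = min(rt[1] + rt[3], ra[1] + ra[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = rt[2] * rt[3] + ra[2] * ra[3] - inter
    return inter / union if union > 0 else 0.0

# success rate over a sequence: fraction of frames with OS > 0.5
# success_rate = sum(overlap_score(rt, ra) > 0.5 for rt, ra in zip(track, gt)) / len(gt)
```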
In the embodiment of the present invention, after the target detection of a frame is completed, the target tracking template is updated. Further, step S5 comprises:
S5.1: calculating, from the correlation-filter response matrix f of step S3, the peak-to-sidelobe ratio of the maximum response value fmax in the region S:
PSR = (fmax - μR)/σR
wherein fmax is the maximum response value in the correlation-filter response matrix; the 5×5 neighborhood of the maximum-response point is taken as the response window T, and μR and σR are the mean and standard deviation of the response values over the region R = S-T outside the response window;
S5.2: setting the threshold γ; if the PSR value is greater than γ, calculating the correlation-filter detection template coefficient α_new within the region S of the current frame In according to the KCF algorithm, and updating the template coefficient α as follows:
α = (1-inf)*α + inf*α_new
wherein inf is the learning rate for updating the detection-template coefficients, generally 0.02;
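A sketch of the PSR computation and the PSR-gated template update of steps S5.1-S5.2 (the epsilon guard is an addition for numerical safety):

```python
import numpy as np

def psr(response, win=5):
    """Peak-to-sidelobe ratio PSR = (f_max - mu_R) / sigma_R (a sketch)."""
    fy, fx = np.unravel_index(np.argmax(response), response.shape)
    fmax = response[fy, fx]
    mask = np.ones_like(response, dtype=bool)
    r = win // 2
    mask[max(fy - r, 0):fy + r + 1, max(fx - r, 0):fx + r + 1] = False  # window T
    side = response[mask]                                               # region R = S - T
    return (fmax - side.mean()) / (side.std() + 1e-12)

def update_alpha(alpha_hat, alpha_new_hat, response, gamma=4.8, inf=0.02):
    """PSR-gated update: alpha is refreshed only when PSR > gamma."""
    if psr(response) > gamma:
        return (1.0 - inf) * alpha_hat + inf * alpha_new_hat
    return alpha_hat
```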
S5.3: from the foreground region O extracted in step S1, counting the grayscale histograms HO_new and HB_new of the foreground and background regions;
S5.4: updating the grayscale histograms HO and HB of the foreground and background regions respectively as follows:
HO = (1-β)*HO + β*HO_new
HB = (1-β)*HB + β*HB_new
wherein β is the histogram update learning rate, generally 0.04.
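The per-frame histogram update of step S5.4 is a plain exponential moving average:

```python
def update_histograms(ho, hb, ho_new, hb_new, beta=0.04):
    """Running update of the fore/background grayscale histograms (a sketch)."""
    ho = (1.0 - beta) * ho + beta * ho_new   # HO = (1-beta)*HO + beta*HO_new
    hb = (1.0 - beta) * hb + beta * hb_new   # HB = (1-beta)*HB + beta*HB_new
    return ho, hb
```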
In this example, the templates are updated in the manner of step S5 for the following reason:
When the target is partially occluded for a long time, the update frequency of the target template is hard to set. Because the target model depends on re-detection of the target, updating too frequently lets the template be contaminated by the background, causing target loss; updating too slowly leaves the target template stale, so that when the target gradually reappears the template easily fails to match it and tracking fails. In this example, therefore, the correlation-filter template and the grayscale histogram are updated independently. When updating the correlation-filter template, the PSR value is used to judge the current occlusion state; if the PSR value is below 4.8, updating is suspended until the PSR value again exceeds the threshold, which effectively prevents the template from being contaminated. The grayscale histogram, by contrast, is updated at a fixed learning rate, alleviating the poor template matching caused by suspending the correlation-filter updates.
Figs. 4 and 5 show comparative experiments between the proposed algorithm and the KCF algorithm under two occlusion scenarios, where the black box is the KCF tracking result and the white box the result of the proposed algorithm. From the two images in Fig. 4(c)(d) it can be seen that when the target vehicle is partially occluded by an interferer for a long time, the template of the KCF algorithm is contaminated and normal tracking fails, while the proposed algorithm keeps tracking the target. From Fig. 5(c)(d) it can be seen that the KCF algorithm cannot recover the target after the moving vehicle is severely occluded, whereas the proposed algorithm can accurately locate the target when it reappears.
To judge the accuracy and reliability of the proposed algorithm more intuitively and to compare it against other classical algorithms in occlusion scenes, we use the one-pass evaluation (OPE) method to compare tracking performance. In Fig. 7 the solid black line marked with triangles is the performance curve of the proposed algorithm, the solid black line that of the KCF algorithm, the dashed black line that of the ASLA algorithm, and the dashed black line marked with triangles that of the DFT algorithm. Let τ be the distance between the center of the bounding box obtained by the tracking method in each frame and the center of the ground-truth annotation; given a threshold ετ, the frame is counted as successfully tracked if τ is smaller than the threshold, and the percentage of successfully tracked frames in the video sequence reflects the tracking precision of the algorithm to some extent. In Fig. 7(a) the abscissa is the position error threshold ετ and the ordinate the tracking precision, i.e. the percentage of successfully tracked frames in the video sequence. As Fig. 7(a) shows, the proposed algorithm achieves higher precision in occlusion scenes than the traditional classical algorithms.
Fig. 7(b) shows the success-rate evaluation curve of the tracking method. We compute the overlap score (OS) of each frame; when the OS of a frame is greater than a given threshold OT, the frame is counted as successful, and the percentage of successful frames among all frames of the video is taken as the tracking success rate. In Fig. 7(b) the abscissa is the overlap threshold OT between the target box and the annotated box, ranging from 0 to 1, and the ordinate is the tracking success rate. As the figure shows, the tracking performance of the proposed algorithm in occlusion scenes is superior to the other classical tracking methods.
Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910182645.4ACN110009665B (en) | 2019-03-12 | 2019-03-12 | A Target Detection and Tracking Method in Occlusion Environment |
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910182645.4ACN110009665B (en) | 2019-03-12 | 2019-03-12 | A Target Detection and Tracking Method in Occlusion Environment |
| Publication Number | Publication Date |
|---|---|
| CN110009665Atrue CN110009665A (en) | 2019-07-12 |
| CN110009665B CN110009665B (en) | 2020-12-29 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910182645.4AExpired - Fee RelatedCN110009665B (en) | 2019-03-12 | 2019-03-12 | A Target Detection and Tracking Method in Occlusion Environment |
| Country | Link |
|---|---|
| CN (1) | CN110009665B (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110533699A (en)* | 2019-07-30 | 2019-12-03 | 平安科技(深圳)有限公司 | The dynamic multiframe speed-measuring method of pixel variation based on optical flow method |
| CN110599519A (en)* | 2019-08-27 | 2019-12-20 | 上海交通大学 | Anti-occlusion related filtering tracking method based on domain search strategy |
| CN110619658A (en)* | 2019-09-16 | 2019-12-27 | 北京地平线机器人技术研发有限公司 | Object tracking method, object tracking device and electronic equipment |
| CN111062967A (en)* | 2019-11-25 | 2020-04-24 | 山大地纬软件股份有限公司 | Electric power business hall passenger flow statistical method and system based on target dynamic tracking |
| CN111145216A (en)* | 2019-12-26 | 2020-05-12 | 电子科技大学 | A Tracking Method of Video Image Target |
| CN111402292A (en)* | 2020-03-10 | 2020-07-10 | 南昌航空大学 | Image sequence optical flow calculation method based on characteristic deformation error occlusion detection |
| CN111402294A (en)* | 2020-03-10 | 2020-07-10 | 腾讯科技(深圳)有限公司 | Target tracking method, target tracking device, computer-readable storage medium and computer equipment |
| CN111539395A (en)* | 2020-07-08 | 2020-08-14 | 浙江浙能天然气运行有限公司 | A real-time detection method of excavators based on optical flow method and support vector machine |
| CN111563517A (en)* | 2020-04-20 | 2020-08-21 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
| CN111986262A (en)* | 2020-09-07 | 2020-11-24 | 北京凌云光技术集团有限责任公司 | Image area positioning method and device |
| CN112446241A (en)* | 2019-08-29 | 2021-03-05 | 阿里巴巴集团控股有限公司 | Method and device for obtaining characteristic information of target object and electronic equipment |
| CN112926693A (en)* | 2021-04-12 | 2021-06-08 | 辽宁工程技术大学 | Kernel correlation filtering algorithm for fast motion and motion blur |
| CN113095176A (en)* | 2021-03-30 | 2021-07-09 | 中国建设银行股份有限公司 | Method and device for background reduction of video data |
| CN113592908A (en)* | 2021-07-26 | 2021-11-02 | 华中科技大学 | Template matching target tracking and system based on Otsu method and SAD-MCD fusion |
| CN114511803A (en)* | 2022-01-05 | 2022-05-17 | 绍兴市北大信息技术科创中心 | Target occlusion detection method for visual tracking task |
| CN116701663A (en)* | 2023-08-07 | 2023-09-05 | 鹏城实验室 | A method of constructing knowledge map based on digital retina system |
| CN117132567A (en)* | 2023-08-28 | 2023-11-28 | 中国移动紫金(江苏)创新研究院有限公司 | DPI (deep inspection) optical splitter detection method, equipment, storage medium and device |
| CN117218161A (en)* | 2023-11-09 | 2023-12-12 | 聊城市敏锐信息科技有限公司 | Fish track tracking method and system in fish tank |
| CN118710682A (en)* | 2024-06-25 | 2024-09-27 | 北京讯联昊天科技有限公司 | A target tracking method and system for image target simulator |
| CN119027368A (en)* | 2024-07-12 | 2024-11-26 | 北京市遥感信息研究所 | Intelligent stripe detection method and device for high-resolution and wide-width optical images |
| CN120014017A (en)* | 2025-04-15 | 2025-05-16 | 宁波大龙农业科技有限公司 | A method and system for extracting dynamic traits of seedling growth |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101621615A (en)* | 2009-07-24 | 2010-01-06 | 南京邮电大学 | Self-adaptive background modeling and moving target detecting method |
| US20120189164A1 (en)* | 2007-04-06 | 2012-07-26 | International Business Machines Corporation | Rule-based combination of a hierarchy of classifiers for occlusion detection |
| CN103077539A (en)* | 2013-01-23 | 2013-05-01 | 上海交通大学 | Moving object tracking method under complicated background and sheltering condition |
| CN103310465A (en)* | 2013-06-27 | 2013-09-18 | 东南大学 | Vehicle occlusion treating method based on Markov random field |
| CN104156976A (en)* | 2013-05-13 | 2014-11-19 | 哈尔滨点石仿真科技有限公司 | Multiple characteristic point tracking method for detecting shielded object |
| CN106384359A (en)* | 2016-09-23 | 2017-02-08 | 青岛海信电器股份有限公司 | Moving target tracking method and television set |
| CN106408591A (en)* | 2016-09-09 | 2017-02-15 | 南京航空航天大学 | Anti-blocking target tracking method |
| CN107609571A (en)* | 2017-08-02 | 2018-01-19 | 南京理工大学 | A kind of adaptive target tracking method based on LARK features |
| US20180357212A1 (en)* | 2017-06-13 | 2018-12-13 | Microsoft Technology Licensing, Llc | Detecting occlusion of digital ink |
| CN109345472A (en)* | 2018-09-11 | 2019-02-15 | 重庆大学 | An infrared moving small target detection method for complex scenes |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120189164A1 (en)* | 2007-04-06 | 2012-07-26 | International Business Machines Corporation | Rule-based combination of a hierarchy of classifiers for occlusion detection |
| CN101621615A (en)* | 2009-07-24 | 2010-01-06 | 南京邮电大学 | Self-adaptive background modeling and moving target detecting method |
| CN103077539A (en)* | 2013-01-23 | 2013-05-01 | 上海交通大学 | Moving object tracking method under complicated background and sheltering condition |
| CN104156976A (en)* | 2013-05-13 | 2014-11-19 | 哈尔滨点石仿真科技有限公司 | Multiple characteristic point tracking method for detecting shielded object |
| CN103310465A (en)* | 2013-06-27 | 2013-09-18 | 东南大学 | Vehicle occlusion treating method based on Markov random field |
| CN106408591A (en)* | 2016-09-09 | 2017-02-15 | 南京航空航天大学 | Anti-blocking target tracking method |
| CN106384359A (en)* | 2016-09-23 | 2017-02-08 | 青岛海信电器股份有限公司 | Moving target tracking method and television set |
| US20180357212A1 (en)* | 2017-06-13 | 2018-12-13 | Microsoft Technology Licensing, Llc | Detecting occlusion of digital ink |
| CN107609571A (en)* | 2017-08-02 | 2018-01-19 | 南京理工大学 | A kind of adaptive target tracking method based on LARK features |
| CN109345472A (en)* | 2018-09-11 | 2019-02-15 | 重庆大学 | An infrared moving small target detection method for complex scenes |
Non-Patent Citations
| Title |
|---|
| G. DI CATERINA et al.: "Robust complete occlusion handling in adaptive template matching target tracking", Electronics Letters* |
| WANG Z et al.: "CamShift Guided Particle Filter for Visual Tracking", Pattern Recognition Letters* |
| ZHANG Yanchao et al.: "遮挡目标的分片跟踪处理" (Patch-based tracking of occluded targets), Journal of Image and Graphics* |
| WANG Zhanqing et al.: "跟踪遮挡目标的一种鲁棒算法" (A robust algorithm for tracking occluded targets), Computer Engineering and Applications* |
| GU Peiting et al.: "抗遮挡的相关滤波目标跟踪算法" (An anti-occlusion correlation filter target tracking algorithm), Journal of Huaqiao University (Natural Science)* |
Cited By
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110533699B (en)* | 2019-07-30 | 2024-05-24 | 平安科技(深圳)有限公司 | Dynamic multi-frame velocity measurement method for pixel change based on optical flow method |
| CN110599519B (en)* | 2019-08-27 | 2022-11-08 | 上海交通大学 | Anti-occlusion correlation filter tracking method based on domain search strategy |
| CN112446241A (en)* | 2019-08-29 | 2021-03-05 | 阿里巴巴集团控股有限公司 | Method and device for obtaining characteristic information of target object and electronic equipment |
| CN110619658B (en)* | 2019-09-16 | 2022-04-19 | 北京地平线机器人技术研发有限公司 | Object tracking method, object tracking device and electronic equipment |
| CN111062967A (en)* | 2019-11-25 | 2020-04-24 | 山大地纬软件股份有限公司 | Electric power business hall passenger flow statistical method and system based on target dynamic tracking |
| CN111145216B (en)* | 2019-12-26 | 2023-08-18 | 电子科技大学 | A video image target tracking method |
| CN111402294A (en)* | 2020-03-10 | 2020-07-10 | 腾讯科技(深圳)有限公司 | Target tracking method, target tracking device, computer-readable storage medium and computer equipment |
| CN111402292A (en)* | 2020-03-10 | 2020-07-10 | 南昌航空大学 | Image sequence optical flow calculation method based on characteristic deformation error occlusion detection |
| CN111563517B (en)* | 2020-04-20 | 2023-07-04 | 腾讯科技(深圳)有限公司 | Image processing method, device, electronic equipment and storage medium |
| CN111539395A (en)* | 2020-07-08 | 2020-08-14 | 浙江浙能天然气运行有限公司 | A real-time excavator detection method based on optical flow and support vector machine |
| CN111986262A (en)* | 2020-09-07 | 2020-11-24 | 北京凌云光技术集团有限责任公司 | Image area positioning method and device |
| CN113095176A (en)* | 2021-03-30 | 2021-07-09 | 中国建设银行股份有限公司 | Method and device for background reduction of video data |
| CN112926693A (en)* | 2021-04-12 | 2021-06-08 | 辽宁工程技术大学 | Kernel correlation filtering algorithm for fast motion and motion blur |
| CN113592908A (en)* | 2021-07-26 | 2021-11-02 | 华中科技大学 | Template matching target tracking method and system based on the Otsu method and SAD-MCD fusion |
| CN114511803A (en)* | 2022-01-05 | 2022-05-17 | 绍兴市北大信息技术科创中心 | Target occlusion detection method for visual tracking task |
| CN116701663B (en)* | 2023-08-07 | 2024-01-09 | 鹏城实验室 | Method for constructing knowledge graph based on digital retina system |
| CN117132567B (en)* | 2023-08-28 | 2025-09-19 | 中国移动紫金(江苏)创新研究院有限公司 | DPI (deep inspection) optical splitter detection method, equipment, storage medium and device |
| CN117218161B (en)* | 2023-11-09 | 2024-01-16 | 聊城市敏锐信息科技有限公司 | Fish track tracking method and system in fish tank |
| CN118710682A (en)* | 2024-06-25 | 2024-09-27 | 北京讯联昊天科技有限公司 | A target tracking method and system for image target simulator |
| CN119027368A (en)* | 2024-07-12 | 2024-11-26 | 北京市遥感信息研究所 | Intelligent stripe detection method and device for high-resolution and wide-width optical images |
| CN120014017A (en)* | 2025-04-15 | 2025-05-16 | 宁波大龙农业科技有限公司 | A method and system for extracting dynamic traits of seedling growth |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110009665B (en) | 2020-12-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110009665B (en) | 2020-12-29 | A Target Detection and Tracking Method in Occlusion Environment |
| US8948450B2 (en) | Method and system for automatic object detection and subsequent object tracking in accordance with the object shape | |
| Porikli et al. | Human body tracking by adaptive background models and mean-shift analysis | |
| CN104200495B (en) | A multi-object tracking method in video monitoring |
| CN108846854B (en) | Vehicle tracking method based on motion prediction and multi-feature fusion | |
| WO2020215492A1 (en) | Multi-bernoulli multi-target video detection and tracking method employing yolov3 | |
| WO2019228211A1 (en) | Lane-line-based intelligent driving control method and apparatus, and electronic device | |
| CN104615986B (en) | Pedestrian detection method for video images with scene changes using multiple detectors |
| CN106127807A (en) | A real-time multi-class multi-object video tracking method |
| CN103971386A (en) | Method for foreground detection in dynamic background scenario | |
| CN102332166B (en) | Probabilistic model based automatic target tracking method for moving camera | |
| CN110751671B (en) | Target tracking method based on kernel correlation filtering and motion estimation | |
| CN110766723A (en) | Unmanned aerial vehicle target tracking method and system based on color histogram similarity | |
| CN113139896A (en) | Target detection system and method based on super-resolution reconstruction | |
| CN107066968A (en) | Vehicle-mounted pedestrian detection method based on a fusion strategy of target recognition and tracking |
| CN105335701A (en) | Pedestrian detection method based on HOG and D-S evidence theory multi-information fusion | |
| CN106920253A (en) | A multi-object tracking method based on occlusion layering |
| CN116664628A (en) | Target tracking method and device based on feature fusion and loss judgment mechanism | |
| Zhao et al. | APPOS: An adaptive partial occlusion segmentation method for multiple vehicles tracking | |
| Zhang et al. | A coarse-to-fine leaf detection approach based on leaf skeleton identification and joint segmentation | |
| CN113744313B (en) | Deep learning integrated tracking algorithm based on target movement track prediction | |
| US12293529B2 (en) | Method, system and non-transitory computer-readable media for prioritizing objects for feature extraction | |
| Yi et al. | Orientation and scale invariant mean shift using object mask-based kernel | |
| Sun et al. | Learning based particle filtering object tracking for visible-light systems | |
| CN116665097A (en) | Self-adaptive target tracking method combining context awareness |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2020-12-29 |