
Technical Field
The present invention relates to a correlation filter tracking method that combines adaptive spatial weights with distortion suppression, and belongs to the field of computer vision and intelligent information processing.
Background Art
Object tracking is a fundamental and key problem in computer vision research, with wide applications in intelligent video surveillance, motion analysis, human-computer interaction, behavior analysis, UAV tracking, and other fields. Although object tracking technology has made great progress over the past few decades, accurately and robustly tracking a moving target under deformation, occlusion, scale change, background clutter, and similar conditions remains a challenging task.

In recent years, correlation filter based tracking algorithms have become mainstream and have received extensive attention from scholars at home and abroad, owing to their excellent performance and good balance between speed and accuracy. The MOSSE tracking algorithm proposed in 2010 innovatively introduced the idea of correlation filtering into the field of object tracking and laid the foundation for correlation filter based trackers. Building on MOSSE, later work improved features, scale estimation, and other aspects, producing a series of classic trackers such as CSK, KCF, SAMF, Staple, and fDSST.

The boundary effect is a problem that deserves attention. Traditional correlation filter based tracking methods exploit the properties of circulant matrices to carry out the computation in the frequency domain; while this improves speed, it also generates partially unreal samples, leading to undesired boundary effects that reduce the discriminative ability of the filter and degrade tracking performance. To reduce the impact of boundary effects, scholars have proposed algorithms such as BACF, ARCF, and ASRCF. The background-aware BACF algorithm uses negative samples generated by real shifts and enlarges the search region, but a larger search region easily introduces background noise and tends to cause tracking drift when the background is cluttered. The distortion-suppressing ARCF algorithm can suppress aberrances in the response map, but its spatial regularization weights have no learning ability and cannot adapt to changes in target appearance. The ASRCF algorithm, based on adaptive spatial regularization, can efficiently learn a spatial weight to adapt to appearance changes, but when the target undergoes motion blur or large deformation, the filter easily overfits to inaccurate target samples.
Summary of the Invention
The present invention proposes a correlation filter tracking method that combines adaptive spatial weights with distortion suppression. Its purpose is to provide a target tracking method with higher accuracy that guarantees a sufficient search region while effectively alleviating the boundary effect and the model degradation problem.

The present invention achieves the above purpose through the following technical solution:
A correlation filter tracking method combining adaptive spatial weights and distortion suppression comprises the following steps:

(1) Extracting handcrafted features

The present invention extracts FHOG features, Color Names (CN) features, and grayscale features to enhance the ability of the model to represent the target. FHOG features are insensitive to illumination changes, CN features represent motion blur and low resolution well, and grayscale features are fast to compute; the complementarity of these features improves the model's ability to describe the target.
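As an illustration of step (1), the following is a minimal sketch of how the three complementary feature maps can be stacked into a single feature tensor. It assumes hypothetical extractor functions `fhog_fn` and `cn_fn` (the patent does not specify any implementation), each returning a per-pixel feature map of the same spatial size as the input patch.

```python
import numpy as np

def extract_features(patch, fhog_fn, cn_fn):
    """Concatenate complementary handcrafted features along the channel axis.

    patch   : H x W x 3 uint8 image patch centred on the target.
    fhog_fn : placeholder FHOG extractor, returns an H x W x C1 float array.
    cn_fn   : placeholder Color Names extractor, returns an H x W x C2 float array.
    """
    fhog = fhog_fn(patch)                       # e.g. 31-channel FHOG map
    cn = cn_fn(patch)                           # e.g. 10-channel Color Names map
    gray = patch.mean(axis=2, keepdims=True)    # single-channel grayscale feature
    gray = gray / 255.0 - 0.5                   # zero-centred, as is common in CF trackers
    return np.concatenate([fhog, cn, gray], axis=2)   # H x W x D tensor, D = C1 + C2 + 1
```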
(2) Combining the adaptive spatial weight term and the distortion suppression term in the objective function

A distortion suppression term is added to the objective function to constrain the rate of change of the current-frame response map and enhance the discriminative ability of the filter, thereby alleviating filter model degradation. An adaptive spatial weight term is also added so that the spatial regularization weight is updated as the target changes, allowing the filter to make full use of the diversity of target information. After combining the adaptive spatial weight term and the distortion suppression term, the objective function is:
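A plausible form of objective (1), assembled from the term descriptions that follow and from the BACF, ARCF, and ASRCF objectives on which the method builds (a hedged reconstruction, not the verbatim patent formula), is:

```latex
E(h_k, w) = \frac{1}{2}\Big\| y - \sum_{d=1}^{D} x_k^d \ast (C^{\mathrm{T}} h_k^d) \Big\|_2^2
          + \frac{\beta}{2}\Big\| M_{k-1}[\psi_{p,q}] - \sum_{d=1}^{D} x_k^d \ast (C^{\mathrm{T}} h_k^d) \Big\|_2^2
          + \frac{\lambda_1}{2}\sum_{d=1}^{D}\big\| w \odot h_k^d \big\|_2^2
          + \frac{\lambda_2}{2}\big\| w - w_r \big\|_2^2
```

where y is the desired Gaussian-shaped response and M_{k-1} = Σ_d x_{k-1}^d ∗ (C^T h_{k-1}^d) is the response map of the previous frame, consistent with the definitions given below.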
In Eq. (1), D denotes the total number of channels, k and k-1 denote the k-th and (k-1)-th frames respectively, d denotes the d-th channel, x_k^d denotes the feature of the d-th channel in frame k, C is the cropping matrix retained from the background-aware correlation filter (BACF) to ensure a sufficient search region, λ1 and λ2 are spatial regularization parameters, β is the distortion penalty parameter, p, q denote the positional difference between the two peaks of the two response maps in two-dimensional space, ψ_{p,q} denotes the shift operation that makes the two peaks of the two response maps coincide, w_r is the spatial regularization reference weight, and w is the spatial regularization weight. The second term is the distortion suppression term, and the third and fourth terms are the adaptive spatial weight terms.
(3) Optimizing and iteratively solving the correlation filter using the alternating direction method of multipliers (ADMM)
For convenience of computation, Eq. (1) is first converted into the following form:

X_k is the vector form of x_k, I_D is the D×D identity matrix, the symbol ⊗ denotes the Kronecker product, the superscript T denotes the conjugate transpose operation, and M_{k-1} denotes the response map of the previous frame. To reduce the computational load, Eq. (2) is transformed into the frequency domain:

In Eq. (3), the hatted symbols denote the discrete Fourier transforms of X_k and of M_{k-1}[ψ_{p,q}], respectively. To facilitate the subsequent optimization, a new auxiliary parameter is introduced into Eq. (3).
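The efficiency gained by moving to the frequency domain rests on two standard identities, stated here for a real-valued signal of length N as background (they are not reproduced from the patent): circular correlation becomes an element-wise product of spectra, and Parseval's theorem preserves squared norms up to a factor of N,

```latex
\widehat{x \star h} = \hat{x}^{*} \odot \hat{h}, \qquad \|a\|_2^2 = \frac{1}{N}\,\|\hat{a}\|_2^2 ,
```

so every correlation and every squared-error term in Eq. (2) can be evaluated channel-wise with FFTs instead of explicit circulant matrices.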
Since Eq. (3) is convex, it can be written in the following augmented Lagrangian form:
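As a sketch of the structure of Eq. (4), assuming the BACF-style equality constraint between the introduced auxiliary variable ĝ and the filter h that is standard in this family of trackers (the exact constraint is defined by the elided Eq. (3)), the augmented Lagrangian reads:

```latex
\mathcal{L}(\hat{g}, h, \hat{\zeta}) = \hat{E}(\hat{g})
  + \hat{\zeta}^{\mathrm{T}}\big(\hat{g} - \sqrt{N}\,(I_D \otimes F C^{\mathrm{T}})\,h\big)
  + \frac{\mu}{2}\,\big\| \hat{g} - \sqrt{N}\,(I_D \otimes F C^{\mathrm{T}})\,h \big\|_2^2
```

where Ê is the frequency-domain objective of Eq. (3), F is the DFT matrix, and ζ̂ is the Lagrangian vector mentioned below.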
In Eq. (4), μ is the penalty factor, and the Lagrangian vector in the Fourier domain is introduced as an auxiliary variable.
Applying the ADMM algorithm to the k-th frame means that Eq. (4) can be decomposed into two subproblems, namely solving for h*_{k+1} and for the auxiliary variable.

Subproblem 1: solving for h*_{k+1}.

It is straightforward to obtain:

where the conversion relations for g_k and for α are as follows:
Subproblem 2: solving for the auxiliary variable.

For convenience, Eq. (8) is decomposed into N subproblems, n = 1, 2, ..., N.

where conj(·) denotes the complex conjugate operation and the hatted variable denotes the corresponding discrete Fourier transform. The solution to each subproblem is as follows:

Since Eq. (10) contains a matrix inversion, which is computationally expensive, the Sherman-Morrison formula is applied to further optimize it, and the equivalent form of Eq. (10) is obtained as follows:

At this point, the subproblem for h*_{k+1} and the subproblem for the auxiliary variable have both been solved.
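The step from Eq. (10) to Eq. (11) exploits the rank-one structure of the per-pixel system via the Sherman-Morrison identity, (μI + x x^H)^{-1} b = b/μ − x (x^H b)/(μ(μ + x^H x)), which removes the explicit D×D inversion. Below is a minimal numerical check of that identity (a sketch only; the actual right-hand side of the per-pixel update is given by the elided equations):

```python
import numpy as np

def rank_one_solve(x, b, mu):
    """Solve (mu*I + x x^H) g = b with the Sherman-Morrison identity,
    avoiding the explicit D x D matrix inversion of Eq. (10)."""
    xhb = np.vdot(x, b)                                   # x^H b (scalar)
    return b / mu - x * (xhb / (mu * (mu + np.vdot(x, x).real)))

# Numerical check against a direct solve for a random complex instance.
rng = np.random.default_rng(0)
D, mu = 42, 1.3
x = rng.standard_normal(D) + 1j * rng.standard_normal(D)
b = rng.standard_normal(D) + 1j * rng.standard_normal(D)
A = mu * np.eye(D) + np.outer(x, x.conj())                # mu*I + x x^H
assert np.allclose(np.linalg.solve(A, b), rank_one_solve(x, b, mu))
```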
The update scheme for the Lagrange multiplier is:
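This update is presumably the standard ADMM dual-ascent step for the constraint assumed above (a sketch, not the patent's own formula):

```latex
\hat{\zeta}^{(i+1)} = \hat{\zeta}^{(i)} + \mu\big(\hat{g}^{(i+1)} - \sqrt{N}\,(I_D \otimes F C^{\mathrm{T}})\,h^{(i+1)}\big)
```

Trackers in this family typically also increase the penalty factor across iterations, e.g. μ^{(i+1)} = min(μ_max, ρ μ^{(i)}).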
(4) Iteratively solving the adaptive spatial weight parameter using the alternating direction method of multipliers (ADMM)

To reduce the computational load, the adaptive spatial weight parameter w is also solved with the ADMM algorithm. An auxiliary variable f is introduced to construct the constraint w = f, and the objective function can then be written as:

Equation (13) is then written in augmented Lagrangian form:
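With the constraint w = f, the penalty parameter δ, and the Lagrange multiplier s introduced below, Eq. (14) has the generic augmented Lagrangian structure (a sketch; how the two regularization terms of Eq. (13) are split between w and f is defined by the elided equations):

```latex
\mathcal{L}(w, f, s) = E_w(w) + E_f(f) + s^{\mathrm{T}}(w - f) + \frac{\delta}{2}\,\| w - f \|_2^2
```

where E_w and E_f denote the parts of the objective assigned to w and to the auxiliary variable f, respectively.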
where δ is the penalty parameter and s is the Lagrange multiplier. By introducing a new parameter, Eq. (14) can be written in the following equivalent form:

The solution of Eq. (15) can then be converted into the solution of two subproblems.

Subproblem 1: solving for w*.

The solution for w* is:

where

Subproblem 2: solving for f*.

The solution for f* is:
The update scheme for the Lagrange multiplier is:

m^{(i+1)} = m^{(i)} + w^{(i+1)} - f^{(i+1)}   (20)
(5) Model update

The target appearance model is updated according to the following equation:
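Equation (21) is presumably the linear-interpolation update that is standard for correlation filter trackers (a hedged reconstruction consistent with the symbol description that follows):

```latex
\hat{x}_{k}^{\,\mathrm{model}} = (1 - \eta)\,\hat{x}_{k-1}^{\,\mathrm{model}} + \eta\,\hat{x}_{k}
```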
In Eq. (21), k and k-1 denote frame k and frame k-1, respectively, and η denotes the learning rate of the appearance model.
Description of the Drawings

Fig. 1 is the overall framework diagram of the present invention.

Detailed Description of the Embodiments

The present invention is further described below with reference to the accompanying drawing:
As shown in Fig. 1, a correlation filter tracking method combining adaptive spatial weights and distortion suppression comprises the following steps; a sketch of the resulting per-frame loop is given after the list:

(1) Input a video frame and extract the FHOG, CN, and grayscale features of the current frame to describe the target.

(2) Update the adaptive spatial weight parameter as the target appearance model changes.

(3) Constrain the rate of change of the current-frame response map according to the response map of the previous frame, so as to obtain a higher-quality response map.

(4) Take the highest peak of the current-frame response map as the final tracking result.
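To make the per-frame flow of steps (1)-(4) concrete, a minimal skeleton of the tracking loop is sketched below. All helper functions (`crop`, `extract_features`, `compute_response`, `locate_peak`, `update_spatial_weight`, `solve_filter_admm`) are hypothetical placeholders for the operations described above; they are not defined in the patent.

```python
def track(frames, init_bbox, params):
    """Skeleton of the per-frame loop of steps (1)-(4); helpers are placeholders.

    frames    : iterable of images.
    init_bbox : (x, y, w, h) of the target in the first frame.
    params    : dict with lambda1, lambda2, beta, eta, etc.
    """
    bbox = init_bbox
    model, filt, spatial_w, prev_response = None, None, None, None
    results = []
    for k, frame in enumerate(frames):
        if k > 0:
            feat = extract_features(crop(frame, bbox))             # step (1)
            response = compute_response(filt, feat)                # correlation in the Fourier domain
            bbox = locate_peak(response, bbox)                     # step (4): peak -> new position
            prev_response = response                               # reused by the distortion term
        feat = extract_features(crop(frame, bbox))                 # features at the (new) position
        model = feat if model is None else \
            (1 - params["eta"]) * model + params["eta"] * feat     # appearance update, Eq. (21)
        spatial_w = update_spatial_weight(model, spatial_w, params)        # step (2): adaptive weight
        filt = solve_filter_admm(model, spatial_w, prev_response, params)  # step (3): ADMM solve
        results.append(bbox)
    return results
```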
To verify the rationality and effectiveness of the proposed correlation filter tracking method combining adaptive spatial weights and distortion suppression, experiments were conducted on the two standard benchmarks OTB-2013 and OTB-2015, with distance precision and success rate used as evaluation metrics. The implementation was programmed in MATLAB R2017a on Windows 10, on a computer with an Intel(R) Celeron(R) G1820 CPU at 2.70 GHz and 8 GB of memory.

Table 1 lists the experimental results of the present invention on the OTB-2013 dataset: the distance precision reaches 88.9% and the success rate reaches 67.8%, outperforming the comparison algorithms and achieving more accurate target tracking.

The present invention was also evaluated on the more challenging OTB-2015 benchmark. As shown in Table 2, the distance precision reaches 87.9% and the success rate reaches 67.5%; compared with the other methods, both the tracking precision and the success rate are improved to varying degrees, which fully confirms the rationality and effectiveness of the present invention.
Table 1. Experimental results on the OTB-2013 dataset

Table 2. Experimental results on the OTB-2015 dataset