CN115240128A - A Target Tracking Method Based on Scale Adaptation - Google Patents

A Target Tracking Method Based on Scale Adaptation

Info

Publication number
CN115240128A
CN115240128A (Application CN202210628171.3A)
Authority
CN
China
Prior art keywords
target
scale
classifier
monitoring
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210628171.3A
Other languages
Chinese (zh)
Inventor
柳稼航
沈鑫怡
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202210628171.3A
Publication of CN115240128A
Legal status: Pending

Abstract

(Translated from Chinese)

The invention discloses a target tracking method based on scale adaptation. A position filter F_pos is used to obtain the position of the target, and HOG features are then extracted to train a scale filter F_scale, yielding the target's scale. The position and scale information are transmitted to the PTZ control module, which converts the moving target's direction of motion and change in scale into pan/tilt offset angles and zoom data, so that the target is accurately displayed at the designated position of the monitoring image. The invention effectively addresses the low degree of automation of existing tracking systems.

Description

A Target Tracking Method Based on Scale Adaptation

Technical Field

The invention relates to the technical field of target tracking, and in particular to a target tracking method based on scale adaptation.

Background Art

In recent years, with the continuous development of science and technology, intelligent surveillance has received broad attention. Video surveillance systems use pan-tilt-zoom (PTZ) cameras, which allow flexible attitude adjustment and zooming. At present these cameras are mainly used for perimeter surveillance, where security personnel must monitor intruders from a distance. When monitoring with a PTZ camera, the operator must track the target manually via an RS-485 keyboard or a client application. The suspicious target may be a person, a vehicle, or another moving object, and such targets often move quickly: the operator must not only adjust the pan/tilt speed to match the target's motion as closely as possible, but also zoom manually to keep the target as large as possible in the frame. Such manual control typically lags behind the target's motion, degrading real-time tracking, and long monitoring sessions are prone to human error, so many important details are missed. It is therefore important to design a scale-adaptive target tracking method.

At present, the KCF algorithm is widely used in practice because of its excellent tracking performance. However, because its tracking window has a fixed size, it fails to achieve the expected results in subsequent frames when the target undergoes a large scale change. Li et al. proposed the IBCCF algorithm, which combines a two-dimensional filter with one-dimensional boundary filters to handle scale effectively, but its computational redundancy reduces tracking speed. The SAMF algorithm exploits the complementarity of color and gradient features and handles scale change by introducing a scale pool, achieving scale-adaptive tracking over a small range. Since it must evaluate seven scaled versions of the current target in every frame to find the best scale, computing seven responses per frame alone costs about seven times as much as KCF, so SAMF is computationally expensive.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a target tracking method based on scale adaptation that effectively addresses the low degree of automation of existing tracking systems.

To solve the above technical problem, the present invention provides a target tracking method based on scale adaptation, comprising the following steps:

Step 1. Input the first frame, and initialize the target position pos_1 and scale scale_1;

Step 2. Extract HOG and Lab features from the search region, and generate additional samples as a circulant matrix, with the base sample as the positive sample and all others as negative samples;

Step 3. Train the target classifier by ridge regression, transformed into the Fourier domain to train and update the correlation filter;

Step 4. Use the trained filter to predict the target location, taking the response peak as the tracked position pos_2;

Step 5. Construct an independent scale-discriminating classifier, sampling the target scale to obtain a set of scale spaces;

Step 6. Compute the response of the scale filter, take the maximum response as the best match, and determine and update the current scale scale_2;

Step 7. Obtain the center coordinates of the target bounding box from the tracking algorithm, and decide whether to rotate the pan/tilt head by comparing this point with the image center, so that the monitored target always remains in the central region of the monitoring image;

Step 8. Output the real-time target scale from the improved KCF algorithm, and use the difference between the target's actual height and its ideal height to determine the zoom ratio for camera zoom control.

Preferably, in step 3, the target classifier f(x) = wᵀx is trained by ridge regression, transformed into the Fourier domain to train and update the correlation filter. The ridge regression training objective is

$$\min_{w}\sum_{i}\bigl(f(x_i)-y_i\bigr)^2+\lambda\lVert w\rVert^2$$

where (x_i, y_i) are training sample / regression target pairs, λ is a regularization parameter that prevents overfitting, and w is the weight vector.

Preferably, in step 5, an independent scale-discriminating classifier is constructed, and a set of scale spaces is obtained by sampling the target scale. Specifically: first estimate the target location with the position filter; then, taking the current position as the center, build a target scale pyramid to extract training samples. Let the training samples be x_1, x_2, …, x_n with corresponding outputs y_i; the optimal scale filter h is obtained by ridge regression:

$$h=\arg\min_{h}\sum_{i=1}^{n}\lVert h\ast x_i-y_i\rVert^{2}+\lambda\lVert h\rVert^{2}$$

where ∗ denotes convolution and λ is a regularization parameter that prevents overfitting.

The candidate scale parameters of the target samples used for scale evaluation are selected as

$$a^{n}W\times a^{n}R,\qquad n\in\Bigl\{-\Bigl\lfloor\tfrac{S-1}{2}\Bigr\rfloor,\dots,\Bigl\lfloor\tfrac{S-1}{2}\Bigr\rfloor\Bigr\}$$

where a is the scale factor, S is the number of candidate scales, and W × R is the size of the tracked object.

The maximum response of the final scale filter gives the optimal scale of the candidate target. The scale classifier's coefficient matrix A and appearance model M are updated with learning rate η:

$$M_t=(1-\eta)\,M_{t-1}+\eta\,M_t'$$

$$A_t=(1-\eta)\,A_{t-1}+\eta\,A_t'$$

where M_{t−1} and A_{t−1} are the appearance model and coefficient matrix of the scale classifier from the previous frame, and M_t′ and A_t′ are the appearance model and coefficient matrix computed in the current frame.
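The learning-rate update above is a plain linear interpolation between the previous model and the current frame's model. A minimal sketch (symbol names M and A are ours, standing in for the unrendered originals):

```python
import numpy as np

def update_scale_classifier(M_prev, A_prev, M_cur, A_cur, eta=0.025):
    """Running update of the scale classifier's appearance model M and
    coefficient matrix A with learning rate eta."""
    M = (1.0 - eta) * M_prev + eta * M_cur
    A = (1.0 - eta) * A_prev + eta * A_cur
    return M, A

# with eta = 0.5, the result is the midpoint of the two models
M, A = update_scale_classifier(np.zeros(4), np.zeros(4),
                               np.ones(4), 2.0 * np.ones(4), eta=0.5)
```

A small η makes the model change slowly, trading adaptation speed for robustness to occlusion and drift.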

Preferably, in step 7, the tracking algorithm gives the center coordinates (u, v) of the target bounding box, and the center of the monitoring image is known to be (M_w/2, M_h/2). Whether to rotate the pan/tilt head is decided by comparing the two points, so that the monitored target always remains in the central region of the monitoring image.

The horizontal angle α and vertical angle β through which the target has moved are

$$\alpha=\arctan\Bigl(\frac{2u-M_w}{M_w}\tan\theta_1\Bigr),\qquad \beta=\arctan\Bigl(\frac{2v-M_h}{M_h}\tan\theta_2\Bigr)$$

where u and v are the horizontal and vertical coordinates of the moving target, M_w × M_h is the resolution of the monitoring video, the angle θ₁ is half the camera's maximum horizontal viewing angle, and θ₂ is half the camera's maximum vertical viewing angle.
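The angle computation can be sketched directly. Note the closed form is a pinhole-model reconstruction (the patent's own expression is an unrendered image), and the function name is ours:

```python
import math

def pan_tilt_angles(u, v, Mw, Mh, theta1, theta2):
    """Pan angle alpha and tilt angle beta that bring pixel (u, v) to
    the image centre under a pinhole model.  theta1 / theta2 are half
    the horizontal / vertical field of view, in radians."""
    alpha = math.atan((2.0 * u - Mw) / Mw * math.tan(theta1))
    beta = math.atan((2.0 * v - Mh) / Mh * math.tan(theta2))
    return alpha, beta

# a centred target needs no rotation; a target at the right edge needs
# a pan equal to the half horizontal field of view
a0, b0 = pan_tilt_angles(960, 540, 1920, 1080,
                         math.radians(30), math.radians(20))
a1, _ = pan_tilt_angles(1920, 540, 1920, 1080,
                        math.radians(30), math.radians(20))
```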

Preferably, in step 8, when zooming is required, the lens zoom multiple is computed as

$$Mul=\frac{S_e}{S_p}$$

where S_e is the ideal size of the monitored target in the image and S_p is the actual target size detected by the tracking algorithm.

The beneficial effects of the present invention are: (1) the invention adds a separate scale-discriminating classifier that adapts to scale changes, and combines it with automatic zooming so that the camera always captures a clear image of the target, with high efficiency and strong practicality; (2) an automatic target tracking system based on a PTZ camera is proposed, combining the improved KCF algorithm with the proposed PTZ control algorithm to achieve precise localization and tracking of the target, making the surveillance system more scientific and automated.

Description of Drawings

Fig. 1 is a flow chart of the method of the present invention.

Fig. 2 is a schematic diagram of the tracking process in frame t.

Fig. 3 is a diagram of the PTZ camera target tracking system of the present invention.

Fig. 4 is a schematic diagram of the relative position of the moving target and the center of the monitoring image.

Figs. 5(a) and 5(b) show tracking results of the original KCF algorithm.

Figs. 6(a), 6(b), and 6(c) show tracking results of the present invention.

Detailed Description

As shown in Fig. 1, a target tracking method based on scale adaptation comprises the following steps:

Step 1: From the initial target box selected by the user (the rectangular box in Fig. 2 is the tracked target), obtain the target's initial position and size;

Step 2: Extract HOG and Lab feature information from the padding region using 4×4 cells, weighting all features by cell position. Use the properties of the circulant matrix to generate additional samples; the base sample is the positive sample and all others are negative samples.

The target box is enlarged by a factor of 2.5 to form the padding window. Padding prevents the target from being fragmented by the cyclic shifts, and the enlarged window includes background information in the sample that must also be learned.

Step 3: Train the target classifier f(x) = ωᵀx by ridge regression. The goal of training is to minimize the squared error between samples x_i and regression targets y_i; using the fact that a circulant matrix is diagonalized in Fourier space, training and updating are carried out in the frequency domain.

Step 4: Use the trained filter to detect the newly input frame, and determine the target location from the maximum of the response; the confidence map in Fig. 2 visualizes the response during tracking.

Step 5: Taking the current position as the center, build a target scale pyramid to extract training samples. Let the training samples be x_1, x_2, …, x_n with corresponding outputs y_1, y_2, …, y_n. The optimal scale filter h is obtained by ridge regression:

$$h=\arg\min_{h}\sum_{i=1}^{n}\lVert h\ast x_i-y_i\rVert^{2}+\lambda\lVert h\rVert^{2}$$

where ∗ denotes convolution and λ is a regularization parameter that prevents overfitting.

The candidate scale parameters of the target samples used for scale evaluation are selected as

$$a^{n}W\times a^{n}R,\qquad n\in\Bigl\{-\Bigl\lfloor\tfrac{S-1}{2}\Bigr\rfloor,\dots,\Bigl\lfloor\tfrac{S-1}{2}\Bigr\rfloor\Bigr\}$$

where a is the scale factor, S is the number of candidate scales, and W × R is the size of the tracked object.

Step 6: The maximum response of the final scale filter gives the optimal scale of the candidate target. The scale classifier's coefficient matrix A and appearance model M are updated with learning rate η:

$$M_t=(1-\eta)\,M_{t-1}+\eta\,M_t'$$

$$A_t=(1-\eta)\,A_{t-1}+\eta\,A_t'$$

where M_{t−1} and A_{t−1} are the appearance model and coefficient matrix of the scale classifier from the previous frame, and M_t′ and A_t′ are the appearance model and coefficient matrix computed in the current frame.

Step 7: Referring to Fig. 4, maximum and minimum logical limits m_p, M_p and m_t, M_t are defined for horizontal and vertical motion respectively. The tracking algorithm gives the actual target position (u, v); when the target's position in the monitoring image exceeds the configured limit box, the system starts the pan/tilt rotation procedure. Let T_u be the horizontal distance threshold and T_v the vertical distance threshold. The rotation procedure is triggered as follows:

(1) If |u − M_w/2| ≤ T_u and |v − M_h/2| ≤ T_v, the target has not exceeded the logical limits and is in the central region; no rotation is needed;

(2) If u < M_w/2 − T_u, the target has left the central region toward the left edge of the image: rotate the pan/tilt head horizontally to the left by angle α;

(3) If u > M_w/2 + T_u, the target is toward the right edge: rotate horizontally to the right by angle α;

(4) If v < M_h/2 − T_v, the target is toward the upper edge: rotate vertically upward by angle β;

(5) If v > M_h/2 + T_v, the target is toward the lower edge: rotate vertically downward by angle β;

(6) If the target exceeds the limits in both the horizontal and vertical directions, rotate by angle α horizontally and by angle β vertically.
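Rules (1)-(6) above collapse into one decision function. The sketch below is an interpretation: the thresholds T_u, T_v are parameters (their concrete values are unrendered in the source), and the labels returned are ours.

```python
def pan_tilt_command(u, v, Mw, Mh, Tu, Tv):
    """Rules (1)-(6) as one decision.  Returns (pan, tilt) with each
    element 'none', 'left'/'right' or 'up'/'down'.  Image coordinates
    grow rightward and downward."""
    cx, cy = Mw / 2.0, Mh / 2.0
    pan = 'left' if u < cx - Tu else 'right' if u > cx + Tu else 'none'
    tilt = 'up' if v < cy - Tv else 'down' if v > cy + Tv else 'none'
    return pan, tilt   # ('none', 'none') is rule (1): no rotation

center = pan_tilt_command(960, 540, 1920, 1080, 400, 300)
corner = pan_tilt_command(100, 100, 1920, 1080, 400, 300)
```

Rule (6) falls out naturally: when both coordinates exceed their thresholds, both components of the command are non-'none' and the head rotates by α horizontally and β vertically.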

Step 8: Camera zoom control is based on the difference between the target's actual height S_p from the improved KCF algorithm and its ideal height S_e, which determines the target's zoom ratio. A threshold S_tv is set, and the lens zoom procedure is triggered as follows:

(1) If |S_e − S_p| < S_tv, the target's current size meets the requirement and no focal-length adjustment is needed;

(2) If |S_e − S_p| > S_tv and S_p < S_e, the target is too small in the monitoring image and must be zoomed in;

(3) If |S_e − S_p| > S_tv and S_p > S_e, the target is too large in the monitoring image and must be zoomed out.

When zooming is required, the lens zoom multiple is calculated as

$$Mul=\frac{S_e}{S_p}$$

If the magnification after adjustment is Z, then

$$Z=Mul\cdot z.$$

Figs. 5(a)-(b) show the tracking results of the original KCF algorithm. With the traditional KCF algorithm the target window is fixed, so as the target moves away from the camera, much irrelevant background information is treated as target information during feature extraction, and the target's position and size cannot be reflected correctly. As can be seen in Fig. 5(b), the tracking box drifts from the pedestrian's true position as the distance between the pedestrian and the camera grows, so the camera cannot be controlled to rotate and zoom correctly. As shown in Figs. 6(a)-(c), the improved KCF algorithm introduces a scale-adaptation strategy: during tracking, the size of the tracking window changes with the size of the target. Compared with the original KCF algorithm it reflects the target's position and size well, so the PTZ camera can be controlled correctly; when the target in Fig. 6(b) is about to leave the monitoring field of view, the camera is made to zoom, and after zooming the tracking box still encloses the target well.

Claims (5)

1. A target tracking method based on scale adaptation, characterized by comprising the following steps:

Step 1. Input the first frame, and initialize the target position pos_1 and scale scale_1;

Step 2. Extract HOG and Lab features from the search region, and generate additional samples as a circulant matrix, with the base sample as the positive sample and all others as negative samples;

Step 3. Train the target classifier by a ridge regression classifier, transformed into the Fourier domain to train and update the correlation filter;

Step 4. Use the trained filter to predict the target location, taking the response peak as the tracked position pos_2;

Step 5. Construct an independent scale-discriminating classifier, sampling the target scale to obtain a set of scale spaces;

Step 6. Compute the response of the scale filter, take the maximum response as the best match, and determine and update the current scale scale_2;

Step 7. Obtain the center coordinates of the target bounding box from the tracking algorithm, and decide whether to rotate the pan/tilt head by comparing this point with the image center, so that the monitored target always remains in the central region of the monitoring image;

Step 8. Output the real-time target scale from the improved KCF algorithm, and use the difference between the target's actual height and its ideal height to determine the zoom ratio for camera zoom control.

2. The scale-adaptive target tracking method of claim 1, characterized in that in step 3 the target classifier f(x) = wᵀx is trained by ridge regression, transformed into the Fourier domain to train and update the correlation filter, with training objective

$$\min_{w}\sum_{i}\bigl(f(x_i)-y_i\bigr)^2+\lambda\lVert w\rVert^2$$

where (x_i, y_i) are training sample / regression target pairs, λ is a regularization parameter that prevents overfitting, and w is the weight vector.

3. The scale-adaptive target tracking method of claim 1, characterized in that in step 5 an independent scale-discriminating classifier is constructed and a set of scale spaces is obtained by sampling the target scale, specifically: first estimate the target location with the position filter; then, taking the current position as the center, build a target scale pyramid to extract training samples; let the training samples be x_1, x_2, …, x_n with corresponding outputs y_i, and obtain the optimal scale filter h by ridge regression:

$$h=\arg\min_{h}\sum_{i=1}^{n}\lVert h\ast x_i-y_i\rVert^{2}+\lambda\lVert h\rVert^{2}$$

where ∗ denotes convolution and λ is a regularization parameter that controls overfitting;

the candidate scale parameters of the target samples used for scale evaluation are selected as

$$a^{n}W\times a^{n}R,\qquad n\in\Bigl\{-\Bigl\lfloor\tfrac{S-1}{2}\Bigr\rfloor,\dots,\Bigl\lfloor\tfrac{S-1}{2}\Bigr\rfloor\Bigr\}$$

where a is the scale factor, S is the number of candidate scales, and W × R is the size of the tracked object;

the maximum response of the final scale filter gives the optimal scale of the candidate target, and the scale classifier's coefficient matrix A and appearance model M are updated with learning rate η:

$$M_t=(1-\eta)\,M_{t-1}+\eta\,M_t'$$

$$A_t=(1-\eta)\,A_{t-1}+\eta\,A_t'$$

where M_{t−1} and A_{t−1} are the appearance model and coefficient matrix of the scale classifier from the previous frame, and M_t′ and A_t′ are the appearance model and coefficient matrix computed in the current frame.

4. The scale-adaptive target tracking method of claim 1, characterized in that in step 7 the tracking algorithm gives the center coordinates (u, v) of the target bounding box and the center of the monitoring image is known to be (M_w/2, M_h/2); whether to rotate the pan/tilt head is decided by comparing the two points, so that the monitored target always remains in the central region of the monitoring image;

the horizontal angle α and vertical angle β through which the target has moved are

$$\alpha=\arctan\Bigl(\frac{2u-M_w}{M_w}\tan\theta_1\Bigr),\qquad \beta=\arctan\Bigl(\frac{2v-M_h}{M_h}\tan\theta_2\Bigr)$$

where u and v are the horizontal and vertical coordinates of the moving target, M_w × M_h is the resolution of the monitoring video, the angle θ₁ is half the camera's maximum horizontal viewing angle, and θ₂ is half the camera's maximum vertical viewing angle.

5. The scale-adaptive target tracking method of claim 1, characterized in that in step 8, when zooming is required, the lens zoom multiple is computed as

$$Mul=\frac{S_e}{S_p}$$

where S_e is the ideal size of the monitored target in the image and S_p is the actual target size detected by the tracking algorithm.
Priority Applications (1)

Application CN202210628171.3A, filed 2022-06-06 (priority date 2022-06-06): CN115240128A, "A Target Tracking Method Based on Scale Adaptation"

Publications (1)

CN115240128A, published 2022-10-25

Family ID: 83668856

Family Applications (1)

CN202210628171.3A, filed 2022-06-06: CN115240128A (pending)

Country Status (1)

CN: CN115240128A

Citations (5)

* Cited by examiner, † Cited by third party

KR101780929B1 *, priority 2017-01-10, published 2017-09-26, (주)예원이엔씨 — Image surveillance system for moving object
CN108510521A *, priority 2018-02-27, published 2018-09-07, Nanjing University of Posts and Telecommunications — A multi-feature-fusion scale-adaptive target tracking method
CN109685073A *, priority 2018-12-28, published 2019-04-26, Nanjing Institute of Technology — A scale-adaptive target tracking algorithm based on kernelized correlation filtering
CN109949340A *, priority 2019-03-04, published 2019-06-28, Hubei Sanjiang Aerospace Wanfeng Technology Development Co., Ltd. — Target scale adaptive tracking method based on OpenCV
CN110764537A *, priority 2019-12-25, published 2020-02-07, AVIC Jincheng Unmanned Systems Co., Ltd. — Automatic pan/tilt locking system and method based on motion estimation and visual tracking



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
