








Technical Field
The invention relates to the field of target tracking, and in particular to a scale-adaptive target tracking method.
Background
In recent years, with the continuous development of technology, intelligent surveillance has attracted broad attention. Video surveillance systems use pan-tilt-zoom (PTZ) cameras, which allow flexible attitude adjustment and zooming. At present these cameras are mainly used for perimeter surveillance, where security personnel must monitor intruders from a distance. When monitoring with a PTZ camera, the operator must track the target manually through an RS-485 keyboard or a client application. The suspicious target may be a person, a vehicle, or some other moving object, and such targets often move quickly: the operator must not only adjust the pan-tilt speed to match the target's speed as closely as possible, but also zoom manually to keep the target as large as possible in the frame. Such manual operation usually lags behind the target's motion, degrading real-time tracking, and watching a screen for long periods invites human error, so many important details are missed. It is therefore important to design a scale-adaptive target tracking method.
At present, the KCF algorithm is widely used in practice because of its excellent tracking performance. However, since its tracking window is fixed in size, it fails to produce the expected result in subsequent frames when the target undergoes a large scale change. Li et al. proposed the IBCCF algorithm, which combines a two-dimensional filter with one-dimensional boundary filters to handle scale effectively, but its computational redundancy reduces tracking speed. The SAMF algorithm exploits the complementarity of color and gradient features and handles scale change by introducing a scale pool, achieving scale-adaptive tracking over a small range. Because it must evaluate 7 scaled versions of the target in every frame to find the best scale, computing the 7 responses per frame alone costs about 7 times as much as KCF, so SAMF's computational cost is high.
Summary of the Invention
The technical problem addressed by the present invention is to provide a scale-adaptive target tracking method that effectively solves the low degree of automation of tracking systems.
To solve the above technical problem, the present invention provides a scale-adaptive target tracking method comprising the following steps:
Step 1: Input the first frame and initialize the target position pos1 and scale scale1.
Step 2: Extract HOG and Lab features from the search region, and generate additional samples in circulant-matrix form, where the base sample is the positive sample and all cyclically shifted samples are negatives.
Step 3: Train the target classifier by ridge regression, transforming into the Fourier domain to train and update the correlation filter.
Step 4: Use the trained filter to predict the target location; the peak of the response gives the tracked position pos2.
Step 5: Build an independent scale-discrimination classifier and obtain a set of scale samples by sampling around the target scale.
Step 6: Compute the responses of the scale filter, take the maximum response as the best match, and determine and update the current scale scale2.
Step 7: Obtain the center coordinates of the target bounding box from the tracker, and compare them with the center of the monitoring frame to decide whether to rotate the pan-tilt head, so that the target stays in the central region of the frame.
Step 8: Output the real-time target scale from the improved KCF algorithm, and use the difference between the target's actual height and its ideal height to determine the zoom ratio for camera zoom control.
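The eight steps above amount to one per-frame loop. A minimal, implementation-agnostic sketch of that loop (all function names here are illustrative, not from the patent; the concrete filters are injected as callables):

```python
def track_frame(frame, state, locate, best_scale, pan_tilt, zoom):
    # One iteration of steps 4-8. The callables are injected so the
    # skeleton stays independent of any particular filter implementation.
    pos = locate(frame, state)             # step 4: response peak -> pos2
    scale = best_scale(frame, pos, state)  # steps 5-6: scale-filter peak -> scale2
    cmd = pan_tilt(pos)                    # step 7: pan-tilt decision
    z = zoom(scale)                        # step 8: zoom factor
    return pos, scale, cmd, z
```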
Preferably, in step 3, the target classifier f(x) = wᵀx is trained by ridge regression and transformed into the Fourier domain for training and updating the correlation filter. The ridge-regression training objective is:

min_w Σᵢ (f(xᵢ) − yᵢ)² + λ‖w‖²

where (xᵢ, yᵢ) are pairs of training samples and regression targets, λ is the regularization parameter that prevents overfitting, and w is the training weight vector.
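Because the training samples are the cyclic shifts of one base sample, the circulant data matrix diagonalizes under the DFT and the ridge solution has a closed form per frequency bin. A one-dimensional sketch with NumPy (linear kernel only; function names are ours):

```python
import numpy as np

def train_ridge_fourier(x, y, lam=1e-4):
    # Ridge regression over all cyclic shifts of the base sample x, solved
    # per frequency bin: w_hat = conj(x_hat) * y_hat / (|x_hat|^2 + lam)
    x_hat = np.fft.fft(x)
    return np.conj(x_hat) * np.fft.fft(y) / (x_hat * np.conj(x_hat) + lam)

def detect(w_hat, z):
    # Responses of the trained filter over all cyclic shifts of the search
    # sample z; the argmax of the response locates the target translation.
    return np.real(np.fft.ifft(np.fft.fft(z) * w_hat))
```

With y a Gaussian peaked at shift 0, detecting on a cyclically shifted copy of x moves the response peak by the same shift, which is how the translation in step 4 is read off.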
Preferably, in step 5, an independent scale-discrimination classifier is constructed, and a set of scale samples is obtained by sampling around the target scale. Specifically: first estimate the target's position with the position filter, then, taking the current position as the center, build a target scale pyramid to extract training samples. Let the training samples be x₁, x₂, ..., xₙ with corresponding outputs yᵢ; the optimal scale filter h is solved by ridge regression:

min_h Σᵢ ‖h ⋆ xᵢ − yᵢ‖² + λ‖h‖²

where ⋆ denotes convolution and λ is the regularization parameter that prevents overfitting.
The candidate scale parameters of the target sample used for scale evaluation are selected as:

aⁿW × aⁿR

where a is the scale factor, W × R is the size of the tracked object, and n indexes the candidate scale levels.
The scale at which the final scale filter attains its maximum response is the optimal scale of the candidate target. The scale classifier's learned coefficients and appearance model are updated with learning rate η:

x̂ₜ = (1 − η) x̂ₜ₋₁ + η xₜ
α̂ₜ = (1 − η) α̂ₜ₋₁ + η αₜ

where x̂ₜ₋₁ and α̂ₜ₋₁ are the accumulated appearance model and coefficient matrix of the scale classifier, and xₜ and αₜ are the appearance model and coefficient matrix computed in the current frame.
Preferably, in step 7, the center coordinates (u, v) of the target rectangle are obtained by the tracking algorithm, and the center of the monitoring frame is known to be (M_w/2, M_h/2). Whether to rotate the pan-tilt head is decided by comparing the two coordinates, so that the monitored target stays in the central region of the frame;
The horizontal angle α and vertical angle β through which the target has moved are:

α = arctan( ((2u − M_w)/M_w) · tan θ₁ )
β = arctan( ((2v − M_h)/M_h) · tan θ₂ )

where u and v are the horizontal and vertical coordinates of the moving target, M_w × M_h is the resolution of the monitoring frame, θ₁ is half the camera's maximum horizontal field of view, and θ₂ is half the camera's maximum vertical field of view.
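The angle formulas above are our pinhole-model reconstruction (the original formula images were lost from the source). They can be computed directly:

```python
import math

def pan_tilt_angles(u, v, M_w, M_h, theta1, theta2):
    # Angles (radians) from the image-centre offset under a pinhole model;
    # theta1 / theta2 are half the horizontal / vertical fields of view.
    alpha = math.atan((2 * u - M_w) / M_w * math.tan(theta1))
    beta = math.atan((2 * v - M_h) / M_h * math.tan(theta2))
    return alpha, beta
```

At the frame center both angles are zero; at the right edge (u = M_w) the pan angle equals θ₁, as expected.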
Preferably, in step 8, when zooming is required, the lens zoom multiplier is computed as

Mul = S_e / S_p

where S_e is the ideal size of the monitored target in the frame and S_p is the actual size of the target detected by the current tracking algorithm.
The beneficial effects of the present invention are: (1) the invention adds a separate scale-discrimination classifier that adapts to scale changes and, combined with automatic zooming, lets the camera always capture a clear image of the target, giving high efficiency and strong practicality; (2) it proposes an automatic target tracking system based on a PTZ camera that combines the improved KCF algorithm with the proposed PTZ control algorithm to achieve precise localization and tracking, making the surveillance system more scientific and automated.
Brief Description of the Drawings
Fig. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the tracking process at frame t.
Fig. 3 is a diagram of the PTZ-camera target tracking system of the present invention.
Fig. 4 is a schematic diagram of the relative positions of the monitoring-frame center and the moving target.
Fig. 5(a) shows a target tracking result of the original KCF algorithm.
Fig. 5(b) shows a target tracking result of the original KCF algorithm.
Fig. 6(a) shows a target tracking result of the present invention.
Fig. 6(b) shows a target tracking result of the present invention.
Fig. 6(c) shows a target tracking result of the present invention.
Detailed Description of the Embodiments
As shown in Fig. 1, a scale-adaptive target tracking method comprises the following steps:
Step 1: From the initial bounding box selected by the user (the rectangular region in Fig. 2 is the tracked target), obtain the target's initial position and size.
Step 2: Extract HOG and Lab feature information from the padding region using 4×4 cells, and weight all features according to cell position. Use the properties of the circulant matrix to generate more samples: the base sample is the positive sample and all shifted samples are negatives.
The bounding box is enlarged 2.5 times to form the padding window. Padding prevents the target from being fragmented by the cyclic shifts, and the enlarged window includes the background context in the sample that the filter must specifically learn.
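The cyclic-shift training set of step 2 can be made explicit for a 1-D signal (KCF never materializes this matrix; the function below is for illustration only):

```python
import numpy as np

def circulant_samples(base):
    # All cyclic shifts of the base sample: row 0 (the base) is the positive
    # sample, every shifted row acts as a negative sample.
    n = base.shape[0]
    return np.stack([np.roll(base, s) for s in range(n)])
```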
Step 3: Train the target classifier f(x) = wᵀx by ridge regression; the goal of training is to minimize the squared error between the samples xᵢ and the regression targets yᵢ. Using the property that circulant matrices are diagonalized in the Fourier domain, training and updating are carried out in the frequency domain.
Step 4: Detect each newly input frame with the trained filter, and determine the target position from the maximum of the response. The confidence map in Fig. 2 visualizes the response of the present invention during tracking.
Step 5: Taking the current position as the center, build a target scale pyramid to extract training samples. Let the training samples be x₁, x₂, ..., xₙ with corresponding outputs y₁, y₂, ..., yₙ; the optimal scale filter h is solved by ridge regression:

min_h Σᵢ ‖h ⋆ xᵢ − yᵢ‖² + λ‖h‖²

where ⋆ denotes convolution and λ is the regularization parameter that prevents overfitting.
The candidate scale parameters of the target sample used for scale evaluation are selected as:

aⁿW × aⁿR

where a is the scale factor, W × R is the size of the tracked object, and n indexes the candidate scale levels.
Step 6: The scale at which the final scale filter's response is maximal is the optimal scale of the candidate target. The scale classifier's learned coefficients and appearance model are updated with learning rate η:

x̂ₜ = (1 − η) x̂ₜ₋₁ + η xₜ
α̂ₜ = (1 − η) α̂ₜ₋₁ + η αₜ

where x̂ₜ₋₁ and α̂ₜ₋₁ are the accumulated appearance model and coefficient matrix of the scale classifier, and xₜ and αₜ are the appearance model and coefficient matrix computed in the current frame.
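A sketch of the scale pool and the learning-rate update (the scale factor a = 1.02 and pool size S = 33 are common defaults for this style of scale filter, not values given by the patent):

```python
import numpy as np

def scale_pyramid_sizes(W, R, a=1.02, S=33):
    # Candidate sizes a^n * (W x R) with the exponent n centred on zero.
    ns = np.arange(S) - S // 2
    return [(W * a ** n, R * a ** n) for n in ns]

def update_model(prev, cur, eta=0.025):
    # Linear-interpolation update of the scale classifier's appearance
    # model / coefficient matrix with learning rate eta.
    return (1.0 - eta) * prev + eta * cur
```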
Step 7: Referring to Fig. 4, maximum and minimum logical limits m_p, M_p, m_t and M_t are defined for horizontal and vertical motion respectively. The tracking algorithm gives the actual target position (u, v); when the target's position in the frame moves outside the limit box, the system starts the pan-tilt rotation routine. Distance thresholds are set for the horizontal and vertical directions, and the rotation routine is triggered according to the following rules:
(1) If m_p ≤ u ≤ M_p and m_t ≤ v ≤ M_t, the target has not exceeded the logical limits and is in the central region; the rotation routine need not be started;
(2) If u < m_p and m_t ≤ v ≤ M_t, the target has left the central region toward the left edge of the frame; pan the camera left by angle α;
(3) If u > M_p and m_t ≤ v ≤ M_t, the target has left the central region toward the right edge of the frame; pan the camera right by angle α;
(4) If v < m_t and m_p ≤ u ≤ M_p, the target has left the central region toward the upper edge of the frame; tilt the camera up by angle β;
(5) If v > M_t and m_p ≤ u ≤ M_p, the target has left the central region toward the lower edge of the frame; tilt the camera down by angle β;
(6) If the target exceeds the limits in both the horizontal and the vertical direction, pan by angle α and tilt by angle β simultaneously.
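The six rules reduce to two independent comparisons against the limit box; a small sketch (names are ours):

```python
def pan_tilt_command(u, v, m_p, M_p, m_t, M_t):
    # Map the target position against the logical limit box to a pan/tilt
    # command; None means the coordinate is inside its limits (rule 1).
    pan = "left" if u < m_p else "right" if u > M_p else None
    tilt = "up" if v < m_t else "down" if v > M_t else None
    return pan, tilt
```

Rule (6) falls out naturally: when both coordinates exceed their limits, both command components are non-None.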
Step 8: The camera's zoom control is based on the difference between the target's actual height S_p, given by the improved KCF algorithm, and its ideal height S_e, which determines the zoom ratio. A threshold S_tv is set, and the lens zoom routine is triggered according to the following rules:
(1) If |S_e − S_p| < S_tv, the current size of the target meets the expected requirement and no focal-length adjustment is needed;
(2) If |S_e − S_p| > S_tv and S_p < S_e, the target appears too small in the frame and must be zoomed in;
(3) If |S_e − S_p| > S_tv and S_p > S_e, the target appears too large in the frame and must be zoomed out.
When zooming is required, the lens zoom multiplier is calculated as

Mul = S_e / S_p

If the magnification before adjustment is z, the magnification Z after adjustment is:

Z = Mul · z.
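The zoom rules and the multiplier can be sketched together (Mul = S_e / S_p is our reconstruction of the multiplier formula lost from the source, and the threshold value below is illustrative):

```python
def zoom_command(S_e, S_p, z, S_tv=0.05):
    # Keep the current magnification z while the size error is inside the
    # threshold; otherwise rescale so the on-screen size approaches S_e.
    if abs(S_e - S_p) < S_tv:
        return z
    mul = S_e / S_p          # reconstructed multiplier Mul
    return mul * z           # Z = Mul * z
```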
Figs. 5(a)-(b) show the tracking results of the original KCF algorithm. With the traditional KCF algorithm the target window is fixed, so as the target moves away from the camera a large amount of irrelevant background is treated as target information during feature extraction, and the target's position and size can no longer be reflected correctly. As Fig. 5(b) shows, the tracking box drifts from the pedestrian's true position, and as the pedestrian moves farther from the camera the camera can no longer be driven to rotate and zoom correctly. As Figs. 6(a)-(c) show, the improved KCF algorithm introduces a scale-adaptation strategy: during tracking, the size of the tracking window follows the changes in target size. Compared with the original KCF algorithm it reflects the target's position and size well, so the PTZ camera can be controlled correctly; when the target in Fig. 6(b) is about to leave the field of view, the camera is made to zoom, and after zooming the tracking box still frames the target well.
Application CN202210628171.3, filed 2022-06-06, published as CN115240128A: A Target Tracking Method Based on Scale Adaptation.