CN108765455A - Target stable tracking method based on TLD algorithm - Google Patents

Target stable tracking method based on TLD algorithm
Download PDF

Info

Publication number
CN108765455A
Authority
CN
China
Prior art keywords
tracking
module
target
detection module
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810506760.8A
Other languages
Chinese (zh)
Other versions
CN108765455B (en)
Inventor
吴润泽
魏宇星
徐智勇
张建林
王全宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Optics and Electronics of CAS
Original Assignee
Institute of Optics and Electronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Optics and Electronics of CAS
Priority to CN201810506760.8A (patent CN108765455B/en)
Publication of CN108765455A
Application granted
Publication of CN108765455B
Active (current legal status)
Anticipated expiration

Abstract

Translated from Chinese

The invention discloses a stable target tracking method based on the TLD algorithm, comprising the following steps: (1) perform initialization training on the first frame of the video to be tracked; (2) during tracking, the detection module and the tracking module work independently: the detection module scans the current frame to obtain image patches, passes them in turn through a variance classifier, an ensemble classifier and a nearest-neighbor classifier, and then clusters the patches; the tracking module uses the median optical flow method to predict the target position in the current frame from the previous frame's tracking result and computes the Euclidean distance D between the center points of the target positions in the two frames; if D is greater than an adaptive threshold, the current frame is judged to be a tracking failure and the tracking module outputs no result; (3) the integration module outputs the tracking result; (4) new positive and negative samples are generated from the updated target position to update the detection module; (5) steps (2)-(3) are repeated until tracking ends. The method improves tracking stability to a certain extent.

Description

Translated from Chinese
A Stable Target Tracking Method Based on the TLD Algorithm

Technical Field

The present invention relates to a stable target tracking method based on the TLD algorithm, characterized by an adaptive threshold setting for the tracking-failure detection mechanism. It is applicable to computer vision, object detection, object tracking, and the like, and belongs to the field of object tracking within computer vision.

Background

The TLD tracking algorithm is a long-term single-object tracking algorithm proposed by Zdenek Kalal. It consists of three modules: a tracking module, a detection module, and a learning module. A pure tracking algorithm can hardly correct tracking drift and keeps accumulating error, and once the target disappears from the field of view, tracking inevitably fails. A pure detection algorithm requires a large number of samples for offline supervised training and therefore cannot be applied to tracking unknown targets; moreover, because the target model is built offline, tracking easily fails once the target changes substantially. TLD combines a detection algorithm with a tracking algorithm and updates the target model in real time through learning.

For object tracking algorithms, it frequently happens that the target is completely occluded, or leaves the field of view and later reappears. Whether a tracker can correctly recognize a tracking failure caused by complete occlusion or by the target leaving the field of view is therefore particularly important: if it cannot, it will keep updating the target model even when the target is no longer in the video, and such a contaminated model cannot represent the target effectively. The TLD algorithm therefore adds a failure-detection mechanism to the tracking module to judge whether the target has disappeared or is completely occluded. For frames judged as tracking failures, the learning module does not update the model, which keeps the target model from being contaminated. In the TLD algorithm, however, the threshold of the failure-detection mechanism is fixed and can hardly adapt to all tracking scenarios. In particular, when the target moves quickly, the threshold is smaller than the target's motion, so the failure-detection mechanism wrongly declares a tracking failure in the current frame.

Summary of the Invention

The technical problem to be solved by the present invention is the tracking failure caused by misjudgment of the tracker's failure-detection mechanism in the TLD algorithm when the target moves quickly. An adaptive tracking-failure detection mechanism is proposed; by adjusting the threshold adaptively, the misjudgment problem of the original failure-detection mechanism is overcome. Experiments on public video datasets show that the method improves tracking stability to a certain extent.

The technical solution adopted by the present invention is a stable target tracking method based on the TLD algorithm. In the first frame of the video to be tracked, the user specifies a tracking window, from which positive and negative samples are formed to initialize and train the detection module. During tracking, the detection module and the tracking module work independently: the detection module scans the current frame to obtain image patches and passes them in turn through a variance classifier, an ensemble classifier, and a nearest-neighbor classifier; the tracking module uses the median optical flow method to predict the target position in the current frame from the previous frame's tracking result. The integration module combines the outputs of the detection module and the tracking module to produce the tracking result, and new positive and negative samples are generated from the updated target position to update the detection module.
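For illustration, a minimal Python sketch of the per-frame loop described above; the module objects and their methods (detector.detect, tracker.track, integrator.fuse, learner.update) are hypothetical placeholders rather than the patent's implementation:

```python
# Hypothetical sketch of the per-frame TLD loop described above.
# Detector, Tracker, Integrator and Learner stand in for the patent's
# detection, tracking, integration and learning modules.

def run_tld(video_frames, init_box, detector, tracker, integrator, learner):
    learner.initialize(video_frames[0], init_box)   # train the detector on the first frame
    prev_box = init_box
    for frame in video_frames[1:]:
        det_boxes = detector.detect(frame)          # variance -> ensemble -> nearest-neighbor
        trk_box = tracker.track(frame, prev_box)    # median optical flow; None on failure
        box = integrator.fuse(det_boxes, trk_box)   # combined tracking result
        if box is not None:
            learner.update(frame, box)              # new positive/negative samples
            prev_box = box
        yield box
```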

In the first frame of the video to be tracked, the user specifies a tracking window. Several of the scanning-grid windows closest to the specified window are selected and subjected to a series of affine transformations to form the initial positive samples, while the initial negative samples are obtained by random sampling far from the specified window. These initial positive and negative samples are used to initialize and train the detection module.
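A sketch of this initialization step, assuming OpenCV-style affine warps; the sampling counts, warp ranges, and the use of the farthest windows in place of randomly chosen far-away windows are illustrative assumptions:

```python
import cv2
import numpy as np

def initial_samples(frame, grid_windows, user_box, n_pos=10, n_warps=20, n_neg=100):
    """Hypothetical helper: grid_windows and user_box are (x, y, w, h) tuples."""
    ux, uy, uw, uh = user_box
    dist = lambda g: (g[0] - ux) ** 2 + (g[1] - uy) ** 2
    positives, negatives = [], []
    # positives: grid windows closest to the user-specified box, each warped several times
    for gx, gy, gw, gh in sorted(grid_windows, key=dist)[:n_pos]:
        patch = frame[gy:gy + gh, gx:gx + gw]
        for _ in range(n_warps):
            M = cv2.getRotationMatrix2D((gw / 2, gh / 2),
                                        np.random.uniform(-10, 10),     # small random rotation
                                        np.random.uniform(0.95, 1.05))  # small random scale change
            positives.append(cv2.warpAffine(patch, M, (gw, gh)))
    # negatives: windows far from the user-specified box (here simply the farthest ones;
    # the patent describes a random selection far from the window)
    for gx, gy, gw, gh in sorted(grid_windows, key=dist, reverse=True)[:n_neg]:
        negatives.append(frame[gy:gy + gh, gx:gx + gw])
    return positives, negatives
```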

During tracking, after the detection module grid-scans the current frame to obtain image patches, it first computes the variance of each patch, and patches whose variance is below a certain threshold are accepted and passed to the ensemble classifier. In the ensemble classifier, each patch is scored by the average posterior probability obtained from pixel comparisons performed by several different base classifiers; patches whose average posterior exceeds a certain threshold are accepted and passed to the nearest-neighbor classifier. Patches entering the nearest-neighbor classifier are zero-mean normalized in gray level, and a normalized cross-correlation similarity with the patches in the target model is computed; if the similarity exceeds a certain threshold, the current patch is judged to be the target region, otherwise it is judged to be background.
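A compact Python sketch of the three-stage cascade; the thresholds, the fern objects with their posterior method, and the target-model patch list are hypothetical placeholders, and the variance stage here keeps high-variance patches as in the reference TLD implementation:

```python
import numpy as np

def classify_patch(patch, ferns, target_model,
                   var_threshold, posterior_threshold=0.5, ncc_threshold=0.6):
    """Return True if the patch is accepted as the target region by the cascade."""
    # Stage 1: variance filter (the reference TLD keeps patches with sufficient variance)
    if np.var(patch) < var_threshold:
        return False
    # Stage 2: ensemble classifier -- average posterior over several base classifiers,
    # each of which scores the patch from a set of pixel comparisons
    posterior = np.mean([fern.posterior(patch) for fern in ferns])
    if posterior <= posterior_threshold:
        return False
    # Stage 3: nearest-neighbor classifier -- zero-mean patch compared with the
    # target-model patches via normalized cross-correlation
    p = (patch - patch.mean()).ravel()
    best = 0.0
    for model_patch in target_model:
        q = (model_patch - model_patch.mean()).ravel()
        ncc = float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12))
        best = max(best, ncc)
    return best > ncc_threshold
```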

During tracking, information from the N frames preceding the current frame is used to set the threshold of the original tracking-failure detection mechanism adaptively. In the initialization of the failure-detection mechanism, the failure-detection threshold for the first N frames of the video is set to a large value; that is, by default no tracking failure is assumed to occur in the first N frames of the video.

During tracking, the tracking module and the detection module run independently, and the adaptive failure-detection mechanism is used to detect tracking failures; finally, the detection result and the tracking result are fused to output the target tracking result. In each frame, new positive and negative samples are generated from the updated target position, thereby updating the target model and the detection module.

Compared with the prior art, the beneficial effect of the present invention is:

This method adaptively adjusts the threshold of the tracking-failure detection mechanism, so that under different target motions the tracker can correctly judge tracking failures caused by the target being completely occluded or leaving the field of view, and thus achieves more stable tracking.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the center-point coordinate calculation;

Figure 2 shows the change in the number of successfully tracked frames for different values of N;

Figure 3 compares the success rate (Pascal score) on the experimental datasets.

Detailed Description

The present invention is further described below in conjunction with the accompanying drawings and specific embodiments.

First, the distance between rectangular box 1 and rectangular box 2 is defined as:
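A LaTeX reconstruction of Eq. (1), assuming the plain Euclidean distance between the two center points described in the following paragraph:

```latex
D(B_1, B_2) = \sqrt{\left(x_1 - x_2\right)^2 + \left(y_1 - y_2\right)^2} \tag{1}
```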

where (x1, y1) and (x2, y2) are the center-point coordinates of rectangular box 1 and rectangular box 2, respectively; that is, the distance between two rectangular boxes is defined as the Euclidean distance between their center-point coordinates.

For the rectangular box shown in Figure 1, the center-point coordinates (x0, y0) are computed as:
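A reconstruction of Eq. (2), assuming the common convention that a box is described by its top-left corner (x, y), width w and height h (these symbol names are assumptions, since Figure 1 is not reproduced here):

```latex
x_0 = x + \frac{w}{2}, \qquad y_0 = y + \frac{h}{2} \tag{2}
```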

Based on the fact that the motion of the target should be continuous as long as it is not completely occluded and does not leave the field of view, the present invention considers that the distances between the rectangular boxes of the previous N frames' tracking results reflect the degree of motion of the target in the current frame i (i = N+1), so the threshold can be set adaptively based on these distances. To reduce the influence of tracker errors, the present invention sets the threshold for the current frame using the average of the distances between the rectangular boxes of the previous N frames' tracking results.

In video frame i (i > N), the tracking-failure detection threshold θfailure is defined as shown in Eq. (3):
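A sketch of Eq. (3) consistent with the preceding paragraph, where B_k denotes the tracking box of frame k; the exact indexing of the averaged distances is an assumption:

```latex
\theta_{\mathrm{failure}} = \frac{\alpha}{N-1} \sum_{k=i-N+1}^{i-1} D\!\left(B_{k-1}, B_{k}\right) \tag{3}
```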

where α is an adjustment coefficient, defined as:
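A piecewise form of Eq. (4) consistent with the description of α given below (initialized to 1 and raised to 1.2 after a tracking failure):

```latex
\alpha =
\begin{cases}
1.2, & \text{if a tracking failure was declared in the previous frame,}\\[2pt]
1,   & \text{otherwise.}
\end{cases} \tag{4}
```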

α is initialized to 1 and is described in detail later.

In video frames i (3 ≤ i ≤ N), only a few tracking results are available; to avoid errors caused by unstable tracking, the present invention sets the tracking-failure detection threshold θfailure in the manner shown in Eq. (5):

where θc = 10 is the default residual threshold of the TLD algorithm and α is the adjustment coefficient, initialized to 1.

In video frames i (1 ≤ i ≤ 2), there are no data to use as a reference for setting the threshold. Considering that in real video the motion of the target is continuous, the target will not be completely occluded or out of view by the second frame. Based on this fact, the present invention assumes that no tracking failure caused by complete occlusion or by the target leaving the field of view occurs in frames i (1 ≤ i ≤ 2). Therefore, the tracking-failure detection threshold is initialized according to Eq. (6):

θfailure = 10·θc    (6)

where θc = 10 is the default residual threshold of the TLD algorithm.

After a tracking failure occurs, the tracking module outputs no result, and the learning module suspends the update of the target model until the detection module succeeds in a global detection and resets the tracking module, at which point the tracking module restarts. Taking this into account, after a tracking failure the adaptive threshold of the present invention is set with the previous frame's threshold as the baseline and the adjustment coefficient α is set to 1.2; that is, the failure-detection threshold is increased appropriately to avoid misjudgment by the failure-detection mechanism when the target suddenly starts moving quickly.

As described above, the adaptive failure-detection threshold is set as:

where θc = 10 is the default residual threshold of the TLD algorithm and α ∈ {1, 1.2} is the adjustment coefficient.

Experiments were conducted on the Deer dataset with N ∈ {3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13} in order to find a reasonable value for the adaptive-threshold failure-detection mechanism. In principle N could be 1 or 2, i.e., the threshold could be set from the distance between the tracking-result boxes of only the previous one or two frames, but so little data is too susceptible to chance, so 1 and 2 were not included in the tested values of N. In addition, an overly large N would bring in too much influence from earlier motion, which is not necessarily beneficial, so values N > 13 were not tested.

Figure 2 shows the change in the number of successfully tracked frames for different values of N. As can be seen from Figure 2, as N increases, the number of successfully tracked frames rises and then stabilizes at about 65 for N ≥ 6, while the Deer dataset contains 71 frames in total. For N ≥ 6, the frames in which tracking failures repeatedly occur are frames 27, 28, 31 and 32, which are exactly the frames with the most violent target motion in the entire Deer dataset. Increasing the post-failure value of the adjustment coefficient α, i.e., raising the failure-detection threshold after a tracking failure, gives a slight improvement in the number of successfully tracked frames, but tracking failures still remain. This is an unavoidable failure caused by the definition of α itself: α was defined from the outset to take a larger value after a tracking failure in order to accommodate the possibly larger motion of the target in the current frame, and the test results confirm this.

Therefore, the present invention sets Eq. (7) using the smallest value of N for which the number of successfully tracked frames converges, i.e., the adaptive failure-detection threshold θfailure is:

where N = 6 and θc = 10.

Note that if the tracker is trained on video frames in which the target moves extremely slowly, θfailure is driven down to a very small value; when fast motion then suddenly occurs, several frames are needed before the adjustment coefficient α can pull θfailure back up to a reasonable level.

Therefore, the present invention adds the mechanism shown in Eq. (9) to the above method to enhance its robustness:

θfailure = max(θfailure, θc)    (9)

where θc = 10.
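Putting the pieces together, a minimal Python sketch of the adaptive failure-detection threshold described above; the function and variable names are hypothetical, the branch for frames 3 ≤ i ≤ N (Eq. (5)) is simplified to the default residual threshold as an assumption, and α is supplied by the caller as 1 normally or 1.2 after a failure:

```python
import math

THETA_C = 10.0   # default TLD residual threshold
N = 6            # number of previous frames used for the adaptive threshold

def box_center(box):
    x, y, w, h = box
    return x + w / 2.0, y + h / 2.0

def box_distance(box_a, box_b):
    (xa, ya), (xb, yb) = box_center(box_a), box_center(box_b)
    return math.hypot(xa - xb, ya - yb)

def failure_threshold(history, alpha):
    """history: tracking boxes of the previous frames; alpha in {1, 1.2}."""
    i = len(history) + 1                     # index of the current frame
    if i <= 2:
        theta = 10.0 * THETA_C               # Eq. (6): no failure assumed at the start
    elif i <= N:
        theta = alpha * THETA_C              # simplified stand-in for Eq. (5)
    else:
        dists = [box_distance(history[k - 1], history[k])
                 for k in range(len(history) - N + 1, len(history))]
        theta = alpha * sum(dists) / len(dists)   # Eq. (3): mean motion over the previous N frames
    return max(theta, THETA_C)               # Eq. (9): never fall below the default threshold

def detect_failure(prev_box, curr_box, history, alpha):
    """Declare a tracking failure when the center displacement exceeds the adaptive threshold."""
    return box_distance(prev_box, curr_box) > failure_threshold(history, alpha)
```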

With the improvement of Eq. (9) added, the BlurOwl, BlurBody, Deer, Jumping, and BlurCar2 datasets, which contain fast motion (FastMotion), were selected for testing; the test results are shown in Table 1.

As can be seen from Table 1, the method proposed by the present invention significantly improves the tracking stability of the TLD algorithm.

Table 1. Statistical comparison results on the experimental datasets

Parts of the present invention that are not described in detail belong to techniques well known to those skilled in the art.

Those of ordinary skill in the art should recognize that the above embodiments are only intended to illustrate the present invention and not to limit it; any changes or modifications to the above embodiments that remain within the spirit of the present invention fall within the scope of the claims of the present invention.

Claims (5)

1. A stable target tracking method based on the TLD algorithm, characterized in that: in the first frame of the video to be tracked, the user specifies a tracking window, from which positive and negative samples are formed to initialize and train the detection module; during tracking, the detection module and the tracking module work independently: the detection module scans the current frame to obtain image patches, passes them in turn through a variance classifier, an ensemble classifier and a nearest-neighbor classifier, and clusters the patches that pass these three classifiers; the tracking module uses the median optical flow method to predict the target position in the current frame from the previous frame's tracking result, and computes the Euclidean distance D between the center points of the current frame's target position and the previous frame's target position; if D is greater than an adaptive threshold, the current frame is judged to be a tracking failure and the tracking module outputs no result; the integration module combines the detection module and the tracking module to output the tracking result, and new positive and negative samples are generated from the updated target position to update the detection module.
3. The stable target tracking method based on the TLD algorithm according to claim 1, characterized in that: during tracking, after the detection module grid-scans the current frame to obtain image patches, the variance of each patch is computed first, and patches whose variance is less than a certain threshold are accepted and passed to the ensemble classifier; the average posterior probability obtained from the pixel comparisons performed by several different base classifiers is then computed, and patches whose average posterior is greater than a certain threshold are accepted and passed to the nearest-neighbor classifier; patches entering the nearest-neighbor classifier are zero-mean normalized in gray level and a normalized cross-correlation similarity with the patches in the target model is computed; if the similarity is greater than a certain threshold, the current patch is judged to be the target region, otherwise it is judged to be background.
CN201810506760.8A | 2018-05-24 (priority) | 2018-05-24 (filed) | Target stable tracking method based on TLD algorithm | Active | CN108765455B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810506760.8A (CN108765455B) | 2018-05-24 | 2018-05-24 | Target stable tracking method based on TLD algorithm

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810506760.8A (CN108765455B) | 2018-05-24 | 2018-05-24 | Target stable tracking method based on TLD algorithm

Publications (2)

Publication Number | Publication Date
CN108765455A (en) | 2018-11-06
CN108765455B (en) | 2021-09-21

Family

ID=64005313

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810506760.8A (Active, CN108765455B) | Target stable tracking method based on TLD algorithm | 2018-05-24 | 2018-05-24

Country Status (1)

Country | Link
CN (1) | CN108765455B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109635657A (en)* | 2018-11-12 | 2019-04-16 | 平安科技(深圳)有限公司 | Method for tracking target, device, equipment and storage medium
CN109858526A (en)* | 2019-01-08 | 2019-06-07 | 沈阳理工大学 | Sensor-based multi-target track fusion method in a kind of target following
CN109917818A (en)* | 2019-01-31 | 2019-06-21 | 天津大学 | Collaborative search and containment method based on ground robot
CN110472562A (en)* | 2019-08-13 | 2019-11-19 | 新华智云科技有限公司 | Position ball video clip detection method, device, system and storage medium
CN111627046A (en)* | 2020-05-15 | 2020-09-04 | 北京百度网讯科技有限公司 | Target part tracking method and device, electronic equipment and readable storage medium
CN113243026A (en)* | 2019-10-04 | 2021-08-10 | Sk电信有限公司 | Apparatus and method for high resolution object detection
CN113284167A (en)* | 2021-05-28 | 2021-08-20 | 深圳数联天下智能科技有限公司 | Face tracking detection method, device, equipment and medium
CN114782496A (en)* | 2022-06-20 | 2022-07-22 | 杭州闪马智擎科技有限公司 | Object tracking method and device, storage medium and electronic device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103871081A (en)* | 2014-03-29 | 2014-06-18 | 湘潭大学 | Method for tracking self-adaptive robust on-line target
CN106651909A (en)* | 2016-10-20 | 2017-05-10 | 北京信息科技大学 | Background weighting-based scale and orientation adaptive mean shift method
CN107679455A (en)* | 2017-08-29 | 2018-02-09 | 平安科技(深圳)有限公司 | Target tracker, method and computer-readable recording medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZDENEK KALAL et al.: "Tracking-Learning-Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109635657A (en)* | 2018-11-12 | 2019-04-16 | 平安科技(深圳)有限公司 | Method for tracking target, device, equipment and storage medium
CN109635657B (en)* | 2018-11-12 | 2023-01-06 | 平安科技(深圳)有限公司 | Target tracking method, device, equipment and storage medium
CN109858526A (en)* | 2019-01-08 | 2019-06-07 | 沈阳理工大学 | Sensor-based multi-target track fusion method in a kind of target following
CN109858526B (en)* | 2019-01-08 | 2023-08-18 | 沈阳理工大学 | A Sensor-Based Multi-Target Trajectory Fusion Method in Target Tracking
CN109917818B (en)* | 2019-01-31 | 2021-08-13 | 天津大学 | Collaborative search and containment method based on ground robot
CN109917818A (en)* | 2019-01-31 | 2019-06-21 | 天津大学 | Collaborative search and containment method based on ground robot
CN110472562A (en)* | 2019-08-13 | 2019-11-19 | 新华智云科技有限公司 | Position ball video clip detection method, device, system and storage medium
US20210286997A1 (en)* | 2019-10-04 | 2021-09-16 | Sk Telecom Co., Ltd. | Method and apparatus for detecting objects from high resolution image
CN113243026A (en)* | 2019-10-04 | 2021-08-10 | Sk电信有限公司 | Apparatus and method for high resolution object detection
CN111627046A (en)* | 2020-05-15 | 2020-09-04 | 北京百度网讯科技有限公司 | Target part tracking method and device, electronic equipment and readable storage medium
CN113284167A (en)* | 2021-05-28 | 2021-08-20 | 深圳数联天下智能科技有限公司 | Face tracking detection method, device, equipment and medium
CN113284167B (en)* | 2021-05-28 | 2023-03-07 | 深圳数联天下智能科技有限公司 | Face tracking detection method, device, equipment and medium
CN114782496A (en)* | 2022-06-20 | 2022-07-22 | 杭州闪马智擎科技有限公司 | Object tracking method and device, storage medium and electronic device

Also Published As

Publication number | Publication date
CN108765455B (en) | 2021-09-21

Similar Documents

Publication | Publication Date | Title
CN108765455A (en)Target stable tracking method based on TLD algorithm
CN112836640B (en)Single-camera multi-target pedestrian tracking method
CN116152292B (en) A multi-category multi-target tracking method based on cubic matching
US20200074646A1 (en)Method for obtaining image tracking points and device and storage medium thereof
CN116645402A (en)Online pedestrian tracking method based on improved target detection network
JP5166102B2 (en) Image processing apparatus and method
US8243993B2 (en)Method for moving object detection and hand gesture control method based on the method for moving object detection
CN112184759A (en)Moving target detection and tracking method and system based on video
WO2015163830A1 (en)Target localization and size estimation via multiple model learning in visual tracking
Zulkifley: Two streams multiple-model object tracker for thermal infrared video
CN107392210A (en)Target detection tracking method based on TLD algorithm
JP6750385B2 (en) Image processing program, image processing method, and image processing apparatus
CN104077596A (en)Landmark-free tracking registering method
JP7446060B2 (en) Information processing device, program and information processing method
CN110555377A (en)pedestrian detection and tracking method based on fisheye camera overlook shooting
CN108846850B (en)Target tracking method based on TLD algorithm
CN110084830A (en)A kind of detection of video frequency motion target and tracking
CN110569785A (en)Face recognition method based on fusion tracking technology
CN111192294A (en) A target tracking method and system based on target detection
CN116645396A (en)Track determination method, track determination device, computer-readable storage medium and electronic device
KR102434397B1 (en)Real time multi-object tracking device and method by using global motion
KR101542206B1 (en)Method and system for tracking with extraction object using coarse to fine techniques
CN113781523B (en) A football detection and tracking method and device, electronic equipment, and storage medium
JP2019021297A (en)Image processing device and method, and electronic apparatus
Li et al.: An improved mean shift algorithm for moving object tracking

Legal Events

Date | Code | Title | Description
PB01 | Publication | Publication
SE01 | Entry into force of request for substantive examination | Entry into force of request for substantive examination
GR01 | Patent grant | Patent grant
