CN114663475A - Target tracking method, device, medium and equipment - Google Patents

Target tracking method, device, medium and equipment
Download PDF

Info

Publication number
CN114663475A
CN114663475A (Application CN202210301163.8A)
Authority
CN
China
Prior art keywords
tracking
objects
target
camera
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210301163.8A
Other languages
Chinese (zh)
Inventor
曹睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Zhongke Yuncong Technology Co ltd
Original Assignee
Chongqing Zhongke Yuncong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Zhongke Yuncong Technology Co ltd
Priority to CN202210301163.8A
Publication of CN114663475A
Status: Pending


Abstract

The invention discloses a target tracking method, comprising: acquiring a first target image, captured by a snapshot camera, that contains one or more first tracking objects; determining, from the one or more first tracking objects, an associated camera that has an association relationship with the snapshot camera, where the associated camera and the snapshot camera have overlapping fields of view; acquiring a second target image, captured by the associated camera, that contains one or more second tracking objects, where the first target image and the second target image are captured at the same moment; matching the one or more first tracking objects against the one or more second tracking objects to determine which first and second tracking objects belong to the same object, these being recorded as target objects; and determining the three-dimensional spatial position of each target object from the position of the snapshot camera and the position of the associated camera, thereby tracking the target object. By deploying multiple cameras with overlapping fields of view in the target scene, the invention applies the principle of triangulation to locate pedestrians in the overlapping regions with high positioning accuracy.

Description

Translated from Chinese
A target tracking method, device, medium and equipment

Technical Field

The present invention relates to the technical field of image processing, and in particular to a target tracking method, device, medium and equipment.

Background Art

With the iterative upgrade of informatization and intelligent systems, more and more application scenarios require analysis of pedestrians' movement routes. This requires locating pedestrians and, based on the features of the pedestrians' various modalities, tracking the positions belonging to the same person and concatenating them into a trajectory. Many solutions exist. One class is intrusive: pedestrians carry positioning hardware (such as a Bluetooth positioning module or an RTK module), and the target is then located and tracked from the bearings of external base stations and the signature of the positioning chip. However, this class requires pedestrians to carry specific hardware, which severely limits deployment. The other class is sensorless: it places no requirements on pedestrians, so deployment is very flexible; but without cooperating terminal hardware, both positioning and tracking are more difficult.

Summary of the Invention

In view of the above shortcomings of the prior art, the purpose of the present invention is to provide a target tracking method, apparatus, medium and device that remedy at least one defect of the prior art.

To achieve the above and other related purposes, the present invention provides a target tracking method, comprising:

acquiring a first target image, captured by a snapshot camera, that contains one or more first tracking objects;

determining, from the positions and/or visual features of the one or more first tracking objects in the first target image, an associated camera that has an association relationship with the snapshot camera, where the associated camera and the snapshot camera have overlapping fields of view;

acquiring a second target image, captured by the associated camera, that contains one or more second tracking objects, where the first target image and the second target image are captured at the same moment;

matching the one or more first tracking objects against the one or more second tracking objects to determine which first and second tracking objects belong to the same object, these being recorded as target objects;

determining the three-dimensional spatial position of the target object based on the position of the snapshot camera and the position of the associated camera, so as to track the target object.

Optionally, determining the three-dimensional spatial position of the target object based on the position of the snapshot camera and the position of the associated camera comprises:

computing the three-dimensional spatial position of the target object with a triangulation ranging algorithm, based on the position of the snapshot camera and the position of the associated camera.
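The triangulation step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes each camera yields a viewing ray (camera center plus a back-projected direction toward the matched keypoint) and recovers the 3-D point as the midpoint of the shortest segment between the two rays. The camera positions and target point are invented for the example.

```python
import math

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two viewing rays.

    c1, c2: camera centers; d1, d2: unit directions toward the target,
    e.g. back-projected from the matched human-body keypoints.
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def mul(a, k): return tuple(x * k for x in a)

    # Minimize |(c1 + s*d1) - (c2 + t*d2)|^2 over s, t.
    r = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b              # ~0 when the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = add(c1, mul(d1, s))           # closest point on ray 1
    p2 = add(c2, mul(d2, t))           # closest point on ray 2
    return mul(add(p1, p2), 0.5)

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

# Two cameras 4 m apart, both seeing a pedestrian keypoint at (2, 0, 5):
target = (2.0, 0.0, 5.0)
c1, c2 = (0.0, 0.0, 0.0), (4.0, 0.0, 0.0)
d1 = unit(tuple(t - c for t, c in zip(target, c1)))
d2 = unit(tuple(t - c for t, c in zip(target, c2)))
print(triangulate_midpoint(c1, d1, c2, d2))  # → approximately (2.0, 0.0, 5.0)
```

With real cameras the directions d1, d2 would come from the calibrated intrinsics and extrinsics rather than from the known target.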

Optionally, the step of determining the positions of the one or more first tracking objects in the first target image comprises:

performing human-body keypoint detection on the one or more first tracking objects to obtain the human-body keypoints of the one or more first tracking objects;

determining the pixel positions of the human-body keypoints in the first target image, which give the positions of the one or more first tracking objects in the first target image.

Optionally, the human-body keypoints include at least one of: head, hands, hips, knees and ankles.

Optionally, matching the one or more first tracking objects with the one or more second tracking objects comprises:

extracting visual features from the first tracking objects and the second tracking objects to obtain first visual features of the first tracking objects and second visual features of the second tracking objects;

computing the feature similarity between the first visual features and the second visual features;

taking as candidate objects those first and second tracking objects whose feature similarity exceeds a preset similarity threshold.
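A minimal sketch of this candidate-selection step follows. The text does not specify the similarity measure, so cosine similarity is assumed here; the 0.8 threshold and the toy appearance features are likewise invented for illustration.

```python
import math

def cosine_similarity(f1, f2):
    """Cosine similarity between two appearance feature vectors."""
    num = sum(a * b for a, b in zip(f1, f2))
    den = math.sqrt(sum(a * a for a in f1)) * math.sqrt(sum(b * b for b in f2))
    return num / den

def candidate_pairs(first_feats, second_feats, threshold=0.8):
    """Pairs (i, j) whose visual-feature similarity exceeds the threshold."""
    return [(i, j)
            for i, f1 in enumerate(first_feats)
            for j, f2 in enumerate(second_feats)
            if cosine_similarity(f1, f2) > threshold]

# Toy 3-D "appearance" features for two first and two second tracking objects:
first = [[1.0, 0.0, 0.2], [0.0, 1.0, 0.1]]
second = [[0.9, 0.1, 0.2], [0.1, 0.9, 0.0]]
print(candidate_pairs(first, second))  # → [(0, 0), (1, 1)]
```

In practice the features would be embeddings from a re-identification network, and the threshold would be tuned on validation data.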

Optionally, matching the one or more first tracking objects with the one or more second tracking objects further comprises:

matching the candidate objects through the epipolar constraint between the snapshot camera and the associated camera;

if a candidate object satisfies the epipolar constraint, the corresponding first tracking object and second tracking object are the same target object.
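The epipolar test can be sketched as below, under assumed geometry: two calibrated cameras with identity intrinsics and a pure sideways translation t = (1, 0, 0), for which the essential matrix E is the cross-product matrix of t. A candidate pair is accepted when the residual |x2ᵀ E x1| is near zero; all points are invented.

```python
def epipolar_residual(E, x1, x2):
    """|x2^T E x1| for homogeneous normalized image points x1, x2."""
    Ex1 = [sum(E[i][k] * x1[k] for k in range(3)) for i in range(3)]
    return abs(sum(x2[i] * Ex1[i] for i in range(3)))

def satisfies_epipolar(E, x1, x2, tol=1e-6):
    return epipolar_residual(E, x1, x2) < tol

# E = [t]_x for translation t = (1, 0, 0) and no rotation:
E = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]

# A 3-D point (2, 1, 5) projected into both cameras:
x1 = [2 / 5, 1 / 5, 1.0]          # normalized projection in camera 1
x2 = [(2 - 1) / 5, 1 / 5, 1.0]    # normalized projection in camera 2
print(satisfies_epipolar(E, x1, x2))      # → True

# A mismatched detection fails the constraint:
x2_bad = [0.3, 0.9, 1.0]
print(satisfies_epipolar(E, x1, x2_bad))  # → False
```

With real cameras, E (or the fundamental matrix) would be derived from the calibrated extrinsics, and `tol` would account for keypoint noise.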

Optionally, the method further comprises:

predicting the positions of the one or more first tracking objects in the first target image at frame t+1 from their positions in the first target image at frame t;

predicting the positions of the one or more second tracking objects in the second target image at frame t+1 from their positions in the second target image at frame t;

matching the one or more first tracking objects with the one or more second tracking objects at frame t+1.

Optionally, the positions of the one or more first tracking objects in the first target image at frame t+1 are predicted from their positions at frame t by combining a Kalman filter algorithm with the Kuhn-Munkres algorithm;

the positions of the one or more second tracking objects in the second target image at frame t+1 are predicted from their positions at frame t by combining a Kalman filter algorithm with the Kuhn-Munkres algorithm.

Optionally, based on the positions of the one or more first tracking objects in the first target image at frame t, the Kalman filter algorithm predicts their positions at frame t+1; the Kuhn-Munkres algorithm then associates the positions at frame t in the first target image with the predicted positions at frame t+1;

based on the positions of the one or more second tracking objects in the second target image at frame t, the Kalman filter algorithm predicts their positions at frame t+1; the Kuhn-Munkres algorithm then associates the positions at frame t in the second target image with the predicted positions at frame t+1.
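The predict-then-associate loop described above can be illustrated as follows. This is a simplified sketch, not the patent's implementation: the prediction is a bare constant-velocity step rather than a full Kalman filter, and the optimal assignment is found by brute force over permutations instead of the Kuhn-Munkres algorithm. All positions and velocities are invented.

```python
from itertools import permutations

def predict(positions, velocities):
    """Predicted pixel positions at frame t+1 from state at frame t."""
    return [(x + vx, y + vy) for (x, y), (vx, vy) in zip(positions, velocities)]

def associate(predicted, detected):
    """Match predictions to detections minimizing total squared distance."""
    def cost(perm):
        return sum((px - detected[j][0]) ** 2 + (py - detected[j][1]) ** 2
                   for (px, py), j in zip(predicted, perm))
    best = min(permutations(range(len(detected))), key=cost)
    return list(enumerate(best))   # pairs (prediction index, detection index)

# Two tracked objects at frame t with per-frame pixel velocities:
pos_t = [(100.0, 200.0), (300.0, 220.0)]
vel = [(5.0, 0.0), (-4.0, 2.0)]
pred = predict(pos_t, vel)         # [(105.0, 200.0), (296.0, 222.0)]

# Detections at frame t+1 arrive in a different order:
det_t1 = [(295.0, 223.0), (106.0, 199.0)]
print(associate(pred, det_t1))     # → [(0, 1), (1, 0)]
```

Brute force is exponential in the number of objects; a real tracker would use the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) for the same optimal matching in polynomial time.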

Optionally, if the snapshot camera has multiple associated cameras, the method further comprises:

obtaining multiple three-dimensional spatial positions of the target object from the snapshot camera and the multiple associated cameras;

fusing the multiple three-dimensional spatial positions of the target object to obtain a fused position;

tracking the target object based on the fused position.

Optionally, fusing the multiple three-dimensional spatial positions of the target object to obtain the fused position comprises:

fusing the multiple three-dimensional spatial positions of the target object with a Kalman filter algorithm to obtain the fused position.
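As a hedged illustration of this fusion step: for independent measurements of a single static position, the Kalman update reduces to inverse-variance weighting, sketched per axis below. The per-pair estimates and variances are invented; the text only states that a Kalman filter is used.

```python
def fuse(estimates):
    """Fuse (position, variance) pairs per axis by inverse-variance weighting.

    Each entry is ((x, y, z), variance): one camera pair's 3-D estimate of
    the same target, with an assumed isotropic measurement variance.
    """
    fused = []
    for axis in range(3):
        w = [1.0 / var for _, var in estimates]          # confidence weights
        num = sum(wi * pos[axis] for wi, (pos, _) in zip(w, estimates))
        fused.append(num / sum(w))
    return tuple(fused)

# Three camera-pair estimates of the same pedestrian:
estimates = [((2.0, 0.1, 5.0), 0.04),   # camera pair 1-2, most confident
             ((2.2, 0.0, 5.1), 0.16),   # camera pair 1-n
             ((1.8, 0.2, 4.8), 0.16)]   # camera pair 2-n
print(fuse(estimates))
```

The most confident estimate dominates: the fused x and y land on the low-variance pair's values, while z is pulled slightly toward the others.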

Optionally, the method further comprises:

obtaining the three-dimensional spatial positions of different target objects and the visual-feature similarities between them;

normalizing the three-dimensional spatial position and the visual-feature similarity of each target object to obtain a Mahalanobis distance or a chi-square statistic;

judging, from the Mahalanobis distances or chi-square statistics of the different target objects, whether any of them are in fact the same target object;

when tracking the target objects, tracking only one of the target objects found to be the same.
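The duplicate check described above can be sketched as follows, using only the 3-D position term: a squared Mahalanobis distance with an assumed diagonal covariance is compared against a chi-square threshold (7.815 for 3 degrees of freedom at the 95% level). The positions and variances are invented, and a full implementation would also fold in the visual-feature similarity.

```python
def mahalanobis_sq(p1, p2, variances):
    """Squared Mahalanobis distance with a diagonal covariance."""
    return sum((a - b) ** 2 / v for a, b, v in zip(p1, p2, variances))

def same_object(p1, p2, variances, threshold=7.815):
    """True when two 3-D estimates are close enough to be one pedestrian."""
    return mahalanobis_sq(p1, p2, variances) < threshold

var = (0.05, 0.05, 0.05)        # assumed per-axis position variance (m^2)
a = (2.00, 0.10, 5.00)          # target seen by camera pair 1-2
b = (2.05, 0.12, 4.95)          # target seen by camera pair 1-n
c = (6.00, 1.00, 3.00)          # a different pedestrian
print(same_object(a, b, var))   # → True: deduplicate, track only one
print(same_object(a, c, var))   # → False: distinct targets
```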

To achieve the above and other related purposes, the present invention provides a target tracking apparatus, comprising:

a first image acquisition module, configured to acquire a first target image, captured by a snapshot camera, that contains one or more first tracking objects;

an associated camera determination module, configured to determine, from the positions and/or visual features of the one or more first tracking objects in the first target image, an associated camera that has an association relationship with the snapshot camera, where the associated camera and the snapshot camera have overlapping fields of view;

a second image acquisition module, configured to acquire a second target image, captured by the associated camera, that contains one or more second tracking objects, where the first target image and the second target image are captured at the same moment;

a matching module, configured to match the one or more first tracking objects against the one or more second tracking objects to determine which first and second tracking objects belong to the same object, these being recorded as target objects;

a tracking module, configured to determine the three-dimensional spatial position of the target object based on the position of the snapshot camera and the position of the associated camera, so as to track the target object.

To achieve the above and other related purposes, the present invention provides a target tracking device, comprising:

one or more processors; and

one or more machine-readable media storing instructions that, when executed by the one or more processors, cause the device to perform one or more of the described methods.

To achieve the above and other related purposes, the present invention provides one or more machine-readable media storing instructions that, when executed by one or more processors, cause a device to perform one or more of the described methods.

As described above, the target tracking method, apparatus, medium and device of the present invention have the following beneficial effects:

The target tracking method of the present invention comprises: acquiring a first target image, captured by a snapshot camera, that contains one or more first tracking objects; determining, from the positions and/or visual features of the one or more first tracking objects in the first target image, an associated camera that has an association relationship with the snapshot camera, where the associated camera and the snapshot camera have overlapping fields of view; acquiring a second target image, captured by the associated camera, that contains one or more second tracking objects, where the first and second target images are captured at the same moment; matching the one or more first tracking objects against the one or more second tracking objects to determine which first and second tracking objects belong to the same object, these being recorded as target objects; and determining the three-dimensional spatial position of the target object based on the position of the snapshot camera and the position of the associated camera, so as to track the target object. The invention deploys multiple cameras with overlapping fields of view in the target scene; for pedestrians inside the overlapping regions, the principle of triangulation yields high positioning accuracy. Further, position estimation is used to project the pedestrians' visual features into a unified three-dimensional space for high-dimensional, multi-modal cross-camera association and tracking, achieving real-time tracking of pedestrian targets without blind spots. This resolves the occlusion problem of single-camera target tracking and greatly improves tracking accuracy and robustness.

Brief Description of the Drawings

FIG. 1 is a flowchart of a target tracking method according to an embodiment of the present invention;

FIG. 2 is a diagram of the relationship between multiple snapshot cameras and associated cameras according to an embodiment of the present invention;

FIG. 3 is a flowchart of a method for determining the position of a first tracking object according to an embodiment of the present invention;

FIG. 4 is a flowchart of a method for matching first tracking objects with second tracking objects according to an embodiment of the present invention;

FIG. 5 is a flowchart of a method for matching first tracking objects with second tracking objects according to another embodiment of the present invention;

FIG. 6 is a schematic diagram of the hardware structure of a target tracking apparatus according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of the hardware structure of a terminal device in an embodiment of the present invention;

FIG. 8 is a schematic diagram of the hardware structure of a terminal device in an embodiment of the present invention.

Detailed Description

The following describes embodiments of the present invention through specific examples; those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention can also be implemented or applied through other specific embodiments, and the details in this specification can be modified or changed from different viewpoints and applications without departing from the spirit of the present invention. It should be noted that, where no conflict arises, the following embodiments and their features may be combined with each other.

It should be noted that the drawings provided with the following embodiments illustrate the basic concept of the present invention only schematically: they show only the components related to the present invention rather than the actual number, shape and size of components in an implementation. In practice, the form, quantity and proportion of each component may vary freely, and the component layout may be more complicated.

Two kinds of solutions exist for tracking people: intrusive and sensorless. A sensorless tracking solution places no requirements on pedestrians, so deployment is very flexible; however, without cooperating terminal hardware, both positioning and tracking are more difficult.

In terms of target positioning, a conventional camera perceives only a two-dimensional image and cannot accurately determine the distance to a target. To solve this, three-dimensional ranging hardware such as structured-light cameras, TOF cameras or radar can be introduced for auxiliary ranging and positioning, but this multiplies the hardware cost of the whole solution. In terms of target tracking, the same person must be associated across different moments and different cameras according to various pedestrian characteristics; in a sensorless solution, association and tracking rely mainly on the pedestrians' visual and position features. Visual target tracking technology mainly studies tracking within a single camera. In complex scenes with many pedestrians, a person is easily occluded by other pedestrians or buildings for long periods, causing broken or confused trajectories. In view of these defects, the embodiments of the present application provide a purely visual, sensorless, cross-camera target positioning and tracking system with better positioning accuracy and better tracking performance. As shown in FIG. 1, a target tracking method includes the following steps:

S100: acquire a first target image, captured by a snapshot camera, that contains one or more first tracking objects;

S200: determine, from the positions and/or visual features of the one or more first tracking objects in the first target image, an associated camera that has an association relationship with the snapshot camera, where the associated camera and the snapshot camera have overlapping fields of view;

S300: acquire a second target image, captured by the associated camera, that contains one or more second tracking objects, where the first target image and the second target image are captured at the same moment;

S400: match the one or more first tracking objects against the one or more second tracking objects to determine which first and second tracking objects belong to the same object, recording these as target objects;

S500: determine the three-dimensional spatial position of the target object based on the position of the snapshot camera and the position of the associated camera, so as to track the target object.

In the present invention, multiple cameras are deployed in the target scene with overlapping fields of view. For pedestrians inside the overlapping regions, the principle of triangulation yields high positioning accuracy. Further, position estimation is used to project the pedestrians' visual features into a unified three-dimensional space for high-dimensional, multi-modal cross-camera association and tracking, achieving real-time tracking of pedestrian targets without blind spots; this resolves the occlusion problem of single-camera tracking and greatly improves accuracy and robustness. Because the method exploits the observations of multiple cameras simultaneously, even if a pedestrian is occluded by other pedestrians or buildings in one camera, the pedestrian can still be observed continuously in other cameras, greatly reducing the likelihood of broken or confused trajectories.

Each step of the present invention is described in detail below.

Before using the snapshot cameras of this method to capture target objects, the cameras must be deployed. Specifically, snapshot cameras are deployed in the scene so that all regions of interest are covered by the cameras' fields of view, and, as far as possible, every region of interest is covered by at least two cameras from different angles. The present invention does not require strict redundant overlap of fields of view to work, so snapshot-camera placement is relatively unconstrained; in regions with no coverage or no redundant overlap, continuous position estimation and tracking can be performed by means such as Kalman filtering and monocular positioning and tracking. The deployment of the snapshot cameras is shown in FIG. 2.

After deploying the snapshot cameras, camera calibration is used to calibrate each camera's intrinsic and extrinsic parameters. The intrinsic parameters include the distortion coefficients, focal length and principal-point position; the extrinsic parameters include the camera's three-dimensional coordinate position and three-dimensional orientation in the scene space.

Next, every two cameras with overlapping fields of view are grouped into a camera group; for example, the three cameras in FIG. 2 can be divided into three groups: 1-2, 1-n and 2-n. The scene space can then be partitioned into a grid, and a cell-by-cell search algorithm is used to record the visible cameras of each grid cell. It should be noted that a camera group includes a snapshot camera and an associated camera.
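The grouping and grid pre-processing above can be sketched as below, with idealized geometry: each camera's ground coverage is modeled as a disc, camera pairs are formed wherever two discs overlap, and a coarse grid maps each cell to the cameras that can see it. All camera names, positions and radii are invented.

```python
from itertools import combinations
import math

cameras = {               # name -> (x, y, coverage radius); all assumed
    "cam1": (0.0, 0.0, 6.0),
    "cam2": (8.0, 0.0, 6.0),
    "camN": (4.0, 7.0, 6.0),
}

def overlapping_pairs(cams):
    """Camera groups: pairs whose coverage discs intersect."""
    return [(a, b) for a, b in combinations(cams, 2)
            if math.dist(cams[a][:2], cams[b][:2]) < cams[a][2] + cams[b][2]]

def visibility_grid(cams, size=12, step=4):
    """Map each grid cell to the cameras whose coverage contains its center."""
    grid = {}
    for gx in range(0, size, step):
        for gy in range(0, size, step):
            center = (gx + step / 2, gy + step / 2)
            grid[(gx, gy)] = sorted(
                name for name, (x, y, r) in cams.items()
                if math.dist(center, (x, y)) <= r)
    return grid

print(overlapping_pairs(cameras))
grid = visibility_grid(cameras)
print(grid[(0, 0)])    # cameras that can see the cell at the origin
```

A real deployment would intersect actual view frusta rather than discs, but the lookup structure is the same: the grid makes "which cameras can see this position?" a constant-time query, which is what step S200 relies on.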

In step S100, a first target image, captured by a snapshot camera, that contains one or more first tracking objects is acquired.

The snapshot cameras can reuse already-installed surveillance cameras, which keeps hardware deployment costs low.

Frames are captured from all snapshot cameras in the scene synchronously according to a unified timestamp, and every frame is timestamped against a unified clock, so that subsequent algorithms can process the footage of all cameras by the same timestamp. The frames of the snapshot cameras and associated cameras generally contain one or more tracking objects.

After the frames captured by the snapshot cameras are acquired, human-body detection is performed on each snapshot camera's frame to obtain the human-body bounding box of every tracking object in every snapshot camera. Specifically, for each snapshot camera's frame, a deep-neural-network human-body detection model performs human-body detection, yielding the human-body boxes in the frame; each box represents one tracking object.

In step S200, an associated camera that has an association relationship with the snapshot camera is determined from the positions and/or visual features of the one or more first tracking objects in the first target image, where the associated camera and the snapshot camera have overlapping fields of view.

The visual features may include, for example, the style and color of clothing.

In the preceding steps, after the position of a first tracking object in the first target image is obtained, that position can be used to look up the other cameras able to capture the first tracking object at that position, i.e., the associated cameras. Because the scene space was partitioned into a grid at deployment time and the visible cameras of each cell were recorded with the cell-by-cell search algorithm, associated cameras whose fields of view overlap the snapshot camera's can be looked up quickly from the first tracking object's position.

Alternatively, after the visual features of a first tracking object in the first target image are obtained, those features can be used to find the other cameras able to capture a tracking object with those features, i.e., the associated cameras. The associated cameras can likewise reuse already-installed surveillance cameras, keeping hardware deployment costs low.

In one embodiment, as shown in FIG. 3, the step of determining the positions of the one or more first tracking objects in the first target image includes:

S301对所述一个或多个第一跟踪对象进行人体关键点检测,得到所述一个或多个第一跟踪对象的人体关键点;其中,所述人体关键点包括以下至少之一:头、手、臀、膝、脚踝。S301 Detects human body key points on the one or more first tracking objects, and obtains human body key points of the one or more first tracking objects; wherein, the human body key points include at least one of the following: a head, a hand , hip, knee, ankle.

具体地,可以利用深度神经网络人体关键点检测模型对每一个人体框进行检测,得到每一个第一跟踪对象的关键点。在第一目标图像中,第一跟踪对象在所述第一目标图像中的位置可以通过第一跟踪对象的人体关键点的像素位置来确定。Specifically, a deep neural network human body key point detection model can be used to detect each human body frame to obtain a key point of each first tracked object. In the first target image, the position of the first tracking object in the first target image may be determined by the pixel positions of the human body key points of the first tracking object.

当然,在确定人体关键点的像素位置前,可以利用相机的畸变系数对相机画面上的人体关键点进行畸变矫正,使之符合小孔成像光路原理。Of course, before determining the pixel position of the key points of the human body, the distortion coefficient of the camera can be used to correct the distortion of the key points of the human body on the camera screen to make it conform to the principle of the pinhole imaging optical path.

S302确定所述人体关键点的像素位置在所述第一目标图像中的位置,即所述一个或多个第一跟踪对象在所述第一目标图像中的位置。S302 determines the positions of the pixel positions of the human body key points in the first target image, that is, the positions of the one or more first tracking objects in the first target image.

In step S300, a second target image containing one or more second tracking objects, captured by the associated camera at the same moment, is acquired.

Frames are grabbed synchronously from all associated cameras in the scene against a unified timestamp, and each frame is stamped according to a unified clock, so that subsequent algorithms can process the footage of all cameras under the same timestamp. A snapshot from an associated camera generally contains one or more tracking objects.

After the images captured by the associated cameras have been acquired, human body detection is performed on each associated camera's snapshot to obtain the body bounding box of every tracking object seen by each camera. Specifically, for each associated camera's snapshot, a deep-neural-network human body detection model is used to detect the body boxes it contains; each body box represents one tracking object.

In S400, the one or more first tracking objects are matched with the one or more second tracking objects to determine which of the first and second tracking objects belong to the same object; these are recorded as target objects.

In one embodiment, as shown in FIG. 4, matching the one or more first tracking objects with the one or more second tracking objects includes:

S401: extracting visual features from the first tracking objects and the second tracking objects to obtain first visual features of the first tracking objects and second visual features of the second tracking objects;

S402: computing the feature similarity between the first visual features and the second visual features;

S403: taking the first and second tracking objects whose feature similarity exceeds a preset similarity threshold as candidate objects.
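Steps S401 to S403 can be sketched with cosine similarity over appearance feature vectors. The toy feature vectors and the threshold value below are illustrative assumptions, since the embodiment does not fix a particular feature extractor:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def candidate_pairs(first_feats, second_feats, threshold=0.9):
    """Index pairs (i, j) of first/second tracking objects whose feature
    similarity exceeds the preset threshold (step S403)."""
    return [(i, j)
            for i, f1 in enumerate(first_feats)
            for j, f2 in enumerate(second_feats)
            if cosine_similarity(f1, f2) > threshold]
```

In practice the feature vectors would come from a person re-identification network rather than being set by hand, and several pairs can survive this stage, which is why the next stage applies the epipolar constraint.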

The visual features may include the style of clothing and the color of clothing.

For example, suppose the first tracking object wears a red down jacket and the second tracking object also wears a red down jacket of the same style. The similarity between the first and second visual features then exceeds the preset similarity threshold, and the first and second tracking objects can be treated as the same tracking object. However, because another person wearing a red down jacket may also appear in the first and second target images, two targets may end up matched to the same tracking object. The matched targets that appear to belong to the same object are therefore kept only as candidate objects.

Specifically, suppose tracking objects A and B exist in the first target image, tracking objects C and D exist in the second target image, and C and D have identical visual features. If the similarity between the visual features of A and C exceeds the preset similarity threshold, A and C are judged to be the same target. But since the visual features of C and D are identical, A and D would likewise be judged to be the same target; that is, two candidate targets appear.

In one embodiment, as shown in FIG. 5, matching the one or more first tracking objects with the one or more second tracking objects further includes:

S501: matching the candidate objects through the epipolar constraint between the snapshot camera and the associated camera;

S502: if a candidate object satisfies the epipolar constraint, taking the first tracking object and the second tracking object as a target object.

The epipolar constraint is as follows:

A point p in three-dimensional space is projected onto two different image planes I1 and I2, with projection points p1 and p2 respectively; p, p1, and p2 define a plane S in three-dimensional space. The intersection line L1 of S with I1 passes through p1 and is called the epipolar line corresponding to p2; likewise, the intersection line of S with I2 is called the epipolar line corresponding to p1. The epipolar constraint describes the mapping of the same point onto two images: given the mapped point p1 on plane I1, the mapped point p2 on plane I2 must lie on the epipolar line corresponding to p1.
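With the cameras' fundamental matrix F, the epipolar check of S501 and S502 is the algebraic test x2ᵀ F x1 = 0. The rectified-stereo F used below is an illustrative special case (pure horizontal baseline), for which the constraint reduces to matching points lying on the same image row; the tolerance value is likewise an assumption:

```python
def epipolar_residual(F, p1, p2):
    """|x2^T F x1| for pixel points p1=(u1, v1), p2=(u2, v2) in
    homogeneous coordinates."""
    x1 = (p1[0], p1[1], 1.0)
    x2 = (p2[0], p2[1], 1.0)
    Fx1 = [sum(F[r][c] * x1[c] for c in range(3)) for r in range(3)]
    return abs(sum(x2[r] * Fx1[r] for r in range(3)))

def satisfies_epipolar(F, p1, p2, tol=1e-6):
    """A candidate pair passes S502 when the residual is within tolerance."""
    return epipolar_residual(F, p1, p2) < tol

# Fundamental matrix of a rectified stereo pair (horizontal baseline):
# the constraint degenerates to "matching points share the same image row".
F_rect = [[0.0, 0.0, 0.0],
          [0.0, 0.0, -1.0],
          [0.0, 1.0, 0.0]]
```

For real detections the tolerance would be set much looser (a few pixels' worth of residual) to absorb keypoint-detection noise.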

In one embodiment, the method further includes:

predicting the positions of the one or more first tracking objects in the first target image at frame t+1 from their positions in the first target image at frame t;

predicting the positions of the one or more second tracking objects in the second target image at frame t+1 from their positions in the second target image at frame t;

matching the one or more first tracking objects at frame t+1 with the one or more second tracking objects.

Specifically, based on the positions of the one or more first tracking objects in the first target image at frame t, their positions in the first target image at frame t+1 are predicted by combining the Kalman filter algorithm with the Kuhn-Munkres algorithm;

based on the positions of the one or more second tracking objects in the second target image at frame t, their positions in the second target image at frame t+1 are predicted by combining the Kalman filter algorithm with the Kuhn-Munkres algorithm.

The positions of the first tracking objects in the first target image and of the second tracking objects in the second target image can be determined by the method shown in FIG. 3, which is not repeated here.

More specifically, based on the positions of the one or more first tracking objects in the first target image at frame t, the Kalman filter algorithm is used to predict those positions, yielding the predicted positions of the one or more first tracking objects at frame t+1. It should be noted that this embodiment tracks multiple targets, so after the Kalman filter has predicted the positions of the multiple targets, several targets coexist in the first target image at frame t+1. The multiple targets in the first target image at frame t must then be matched against those at frame t+1 to determine which first tracking objects belong to the same target; the Kuhn-Munkres algorithm can therefore be used to associate the positions of the one or more first tracking objects in the first target image at frame t with their predicted positions at frame t+1.

Likewise, based on the positions of the one or more second tracking objects in the second target image at frame t, the Kalman filter algorithm is used to predict those positions, yielding the predicted positions of the one or more second tracking objects at frame t+1. Again, because multiple targets are tracked, several targets coexist in the second target image at frame t+1 after prediction, and the multiple targets in the second target image at frame t must be matched against those at frame t+1 to determine which second tracking objects belong to the same target. The Kuhn-Munkres algorithm can therefore be used to associate the positions of the one or more second tracking objects in the second target image at frame t with their predicted positions at frame t+1.
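A minimal sketch of the predict-then-associate step above: a constant-velocity update stands in for the Kalman filter's prediction step (covariance propagation omitted for brevity), and a brute-force search over permutations stands in for the Kuhn-Munkres algorithm, which solves the same one-to-one assignment problem efficiently at scale. Track and detection values are illustrative:

```python
from itertools import permutations

def predict(track):
    """Constant-velocity prediction, the prediction step of a Kalman filter
    with the covariance update omitted. track = ((x, y), (vx, vy))."""
    (x, y), (vx, vy) = track
    return (x + vx, y + vy)

def associate(predicted, detected):
    """Optimal one-to-one assignment minimising total squared distance.
    Brute force over permutations; Kuhn-Munkres computes the same optimum
    in polynomial time for realistic numbers of targets."""
    def cost(perm):
        return sum((px - detected[j][0]) ** 2 + (py - detected[j][1]) ** 2
                   for (px, py), j in zip(predicted, perm))
    return min(permutations(range(len(detected))), key=cost)
```

With tracks predicted to (1, 0) and (10, 1) and detections at (10.2, 0.9) and (1.1, 0.1), the optimal assignment pairs the first track with the second detection and vice versa.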

In the foregoing steps, multiple first tracking objects in the first target image and multiple second tracking objects in the second target image are obtained. A first tracking object and its matching second tracking object should in theory be the same tracking object as it appears in different images. Therefore, when tracking a target, it must be determined whether a first tracking object and a second tracking object are the same object. The multiple first tracking objects are thus matched with the multiple second tracking objects to determine which of them belong to the same object; these are recorded as target objects.

In step S500, the three-dimensional position information of the target object is determined based on the position of the snapshot camera and the position of the associated camera, so as to track the target object.

Specifically, determining the three-dimensional position information of the target object based on the positions of the snapshot camera and the associated camera includes:

computing the three-dimensional position information of the target object with a triangulation ranging algorithm, based on the positions of the snapshot camera and the associated camera.
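As one possible formulation of the triangulation step (the embodiment does not fix a particular variant), the target's 3-D position can be taken as the midpoint of the shortest segment between the two viewing rays, each ray defined by a camera centre and the direction toward the matched keypoint obtained from the calibrated intrinsics and extrinsics:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two viewing rays.
    c1, c2: camera centres; d1, d2: ray directions toward the target.
    One common triangulation formulation, shown here as an illustration."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    # Solve for s, t minimising |(c1 + s*d1) - (c2 + t*d2)|^2.
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only for parallel (degenerate) rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [c1[i] + s * d1[i] for i in range(3)]
    p2 = [c2[i] + t * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2 for i in range(3)]
```

With camera centres at (0, 0, 0) and (2, 0, 0) and rays aimed at a target at (1, 1, 0), the rays intersect exactly and the midpoint recovers the target position.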

Of course, if the snapshot camera and the associated camera are very close together, the PatchMatch algorithm or a deep-neural-network model could instead be used to estimate distance directly at every pixel; however, this is not only computationally expensive but also places high demands on camera deployment, making it difficult to apply at scale.

In the foregoing steps, the three-dimensional position information of the target object is obtained from one snapshot camera and one associated camera. A pedestrian, however, may be observed by more than two cameras simultaneously, forming several camera groups and yielding several three-dimensional position estimates. To predict the position of the target object more precisely, in one embodiment, if the snapshot camera has multiple associated cameras, the method further includes:

obtaining multiple three-dimensional position estimates of the target object based on the snapshot camera and the multiple associated cameras;

fusing the multiple three-dimensional position estimates of the target object to obtain fused position information;

when tracking the target object, tracking it based on the fused position information.

Specifically, fusing the multiple three-dimensional position estimates of the target object to obtain the fused position information includes: fusing the multiple three-dimensional position estimates with a Kalman filter algorithm to obtain the fused position information.

When there are multiple estimates, the multi-sensor fusion algorithm of the Kalman filter is further used to fuse the multiple estimates (three-dimensional positions) and their covariance matrices into a single estimate and covariance matrix of higher precision. The accuracy of this algorithm is limited only by the detection accuracy of the human body keypoints and the camera resolution, with error proportional to the distance from the target to the camera; it achieves good accuracy, reaching centimetre level on average in indoor scenes.
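The fusion step can be illustrated in its scalar form: each camera group contributes an estimate with a variance, and inverse-variance weighting, the measurement-update core of the Kalman multi-sensor fusion mentioned above, combines them into a single lower-variance estimate. Real code would fuse 3-D positions with full covariance matrices; the scalar version below is a deliberate simplification:

```python
def fuse_estimates(estimates):
    """Fuse several (value, variance) estimates by inverse-variance
    weighting; returns (fused_value, fused_variance). Scalar special
    case of Kalman multi-sensor fusion."""
    inv_sum = sum(1.0 / var for _, var in estimates)
    fused = sum(val / var for val, var in estimates) / inv_sum
    return fused, 1.0 / inv_sum
```

Note that the fused variance is always smaller than any input variance, which is why adding camera groups tightens the position estimate.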

After the three-dimensional position information of the target object has been obtained, the observations of all cameras can be projected onto a unified three-dimensional coordinate system, so that pedestrian observations across the whole scene can be analysed and processed uniformly across cameras.

In one embodiment, the method further includes:

obtaining the three-dimensional position information and the visual feature similarity between different target objects;

normalising the three-dimensional position information and the visual feature similarity of the same target object to obtain a Mahalanobis distance or chi-square statistic;

judging, from the Mahalanobis distances or chi-square statistics of the different target objects, whether any of the different target objects are in fact the same target object;

when tracking the target objects, tracking only one of any target objects found to be the same.
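A simplified sketch of this duplicate check: under a diagonal-covariance assumption the Mahalanobis distance reduces to a variance-weighted Euclidean distance, and the squared distance can be gated against a chi-square quantile to decide whether two tracks are the same physical target. The gate value 7.815 (the 95% chi-square quantile for 3 degrees of freedom) and the diagonal-covariance simplification are assumptions for illustration:

```python
def mahalanobis_sq(a, b, variances):
    """Squared Mahalanobis distance between 3-D positions a and b under a
    diagonal covariance (illustrative simplification)."""
    return sum((x - y) ** 2 / v for x, y, v in zip(a, b, variances))

def same_object(a, b, variances, gate=7.815):
    """Gate at the chi-square 95% quantile for 3 degrees of freedom
    (assumed value); tracks within the gate are merged into one target."""
    return mahalanobis_sq(a, b, variances) < gate
```

A full implementation would also fold the visual-feature similarity into the normalised statistic, as the embodiment describes, rather than gating on position alone.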

As shown in FIG. 6, an embodiment of the present application provides a target tracking apparatus, including:

a first image acquisition module 100, configured to acquire a first target image containing one or more first tracking objects captured by a snapshot camera;

an associated camera determination module 200, configured to determine, according to the positions and/or visual features of the one or more first tracking objects in the first target image, an associated camera having an association relationship with the snapshot camera, where the associated camera has an overlapping field of view with the snapshot camera;

a second image acquisition module 300, configured to acquire a second target image containing one or more second tracking objects captured by the associated camera, where the first target image and the second target image are captured at the same moment;

a matching module 400, configured to match the one or more first tracking objects with the one or more second tracking objects to determine which of the first and second tracking objects belong to the same object, recorded as target objects;

a tracking module 500, configured to determine the three-dimensional position information of the target object based on the position of the snapshot camera and the position of the associated camera, so as to track the target object.

Since the foregoing apparatus embodiments correspond to the method embodiments, the implementation of each module's functions in the apparatus embodiments can refer to the implementations in the method embodiments and is not repeated here.

An embodiment of the present application further provides a device, which may include: one or more processors; and one or more machine-readable media storing instructions that, when executed by the one or more processors, cause the device to perform the method described in FIG. 1. In practice, the device may serve as a terminal device or as a server; examples of terminal devices include smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, in-vehicle computers, desktop computers, set-top boxes, smart televisions, wearable devices, and so on; the embodiments of the present application place no restriction on the specific device.

An embodiment of the present application further provides a non-volatile readable storage medium storing one or more modules (programs) that, when applied to a device, cause the device to execute the instructions for the steps of the method in FIG. 1 of the embodiments of the present application.

FIG. 7 is a schematic diagram of the hardware structure of a terminal device provided by an embodiment of the present application. As shown in the figure, the terminal device may include: an input device 1100, a first processor 1101, an output device 1102, a first memory 1103, and at least one communication bus 1104. The communication bus 1104 is used to implement the communication connections between the components. The first memory 1103 may include high-speed RAM and may also include non-volatile memory (NVM), such as at least one disk memory; the first memory 1103 may store various programs for completing various processing functions and implementing the method steps of this embodiment.

Optionally, the first processor 1101 may be implemented as, for example, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, a microprocessor, or another electronic component, and the first processor 1101 is coupled to the input device 1100 and the output device 1102 through a wired or wireless connection.

Optionally, the input device 1100 may include a variety of input devices, for example at least one of a user-facing user interface, a device-facing device interface, a programmable software interface, a camera, and a sensor. Optionally, the device-facing device interface may be a wired interface for data transmission between devices, or a hardware plug-in interface (such as a USB interface or a serial port) for data transmission between devices. Optionally, the user-facing user interface may be, for example, user-facing control buttons, a voice input device for receiving voice input, and a touch-sensing device through which the user provides touch input (such as a touch screen or touchpad with touch-sensing capability). Optionally, the programmable software interface may be, for example, an entry point through which the user edits or modifies a program, such as an input pin interface or input interface of a chip. The output device 1102 may include output devices such as a display and a speaker.

In this embodiment, the processor of the terminal device includes the functions for executing each module of each apparatus; for the specific functions and technical effects, refer to the foregoing embodiments, which are not repeated here.

FIG. 8 is a schematic diagram of the hardware structure of a terminal device provided by an embodiment of the present application. FIG. 8 is a specific embodiment of the implementation of FIG. 7. As shown in the figure, the terminal device of this embodiment may include a second processor 1201 and a second memory 1202.

The second processor 1201 executes the computer program code stored in the second memory 1202 to implement the method described in FIG. 1 in the above embodiments.

The second memory 1202 is configured to store various types of data to support the operation of the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, and videos. The second memory 1202 may include random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory.

Optionally, the second processor 1201 is provided in a processing component 1200. The terminal device may further include: a communication component 1203, a power component 1204, a multimedia component 1205, a voice component 1206, an input/output interface 1207, and/or a sensor component 1208. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.

The processing component 1200 generally controls the overall operation of the terminal device. The processing component 1200 may include one or more second processors 1201 to execute instructions to complete all or part of the steps of the above data processing method. In addition, the processing component 1200 may include one or more modules that facilitate interaction between the processing component 1200 and the other components. For example, the processing component 1200 may include a multimedia module to facilitate interaction between the multimedia component 1205 and the processing component 1200.

The power component 1204 provides power to the various components of the terminal device. The power component 1204 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device.

The multimedia component 1205 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action but also detect the duration and pressure associated with the touch or swipe operation.

The voice component 1206 is configured to output and/or input voice signals. For example, the voice component 1206 includes a microphone (MIC) that is configured to receive external voice signals when the terminal device is in an operating mode, such as a voice recognition mode. The received voice signal may be further stored in the second memory 1202 or sent via the communication component 1203. In some embodiments, the voice component 1206 further includes a speaker for outputting voice signals.

The input/output interface 1207 provides an interface between the processing component 1200 and peripheral interface modules, which may be click wheels, buttons, and the like. These buttons may include, but are not limited to, a volume button, a start button, and a lock button.

The sensor component 1208 includes one or more sensors for providing status assessments of various aspects of the terminal device. For example, the sensor component 1208 may detect the on/off state of the terminal device, the relative positioning of components, and the presence or absence of user contact with the terminal device. The sensor component 1208 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor component 1208 may further include a camera and the like.

The communication component 1203 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log in to a GPRS network and establish communication with a server through the Internet.

As can be seen from the above, the communication component 1203, the voice component 1206, the input/output interface 1207, and the sensor component 1208 involved in the embodiment of FIG. 8 can all serve as implementations of the input device in the embodiment of FIG. 7.

The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (15)

Application: CN202210301163.8A
Priority date: 2022-03-25 | Filing date: 2022-03-25
Title: Target tracking method, device, medium and equipment
Status: Pending | Publication: CN114663475A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210301163.8A | 2022-03-25 | 2022-03-25 | Target tracking method, device, medium and equipment

Publications (1)

Publication Number | Publication Date
CN114663475A (en) | 2022-06-24

Family

ID=82030997

Family Applications (1)

Application Number | Priority Date | Filing Date | Status | Publication
CN202210301163.8A | 2022-03-25 | 2022-03-25 | Pending | CN114663475A (en)

Country Status (1)

Country | Link
CN (1) | CN114663475A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115861387A (en) * | 2022-12-14 | 2023-03-28 | 之江实验室 | Robot target tracking method, device, robot and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101226640A (en) * | 2007-12-21 | 2008-07-23 | 西北工业大学 | Motion capture method based on multi-binocular stereo vision
CN107240124A (en) * | 2017-05-19 | 2017-10-10 | 清华大学 | Cross-lens multi-object tracking method and device based on spatio-temporal constraints
CN107666590A (en) * | 2016-07-29 | 2018-02-06 | 华为终端(东莞)有限公司 | Target monitoring method, camera, controller and target monitoring system
CN108020158A (en) * | 2016-11-04 | 2018-05-11 | 浙江大华技术股份有限公司 | Three-dimensional position measuring method and device based on a dome camera
CN110443828A (en) * | 2019-07-31 | 2019-11-12 | 腾讯科技(深圳)有限公司 | Object tracking method and device, storage medium and electronic device
CN111222579A (en) * | 2020-01-09 | 2020-06-02 | 北京百度网讯科技有限公司 | Inter-camera obstacle association method, apparatus, device, electronic system and medium
CN113011445A (en) * | 2019-12-19 | 2021-06-22 | 斑马智行网络(香港)有限公司 | Calibration method, identification method, device and equipment
CN113379801A (en) * | 2021-06-15 | 2021-09-10 | 江苏科技大学 | High-altitude parabolic monitoring and positioning method based on machine vision

Similar Documents

Publication | Title
US11145083B2 (en) | Image-based localization
US11668571B2 (en) | Simultaneous localization and mapping (SLAM) using dual event cameras
CN109947886B (en) | Image processing method, image processing device, electronic equipment and storage medium
CN109887003B (en) | Method and equipment for three-dimensional tracking initialization
EP4026092B1 (en) | Scene lock mode for capturing camera images
US20170337701A1 (en) | Method and system for 3D capture based on structure from motion with simplified pose detection
CN111062263B (en) | Method, apparatus, computer apparatus and storage medium for hand gesture estimation
WO2023016271A1 (en) | Attitude determining method, electronic device, and readable storage medium
CN110866497B (en) | Robot positioning and mapping method and device based on point-line feature fusion
CN114063098A (en) | Multi-target tracking method, device, computer equipment and storage medium
CN110349212A (en) | Optimization method and device for simultaneous localization and mapping, medium and electronic equipment
CN115035158B (en) | Target tracking method and device, electronic equipment and storage medium
CN114690900A (en) | Input recognition method, device and storage medium in a virtual scene
CN117896626B (en) | Method, device, equipment and storage medium for detecting motion trajectory with multiple cameras
WO2023016182A1 (en) | Pose determination method and apparatus, electronic device, and readable storage medium
CN113378705B (en) | Lane line detection method, device, equipment and storage medium
CN109785444A (en) | Recognition method, device and mobile terminal for real planes in images
WO2024230379A1 (en) | Method, apparatus and system for measuring road surface roughness, electronic device and medium
WO2023168957A1 (en) | Pose determination method and apparatus, electronic device, storage medium, and program
CN114550086A (en) | Crowd positioning method and device, electronic equipment and storage medium
CN116168383A (en) | Three-dimensional target detection method, device, system and storage medium
CN114663475A (en) | Target tracking method, device, medium and equipment
CN111489376B (en) | Method, device, terminal equipment and storage medium for tracking interaction equipment
CN117132648A (en) | Visual positioning method, electronic equipment and computer readable storage medium
CN110660134B (en) | Three-dimensional map construction method, three-dimensional map construction device and terminal equipment

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
