CN113628237A - Trajectory tracking method, system, electronic device and computer readable medium - Google Patents


Info

Publication number
CN113628237A
Authority
CN
China
Prior art keywords
target object
video
information
acquiring
trajectory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010334554.0A
Other languages
Chinese (zh)
Inventor
杨鸣鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lynxi Technology Co Ltd
Original Assignee
Beijing Lynxi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lynxi Technology Co Ltd
Priority to CN202010334554.0A
Publication of CN113628237A
Status: Pending


Abstract


The present disclosure relates to a trajectory tracking method, system, electronic device, and computer-readable medium. The method includes: acquiring an initial image of a target object; acquiring a first video of the target object, and tracking a moving route of the target object based on the first video to generate trajectory information; acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and adjusting the trajectory information based on the supplementary information to track the trajectory of the target object. The trajectory tracking method, system, electronic device, and computer-readable medium of the present disclosure can accurately locate a target object and determine its motion trajectory in a complex indoor environment.


Description

Trajectory tracking method, system, electronic device and computer readable medium
Technical Field
The present disclosure relates to the field of computer information processing, and in particular, to a trajectory tracking method, system, electronic device, and computer readable medium.
Background
Indoor positioning technology assists positioning in indoor environments where satellite positioning is unavailable because satellite signals are weak and cannot penetrate buildings on their way to the ground, and ultimately locates the current position of an object. With rapid technological development, it improves the quality and efficiency of information services with little interference, and plays an important role in daily life, work, and scientific research. Indoor positioning is highly practical, has large room for expansion and a wide application range, and enables rapid positioning of people and articles in complex environments such as libraries, gymnasiums, underground garages, goods warehouses, unmanned supermarkets, airports, and railway stations.
Indoor positioning systems are typically formed by integrating multiple technologies such as wireless communication, base-station positioning, and inertial navigation, thereby monitoring the positions of people and objects in indoor space. Common indoor positioning techniques include Wi-Fi, Bluetooth, infrared, RFID, and Zigbee. However, these techniques have low accuracy when positioning and tracking pedestrians in a complex indoor environment.
Therefore, there is a need for a new trajectory tracking method, system, electronic device, and computer readable medium.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure and therefore it may contain information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the above, the present disclosure provides a trajectory tracking method, system, electronic device and computer readable medium, which can accurately locate a target object and determine a motion trajectory of the target object in a complex indoor environment.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, a trajectory tracking method is provided, which includes: acquiring an initial image of a target object; acquiring a first video of the target object, and tracking the moving route of the target object based on the first video to generate track information; acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and adjusting the trajectory information based on the supplemental information to perform trajectory tracking of the target object.
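The four steps of this method can be sketched in a minimal, hypothetical form: trajectory points are built from the first video, then corrected using supplementary positions derived from the second video. The class, method names, and data layout below are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Tracker:
    """Sketch of the four-step method: initial image, first-video tracking,
    second-video supplement, and trajectory adjustment."""
    trajectory: list = field(default_factory=list)

    def track_first_video(self, points):
        # Step 2: build trajectory information from first-video observations.
        self.trajectory = list(points)

    def supplement_from_second_video(self, corrections):
        # Steps 3-4: adjust trajectory points using supplementary positions,
        # keyed by the index of the point to correct.
        for idx, pos in corrections.items():
            if 0 <= idx < len(self.trajectory):
                self.trajectory[idx] = pos
        return self.trajectory

t = Tracker()
t.track_first_video([(0, 0), (1, 0), (9, 9)])   # (9, 9) models a tracking error
adjusted = t.supplement_from_second_video({2: (2, 0)})
print(adjusted)  # [(0, 0), (1, 0), (2, 0)]
```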
In an exemplary embodiment of the present disclosure, acquiring an initial image of a target object includes: acquiring an initial image of the target object captured by an initial camera device, wherein the initial image includes a face image, an appearance image, and/or a gait image of the user.
In an exemplary embodiment of the present disclosure, acquiring a first video of the target object, and tracking a moving route of the target object based on the first video to generate trajectory information includes: when the target object moves, acquiring a plurality of first videos generated by tracking and collecting the target object through a plurality of first camera devices; and generating the trajectory information based on the plurality of first videos.
In an exemplary embodiment of the present disclosure, acquiring a plurality of first videos generated by tracking and capturing the target object by a plurality of first cameras while the target object is moving includes: controlling one of the plurality of first camera devices to acquire a first video of the target object; determining the moving direction of the target object according to the first video; and controlling another first camera device to acquire another first video of the target object according to the moving direction.
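The camera handoff described in this embodiment can be illustrated with a simple sketch: project the target one step along its moving direction and find the first camera device whose coverage area contains the projected point. The circular coverage model and all names here are assumptions for illustration only:

```python
def pick_next_camera(position, direction, cameras, step=1.0):
    """Predict which first camera device's area the target will enter next.

    cameras: mapping of camera id -> (center, radius) of its coverage circle.
    The geometry is a hypothetical model; the disclosure fixes no such model.
    """
    # Project the target one step along its moving direction.
    nx = position[0] + direction[0] * step
    ny = position[1] + direction[1] * step
    for cam_id, ((cx, cy), r) in cameras.items():
        if (nx - cx) ** 2 + (ny - cy) ** 2 <= r ** 2:
            return cam_id
    return None  # no camera covers the predicted position

cams = {"cam1": ((0.0, 0.0), 2.0), "cam2": ((5.0, 0.0), 2.0)}
nxt = pick_next_camera((3.5, 0.0), (1.0, 0.0), cams)
print(nxt)  # cam2
```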
In an exemplary embodiment of the present disclosure, generating the trajectory information based on the plurality of first videos includes: determining a plurality of movement routes of the target object in the plurality of first videos; the trajectory information is generated based on a positional relationship between the plurality of movement routes and the plurality of first image pickup devices.
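One simple way to combine per-camera movement routes with camera positions, assuming each route is expressed in its camera's local frame and each camera's installed position acts as an additive offset, is the following sketch (function and variable names are illustrative):

```python
def merge_routes(routes, camera_positions):
    """Convert per-camera local movement routes into one global trajectory
    by offsetting each route with its camera's installed position."""
    trajectory = []
    for cam_id, route in routes:
        ox, oy = camera_positions[cam_id]
        trajectory.extend((ox + x, oy + y) for x, y in route)
    return trajectory

positions = {"cam1": (0, 0), "cam2": (5, 0)}
routes = [("cam1", [(0, 0), (2, 0)]), ("cam2", [(-2, 0), (0, 0)])]
print(merge_routes(routes, positions))  # [(0, 0), (2, 0), (3, 0), (5, 0)]
```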
In an exemplary embodiment of the present disclosure, acquiring a second video of the target object based on the initial image includes: acquiring a plurality of real-time videos acquired by a plurality of second camera devices; and performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.
In an exemplary embodiment of the present disclosure, performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object includes: performing face recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object; and/or performing video structural calculation on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object; and/or performing gait recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.
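A hedged sketch of selecting the second video by matching against the initial image: cosine similarity over illustrative feature vectors stands in here for the face, gait, or video-structuring recognition that the embodiment delegates to existing algorithms; the metric and threshold are assumptions:

```python
import math

def match_target(initial_feature, live_features, threshold=0.8):
    """Pick the real-time video whose feature vector best matches the
    feature extracted from the initial image."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    best_id, best_score = None, threshold
    for video_id, feature in live_features.items():
        score = cosine(initial_feature, feature)
        if score >= best_score:
            best_id, best_score = video_id, score
    return best_id

live = {"videoA": [1.0, 0.0], "videoB": [0.9, 0.1]}
best = match_target([1.0, 0.05], live)
print(best)  # videoA
```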
In an exemplary embodiment of the present disclosure, generating the supplementary information based on the second video includes: determining a real-time location of the target object based on the second video; and generating the supplementary information based on a positional relationship between the real-time position and the plurality of second image pickup devices.
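Generating supplementary information from the positional relationship between the real-time position and a second camera device can be as simple as mapping the in-frame position through the camera's fixed installation point. The additive offset model below is an assumption for illustration:

```python
def supplementary_position(local_position, camera_position):
    """Turn a target's position in a second camera device's frame into a
    global position using the camera's fixed installation point."""
    return (camera_position[0] + local_position[0],
            camera_position[1] + local_position[1])

print(supplementary_position((1.0, 2.0), (10.0, 10.0)))  # (11.0, 12.0)
```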
In an exemplary embodiment of the present disclosure, the method further comprises: storing a plurality of pieces of historical trajectory information of the target object; and analyzing the behavior of the target object based on the plurality of pieces of historical trajectory information.
In an exemplary embodiment of the present disclosure, the method further comprises: generating early warning information when the trajectory information of the target object meets a preset condition.
In an exemplary embodiment of the present disclosure, the method further comprises: acquiring feature information of the target object through an information acquisition device, wherein the feature information includes a two-dimensional code and/or a radio frequency identification code; and storing a plurality of pieces of historical trajectory information of the target object according to the feature information.
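Associating historical trajectory information with the target's feature code (a two-dimensional code or RFID identifier) can be sketched as a small keyed store; the class and method names are illustrative, not from the disclosure:

```python
class TrackStore:
    """Store historical trajectories keyed by a target's feature code."""

    def __init__(self):
        self.history = {}

    def save(self, feature_code, trajectory):
        # Append one trajectory under the target's code.
        self.history.setdefault(feature_code, []).append(trajectory)

    def lookup(self, feature_code):
        # All stored trajectories for this code (empty list if unknown).
        return self.history.get(feature_code, [])

store = TrackStore()
store.save("qr-001", [(0, 0), (1, 1)])
store.save("qr-001", [(2, 2)])
print(len(store.lookup("qr-001")))  # 2
```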
According to an aspect of the present disclosure, a trajectory tracking system is provided, the system comprising: the initial camera device is used for acquiring an initial image of the target object; a plurality of first image pickup devices for acquiring a first video of the target object; a plurality of second camera devices for acquiring a second video; and the processing device is used for tracking the moving route of the target object based on the first video to generate track information, determining a second video of the target object based on the initial image, generating supplementary information according to the second video, and adjusting the track information based on the supplementary information to track the target object.
In an exemplary embodiment of the present disclosure, the plurality of first image pickup devices are disposed at predetermined positions according to a first rule, and the first image pickup devices include fisheye cameras.
In one exemplary embodiment of the present disclosure, the plurality of second image pickup devices are disposed at predetermined positions according to a second rule.
According to an aspect of the present disclosure, an electronic device is provided, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described above.
According to an aspect of the disclosure, a computer-readable medium is proposed, on which a computer program is stored, which program, when being executed by a processor, carries out the method as above.
According to the trajectory tracking method, system, electronic device and computer readable medium of the present disclosure, an initial image of a target object is obtained; acquiring a first video of the target object, and tracking the moving route of the target object based on the first video to generate track information; acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and adjusting the trajectory information based on the supplementary information to track the trajectory of the target object, so that the target object can be accurately positioned and the motion trajectory of the target object can be determined in a complex indoor environment.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.
FIG. 1 is a block diagram illustrating an application scenario of a trajectory tracking system in accordance with an exemplary embodiment.
FIG. 2 is a schematic diagram illustrating an application scenario of a trajectory tracking system according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating a trajectory tracking method according to an exemplary embodiment.
FIG. 4 is a flow chart illustrating a trajectory tracking method according to another exemplary embodiment.
FIG. 5 is a flow chart illustrating a trajectory tracking method according to another exemplary embodiment.
FIG. 6 is a block diagram illustrating a trajectory tracking system in accordance with an exemplary embodiment.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 8 is a block diagram illustrating a computer-readable medium in accordance with an example embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, systems, steps, and so forth. In other instances, well-known methods, systems, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first component discussed below may be termed a second component without departing from the teachings of the disclosed concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It is to be understood by those skilled in the art that the drawings are merely schematic representations of exemplary embodiments, and that the blocks or processes shown in the drawings are not necessarily required to practice the present disclosure and are, therefore, not intended to limit the scope of the present disclosure.
As described above, common indoor positioning techniques include Wi-Fi, Bluetooth, infrared, RFID, and Zigbee. The inventor of the present disclosure has found that these technologies have the following disadvantages in practical application:
Wi-Fi positioning: a Wi-Fi access point usually covers only an area with a radius of about 90 meters and is easily interfered with by other signals, which reduces accuracy; the locator also consumes considerable energy.
Bluetooth positioning: in a complex spatial environment, a Bluetooth positioning system is somewhat unstable and suffers heavy interference from noise signals.
Infrared technology: indoor positioning is performed by installing optical sensors indoors that receive modulated infrared rays transmitted by mobile devices (infrared IR tags). However, because light cannot pass through obstacles, infrared propagates only along the line of sight, is easily disturbed by other light sources, and has a short transmission distance, so its indoor positioning performance is poor. A device placed in a pocket or blocked by a wall cannot work normally, and a receiving antenna must be installed in every room or corridor, making the overall cost high.
RFID positioning performs contactless bidirectional radio-frequency communication to exchange data, identifying and positioning mobile devices. Its disadvantages include a short operating distance.
To overcome these shortcomings of existing indoor positioning technologies, the present disclosure provides a novel trajectory tracking method and system that use three types of cameras to position and track people and determine their motion trajectories. A detailed description is given below in conjunction with specific embodiments.
FIG. 1 is a block diagram illustrating an application scenario of a trajectory tracking system in accordance with an exemplary embodiment.
As shown in fig. 1, the system architecture 10 may include image capturing apparatuses 101, 102, 103, a network 104, and a processing device 105. The network 104 is a medium providing communication links between the image capturing apparatuses 101, 102, 103 and the processing device 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
The image capturing apparatuses 101, 102, 103 can interact with the processing device 105 through the network 104 to receive or transmit data. Various communication client applications may be installed on the image capturing apparatuses 101, 102, 103.
The camera devices 101, 102, 103 may be various electronic devices having a camera function, including, but not limited to, professional cameras, CCD cameras, web cameras, broadcast-grade cameras, business-grade cameras, home-grade cameras, studio/field cameras, camcorders, monochrome cameras, color cameras, infrared cameras, X-ray cameras, surveillance cameras, scout cameras, button cameras, and the like.
The image capturing apparatus 101 may be an initial camera device for acquiring an initial image of a target object.
The camera apparatus 102 may include a plurality of first camera devices for acquiring a first video of the target object.
The image capturing apparatus 103 may include a plurality of second camera devices for acquiring a second video of the target object based on the initial image.
The processing device 105 may provide various services, for example background processing of pictures or videos captured by the image capturing apparatuses 101, 102, 103. The processing device 105 may analyze the received picture or video data and feed the processing result (e.g., the trajectory of the target object) back to the administrator.
The processing device 105 may, for example, acquire an initial image of the target object; acquire a first video of the target object and generate trajectory information by tracking the moving route of the target object based on the first video; obtain a second video of the target object based on the initial image and generate supplementary information based on the second video; and adjust the trajectory information based on the supplementary information to perform trajectory tracking of the target object.
The processing device 105 may be a single physical processing device or may be composed of a plurality of processing devices. It should be noted that the trajectory tracking method provided by the embodiments of the present disclosure may be executed jointly by the image capturing apparatuses 101, 102, 103 and the processing device 105, and accordingly a trajectory tracking system may be deployed across them. The processing device 105 may control the image capturing apparatuses 101, 102, 103 to capture, transmit, and receive data, or the image capturing apparatuses may actively capture data and transmit it to the processing device 105 for processing.
The processing device 105 may be a server or a terminal device, and the terminal device may include at least one camera device, which is not limited by the present disclosure. In one embodiment, any one of the image capturing apparatuses 101, 102, 103 (for example, the image capturing apparatus 103) may be designated as the processing device; in this case, the image capturing apparatus 103 may, in addition to capturing its own video, receive video data sent by the image capturing apparatuses 101, 102 and process the video data according to the method described in the embodiments of the present disclosure to track a user, which is not limited by the present disclosure.
FIG. 2 is a schematic diagram illustrating an application scenario of a trajectory tracking system according to an exemplary embodiment. As shown in fig. 2, the initial camera device may include a camera L and an information collecting device; the first camera devices may be image acquisition devices 1-19; the second camera devices may be image acquisition devices A-K.
The camera L identifies people at the entrance. For example, information can be collected at the entrance gate through the camera L, a two-dimensional code, an RFID card, and the like: various types of feature information of users entering the indoor area are collected, and correspondences among them are established. For example, in an unmanned supermarket scenario, a user enters by scanning a two-dimensional code; the two-dimensional code device acquires the user's code information, the camera captures the user's image, and a correspondence between the two is established.
The image acquisition devices 1-19 are mounted on the indoor ceiling; for example, a plurality of fisheye cameras can shoot downward from the ceiling and capture the top-of-head features of a moving object. For example, while camera 1 observes the top of a user's head, camera 1 may track the user's motion and determine the moving direction of the target; when camera 1 determines that the user is about to enter the area covered by camera 2, camera 2 is notified and tracking switches to camera 2, with the target's information synchronized into the trajectory formed by camera 2. In this way, cameras 1-19 together track the user's motion trajectory.
The image acquisition devices A-K are cameras installed at fixed positions and used to correct the information of the moving target. When cameras 1-19 on the ceiling track a target object, problems such as a broken tracking chain or tracking errors caused by occlusion may occur. The A-K cameras acquire and analyze image information of users, and correct or re-determine which user a given head-top belongs to through algorithms such as face recognition, video structuring, and gait recognition. For example, camera 1 may observe the tops of two users' heads at the same time and confuse them; the A-K cameras can acquire images of the users and, combining the various types of user information (such as face, clothing, and gait) previously acquired by camera L, determine which head-top belongs to which user and continue tracking.
According to the trajectory tracking method, an initial image of a target object is obtained; a first video of the target object is acquired, and the moving route of the target object is tracked based on the first video to generate trajectory information; a second video of the target object is acquired based on the initial image, and supplementary information is generated based on the second video; and the trajectory information is adjusted based on the supplementary information to track the trajectory of the target object, so that the target object can be accurately positioned and its motion trajectory determined in a complex indoor environment.
FIG. 3 is a flow chart illustrating a trajectory tracking method according to an exemplary embodiment. The trajectory tracking method 30 includes steps S302 to S308 and is applicable to a server or a terminal device, wherein the terminal device may include at least one camera. In one embodiment, any one of the image capturing apparatuses 101, 102, 103 (for example, the image capturing apparatus 103) may be designated as the terminal; in this case, the image capturing apparatus 103 may, in addition to capturing its own video, receive video data sent by the image capturing apparatuses 101, 102 and process the video data according to the method described in the embodiments of the present disclosure to track a user, which is not limited by the present disclosure.
As shown in fig. 3, in S302, an initial image of the target object is acquired. This may include acquiring an initial image of the target object through an initial camera device, and may further include: acquiring feature information of the target object through an information acquisition device, wherein the feature information may include a two-dimensional code and/or a radio frequency identification code; and associating and storing a plurality of pieces of historical trajectory information of the target object according to the feature information.
Further, the user information may include a face, clothes, gait, and the like. For example, in an unmanned supermarket scene, a user enters the unmanned supermarket through a two-dimensional code, the two-dimensional code can be used as characteristic information of the user, a two-dimensional code device acquires two-dimensional code information of the user, a camera collects an image of the user, the image of the user can be used as an initial image, and a corresponding relation between the image of the user and the two-dimensional code is established in a background processing device.
In S304, a first video of the target object is acquired, and a moving route of the target object is tracked based on the first video to generate track information. Wherein the first video comprises at least one frame of image.
In one possible implementation, step S304 may include: when the target object moves, acquiring a plurality of first videos generated by tracking and collecting the target object through a plurality of first camera devices; and generating the trajectory information based on the plurality of first videos.
The details of "acquiring the first video of the target object and tracking the moving route of the target object based on the first video to generate the trajectory information" are described in the embodiment corresponding to fig. 4. All of the first camera devices may track the target object to generate a plurality of first videos, or some first camera devices may generate a single first video while others generate several, which is not limited in this disclosure.
In S306, a second video of the target object is acquired based on the initial image, and supplemental information is generated based on the second video. Wherein the second video comprises at least one frame of image.
In one possible implementation, step S306 may include: acquiring a plurality of real-time videos acquired by a plurality of second camera devices; and performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.
The details of "acquiring a second video of the target object based on the initial image, and generating supplemental information based on the second video" will be described in the embodiment corresponding to fig. 5.
In S308, the trajectory information is adjusted based on the supplemental information to perform trajectory tracking of the target object.
In one embodiment, the method further comprises storing a plurality of pieces of historical trajectory information of the target object, and analyzing the behavior of the target object based on the plurality of pieces of historical trajectory information.
In one embodiment, the method further comprises generating early warning information when the trajectory information of the target object meets a preset condition.
In one embodiment, in an unmanned supermarket, a face image of a user collected by a camera can be stored in the processing device. When the user reappears, the user's data record can be matched through face comparison, and from that record the user's purchasing habits can be analyzed continuously, such as preferences, living habits, and schedule. Further, a warning may be issued when a condition is met; for example, if a user appears in the supermarket unusually often, the user may be flagged as a possible malicious actor and an early warning generated so that the manager pays attention to the user.
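The frequency-based early warning condition in this embodiment can be sketched as follows; the time window and visit threshold are illustrative values, not specified by the disclosure:

```python
def should_warn(visit_timestamps, window=3600, max_visits=5):
    """Warn when a target's appearances within the trailing time window
    exceed a threshold (window in seconds; both values are illustrative)."""
    if not visit_timestamps:
        return False
    latest = visit_timestamps[-1]
    recent = [t for t in visit_timestamps if latest - t <= window]
    return len(recent) > max_visits

visits = [0, 600, 1200, 1800, 2400, 3000]  # six visits within one hour
print(should_warn(visits))  # True
```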
According to the trajectory tracking method of the present disclosure, positioning and tracking of people and determination of their motion trajectories can be achieved through the three types of cameras. Data analysis can be performed on the acquired motion trajectories, and the analysis results can be used for early warning and the like.
It should be clearly understood that this disclosure describes how to make and use particular examples, but the principles of this disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
FIG. 4 is a flow chart illustrating a trajectory tracking method according to another exemplary embodiment. The flow 40 shown in fig. 4 is a detailed description of S304 "acquiring the first video of the target object and tracking the moving route of the target object based on the first video to generate track information" in the flow shown in fig. 3.
As shown in fig. 4, in S402, one of the plurality of first camera devices is controlled to acquire one first video of the target object. The plurality of first camera devices may be arranged on an indoor ceiling and may be fisheye cameras shooting downward from the ceiling; they may also be cameras of different types working in cooperation, and the present disclosure is not limited in this respect.
In S404, a moving direction of the target object is determined according to the first video. When the current first camera device captures the user, the direction of the user's trajectory can be determined from the user's position within the field of view of the current first camera device.
In S406, another first camera device is controlled to acquire another first video of the target object according to the moving direction. The image area the user is about to enter is determined by analyzing the moving direction, the first camera device covering that area is identified, and when the user enters the area, tracking of the user's trajectory is switched to that camera device.
In S408, a plurality of moving routes of the target object in the plurality of first videos are determined.
In S410, the trajectory information is generated based on the plurality of moving routes and the positional relationship between the plurality of first camera devices. During video capture, the trajectory information of the user can be generated from the positions of the different first camera devices and the user's moving routes in the different areas.
In one embodiment, for example, the area covered by each first camera device that the user passes through is marked to outline the approximate range of the user's trajectory; the trajectory of the user within each area is then drawn, the trajectories of adjacent areas are connected, and the user's trajectory information is thereby determined.
FIG. 5 is a flow chart illustrating a trajectory tracking method according to another exemplary embodiment. The flow 50 shown in fig. 5 is a detailed description of S306 "acquiring a second video of the target object based on the initial image and generating supplementary information based on the second video" in the flow shown in fig. 3.
As shown in fig. 5, in S502, a plurality of real-time videos captured by a plurality of second camera devices are acquired. The second camera devices can be arranged at fixed positions and are used for correcting the information of the moving target; they can recover tracking when the tracking chain of the user is lost or becomes incorrect due to occlusion or similar problems. Each real-time video comprises at least one frame of image.
In a possible implementation manner, the second video corresponding to the target object may be determined in at least one of steps S504, S506 and S508.
In S504, face recognition is performed on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object. Face recognition is a biometric technology that identifies a person based on facial feature information. It uses a camera or video camera to collect images or video streams containing faces, automatically detects and tracks the faces in those images, and then performs recognition on the detected faces.
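As a non-limiting sketch of the matching in S504, assume face embeddings have already been extracted from the initial image and from each real-time video frame by some off-the-shelf face-recognition model; the target's video can then be selected by cosine similarity. The threshold and all names are illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_target(initial_embedding, frame_embeddings, threshold=0.6):
    """Return indices of real-time frames whose face embedding matches the target."""
    return [i for i, emb in enumerate(frame_embeddings)
            if cosine_similarity(initial_embedding, emb) >= threshold]
```

A real-time video would then be designated the second video when enough of its frames match the initial image's embedding.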
In S506, video structuring calculation is performed on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object. Video structuring refers to building a structured video big-data platform from attributes presented in the video picture, such as people, vehicles, objects, colors, and numbers. After structuring, the video is stored in a corresponding structured data warehouse, greatly reducing storage requirements. A person in the video image can be structurally processed to obtain various structured feature attributes of the user, including clothing and accessory features (coats, trousers, skirts and dresses, shoes, hats, sunglasses, scarves, and belts), carried-object features (shoulder bags, backpacks, handbags, trolley cases, umbrellas), and body features (hair, face).
In S508, gait recognition is performed on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object. Gait recognition is a biometric technology that identifies a person through walking posture; compared with other biometric technologies, it has the advantages of being non-contact, working at long range, and being difficult to disguise. In the field of intelligent video monitoring, it offers advantages over image-based recognition.
In S510, a real-time location of the target object is determined based on the second video.
In S512, the supplementary information is generated based on the real-time position and the positional relationship of the plurality of second image pickup devices.
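The correction in S308/S512 — using a fixed second camera's position fix to adjust the first-camera trajectory — can be sketched as snapping the trajectory sample closest in time to each fix. The time-stamped tuple format and the nearest-in-time rule are illustrative assumptions, not the disclosed implementation:

```python
def adjust_trajectory(trajectory, fixes):
    """Adjust trajectory samples using supplementary position fixes.

    trajectory: list of (t, x, y) samples from the first camera devices.
    fixes: list of (t, x, y) position fixes derived from the second videos.
    """
    corrected = list(trajectory)
    for ft, fx, fy in fixes:
        # Find the trajectory sample closest in time to this fix ...
        i = min(range(len(corrected)), key=lambda k: abs(corrected[k][0] - ft))
        # ... and replace its position with the more reliable fixed-camera one.
        corrected[i] = (corrected[i][0], fx, fy)
    return corrected
```

In practice one might instead blend the two positions or re-run a smoother over the corrected samples; the snap here just illustrates the adjustment step.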
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments may be implemented as computer programs executed by a CPU. When executed by the CPU, the computer program performs the functions defined by the above-described methods provided by the present disclosure. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
FIG. 6 is a block diagram illustrating a trajectory tracking system in accordance with an exemplary embodiment. As shown in fig. 6, the trajectory tracking system 60 includes: an initial image capturing device 602, a first image capturing device 604, a second image capturing device 606, and a processing device 608.
The initial camera device 602 is used for acquiring an initial image of a target object.
The plurality of first camera devices 604 are used for acquiring a first video of the target object; the plurality of first camera devices 604 are arranged at predetermined positions according to a first rule, and the first camera devices 604 comprise fisheye cameras.
The plurality of second camera devices 606 are used for acquiring a second video of the target object based on the initial image; the plurality of second camera devices 606 are arranged at predetermined positions according to a second rule.
The processing device 608 is configured to track the moving route of the target object based on the first video to generate trajectory information, generate supplementary information based on the second video, and adjust the trajectory information based on the supplementary information to track the target object.
The first rule may be determined according to the area to be tracked and the acquisition range of the first camera devices. For example, if the area to be tracked is an indoor supermarket, a plurality of first camera devices can be distributed, uniformly or non-uniformly, on the ceiling of the supermarket according to its floor area and the acquisition range of each device, so that the first camera devices together cover every area of the supermarket. The second rule may be determined according to the area to be tracked and the acquisition range of the second camera devices. For example, the second camera devices may be arranged in a number of fixed areas of the supermarket, preferably areas where users appear frequently, so that the second video is collected by the second camera devices in those areas and the supplementary information generated from it can correct the trajectory information of the target object.
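A minimal sketch of checking the first rule's coverage requirement: sample floor points on a grid and verify that every point lies within at least one ceiling camera's circular acquisition range. The circular-footprint model, coordinates, and grid step are all illustrative assumptions:

```python
import math

def covers_floor(cameras, radius, width, height, step=1.0):
    """Return True if every sampled floor point of a width x height area
    lies within `radius` of at least one camera position in `cameras`."""
    x = 0.0
    while x <= width:
        y = 0.0
        while y <= height:
            if not any(math.hypot(x - cx, y - cy) <= radius
                       for cx, cy in cameras):
                return False  # found an uncovered floor point
            y += step
        x += step
    return True
```

Such a check could guide whether a uniform or non-uniform ceiling layout satisfies "the first camera devices can collect any area of the indoor supermarket."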
According to the trajectory tracking system of the present disclosure, an initial image of a target object is acquired; a first video of the target object is acquired, and the moving route of the target object is tracked based on the first video to generate trajectory information; a second video of the target object is acquired based on the initial image, and supplementary information is generated based on the second video; and the trajectory information is adjusted based on the supplementary information to track the trajectory of the target object, so that the target object can be accurately located and its motion trajectory determined in a complex indoor environment.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
An electronic device 700 according to this embodiment of the disclosure is described below with reference to fig. 7. The electronic device 700 shown in fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 is embodied in the form of a general purpose computing device. The components of the electronic device 700 may include, but are not limited to: at least one processing unit 710, at least one memory unit 720, a bus 730 that connects the various system components (including the memory unit 720 and the processing unit 710), a display unit 740, and the like.
The memory unit stores program code executable by the processing unit 710 to cause the processing unit 710 to perform the steps according to various exemplary embodiments of the present disclosure described in the method sections above of this specification. For example, the processing unit 710 may perform the steps shown in figs. 3, 4, and 5.
The memory unit 720 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 7201 and/or a cache memory unit 7202, and may further include a read only memory unit (ROM) 7203.
The memory unit 720 may also include a program/utility 7204 having a set (at least one) of program modules 7205, such program modules 7205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 730 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 700 may also communicate with one or more external devices 700' (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 700 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 750. Also, the electronic device 700 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 760. The network adapter 760 may communicate with other modules of the electronic device 700 via the bus 730. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, as shown in fig. 8, the technical solution according to the embodiment of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a processing apparatus, or a network device, etc.) to execute the above method according to the embodiment of the present disclosure.
The software product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium other than a readable storage medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or processing apparatus. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform the following functions: acquiring an initial image of a target object; acquiring a first video of the target object, and tracking the moving route of the target object based on the first video to generate trajectory information; acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and adjusting the trajectory information based on the supplementary information to perform trajectory tracking of the target object.
Those skilled in the art will appreciate that the modules described above may be distributed in the apparatus as described in the embodiments, or may be located, with corresponding changes, in one or more apparatuses different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a processing device, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that the present disclosure is not limited to the precise arrangements, instrumentalities, or instrumentalities described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (15)

Translated from Chinese

1. A trajectory tracking method, comprising:
acquiring an initial image of a target object;
acquiring a first video of the target object, and tracking a moving route of the target object based on the first video to generate trajectory information;
acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and
adjusting the trajectory information based on the supplementary information to perform trajectory tracking of the target object.

2. The method of claim 1, wherein acquiring an initial image of the target object comprises:
acquiring an initial image of the target object captured by an initial camera device, the initial image comprising a face image and/or an appearance image and/or a gait image of a user.

3. The method of claim 1, wherein acquiring the first video of the target object, and tracking the moving route of the target object based on the first video to generate trajectory information, comprises:
while the target object is moving, acquiring a plurality of first videos generated by a plurality of first camera devices tracking and capturing the target object; and
generating the trajectory information based on the plurality of first videos.

4. The method of claim 3, wherein acquiring the plurality of first videos generated by the plurality of first camera devices tracking and capturing the target object comprises:
controlling one of the plurality of first camera devices to acquire at least one first video of the target object;
determining a moving direction of the target object according to the at least one first video; and
controlling another first camera device to acquire at least one first video of the target object according to the moving direction.

5. The method of claim 3 or 4, wherein generating the trajectory information based on the plurality of first videos comprises:
determining a plurality of moving routes of the target object in the plurality of first videos; and
generating the trajectory information based on the plurality of moving routes and the positional relationship between the plurality of first camera devices.

6. The method of claim 1, wherein acquiring a second video of the target object based on the initial image comprises:
acquiring a plurality of real-time videos captured by a plurality of second camera devices; and
performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.

7. The method of claim 6, wherein performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object comprises:
performing face recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object; and/or
performing video structuring calculation on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object; and/or
performing gait recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.

8. The method of claim 1, wherein generating supplementary information based on the second video comprises:
determining a real-time position of the target object based on the second video; and
generating the supplementary information based on the real-time position and the positional relationship of the plurality of second camera devices.

9. The method of claim 1, further comprising:
acquiring characteristic information of the target object through an information collection device, the characteristic information comprising a two-dimensional code and/or a radio frequency identification code;
storing a plurality of pieces of historical trajectory information of the target object according to the characteristic information; and
analyzing the behavior of the target object based on the plurality of pieces of historical trajectory information.

10. The method of claim 1, further comprising:
generating early warning information when the trajectory information of the target object satisfies a preset condition.

11. A trajectory tracking system, comprising:
an initial camera device for acquiring an initial image of a target object;
a plurality of first camera devices for acquiring a first video of the target object;
a plurality of second camera devices for acquiring a second video; and
a processing device for tracking a moving route of the target object based on the first video to generate trajectory information, determining a second video of the target object based on the initial image, generating supplementary information according to the second video, and adjusting the trajectory information based on the supplementary information to perform trajectory tracking of the target object.

12. The system of claim 11, wherein the plurality of first camera devices are arranged at predetermined positions according to a first rule, and the first camera devices comprise fisheye cameras.

13. The system of claim 11, wherein the plurality of second camera devices are arranged at predetermined positions according to a second rule.

14. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method of any one of claims 1-10.

15. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-10.
CN202010334554.0A — filed 2020-04-24 — Trajectory tracking method, system, electronic device and computer readable medium — Pending — published as CN113628237A (en)

Priority Applications (1)

- CN202010334554.0A (CN113628237A) — priority date 2020-04-24, filing date 2020-04-24 — Trajectory tracking method, system, electronic device and computer readable medium


Publications (1)

- CN113628237A — published 2021-11-09

Family

ID=78376269

Family Applications (1)

- CN202010334554.0A — Pending — CN113628237A

Country Status (1)

- CN: CN113628237A (en)

Cited By (3)

* Cited by examiner, † Cited by third party

- CN114200934A — priority 2021-12-06, published 2022-03-18 — 北京云迹科技股份有限公司 — Robot target following control method, device, electronic device and storage medium
- CN114494412A — priority 2021-12-24, published 2022-05-13 — 成都鹏业软件股份有限公司 — Indoor monitoring method, device, system and storage medium
- CN116778550A — priority 2023-06-06, published 2023-09-19 — 广东科诺勘测工程有限公司 — Personnel tracking method, device and equipment for construction area and storage medium

Citations (7)

* Cited by examiner, † Cited by third party

- CN101572804A — priority 2009-03-30, published 2009-11-04 — 浙江大学 — Multi-camera intelligent control method and device
- US20130307974A1 — priority 2012-05-17, published 2013-11-21 — Canon Kabushiki Kaisha — Video processing apparatus and method for managing tracking object
- CN108010008A — priority 2017-12-01, published 2018-05-08 — 北京迈格威科技有限公司 — Method for tracing, device and the electronic equipment of target
- CN110188691A — priority 2019-05-30, published 2019-08-30 — 银河水滴科技(北京)有限公司 — A kind of motion track determines method and device
- CN110232712A — priority 2019-06-11, published 2019-09-13 — 武汉数文科技有限公司 — Indoor occupant positioning and tracing method and computer equipment
- CN110717414A — priority 2019-09-24, published 2020-01-21 — 青岛海信网络科技股份有限公司 — Target detection tracking method, device and equipment
- CN110866480A — priority 2019-11-07, published 2020-03-06 — 浙江大华技术股份有限公司 — Object tracking method and device, storage medium and electronic device


Non-Patent Citations (1)

- XIUZHUANG ZHOU ET AL: "Multiple face tracking and recognition with identity-specific localized metric learning", Pattern Recognition, vol. 75, 23 September 2017, pages 41-50, XP085250138, DOI: 10.1016/j.patcog.2017.09.022 *


Similar Documents

- US10812761B2 — Complex hardware-based system for video surveillance tracking
- KR102215041B1 — Method and system for tracking an object in a defined area
- CN110823218B — Object tracking system
- Natarajan et al. — Multi-camera coordination and control in surveillance systems: A survey
- Fusco et al. — Indoor localization for visually impaired travelers using computer vision on a smartphone
- EP2864930B1 — Self learning face recognition using depth based tracking for database generation and update
- US20090066513A1 — Object detecting device, object detecting method and object detecting computer program
- CN111311649A — Indoor internet-of-things video tracking method and system
- CN113628237A — Trajectory tracking method, system, electronic device and computer readable medium
- CN112381853B — Device and method for detecting, tracking and identifying people using wireless signals and images
- Zhang et al. — Indoor space recognition using deep convolutional neural network: a case study at MIT campus
- Chen et al. — Smart campus care and guiding with dedicated video footprinting through Internet of Things technologies
- Van et al. — Things in the air: Tagging wearable IoT information on drone videos
- Belka et al. — Integrated visitor support system for tourism industry based on IoT technologies
- US10674117B2 — Enhanced video system
- CN112399137B — Method and device for determining movement track
- Rehman et al. — Human tracking robotic camera based on image processing for live streaming of conferences and seminars
- Yamaguchi et al. — Towards intelligent environments: Human sensing through 3d point cloud
- CN111310524B — Multi-video association method and device
- Al-Salhie et al. — Multimedia surveillance in event detection: crowd analytics in Hajj
- Pennisi et al. — Multi-robot surveillance through a distributed sensor network
- JP2019200535A — Movement information utilization apparatus and movement information utilization method
- Mandel et al. — People tracking in ambient assisted living environments using low-cost thermal image cameras
- Song et al. — A novel dynamic model for multiple pedestrians tracking in extremely crowded scenarios
- Liang et al. — Enhancing person identification for smart cities: Fusion of video surveillance and wearable device data based on machine learning

Legal Events

- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
