CN114064669B - Personnel track feature recording and feature updating method - Google Patents

Personnel track feature recording and feature updating method

Info

Publication number
CN114064669B
Authority
CN
China
Prior art keywords
personnel
track
sequence
images
sequence data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111292567.7A
Other languages
Chinese (zh)
Other versions
CN114064669A (en)
Inventor
王贝贝
杨立成
张艺觉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Hongmu Intelligent Technology Co., Ltd.
Original Assignee
Shanghai Hongmu Intelligent Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Hongmu Intelligent Technology Co., Ltd.
Priority to CN202111292567.7A
Publication of CN114064669A
Application granted
Publication of CN114064669B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention discloses a personnel track feature recording and feature updating method. A terminal captures personnel images, compares them with a terminal local portrait feature library to determine a personnel identity result, and sends the result to the server side. The server matches corresponding or similar track sequence data, or creates new track sequence data, according to the information sent by the terminal, and updates the track sequence data with the captured personnel images. The terminal local portrait feature library and the cloud track library hold one-to-one corresponding personnel identity (ID) information for the same person. Track sequence data comprise a full sequence and a representative photo sequence: the full sequence contains all personnel images belonging to the same person, while the representative photo sequence holds the images of the full sequence with the best quality and angle representativeness, is capped at an upper limit greater than 1, and is updated according to quality whenever the track sequence data are updated. The invention prevents the comparison speed from dropping as accumulated data grow, ensuring both the speed and the accuracy of track comparison.

Description

Personnel track feature recording and feature updating method
Technical Field
The invention relates to a personnel track management method, in particular to a personnel track feature recording and feature updating method.
Background
Face recognition technology has been widely deployed, and large numbers of portrait-capturing cameras are installed at checkpoints. Because every person passing a monitoring point is captured and processed, the computing power required for portrait detection and recognition has reached an unprecedented level. The common approach in existing portrait recognition systems is to compare each captured face with a recorded identity portrait feature library; a successful comparison yields a unique person ID, and the sequence of such hits forms a track.
However, the quality of monitored snapshots is uncontrollable, and the certificate photo used as the comparison basis often differs markedly from the monitored picture in age, imaging quality, and other respects, so snapshots from multiple cameras cannot be compared accurately. To address this, historical snapshots are usually used as the comparison basis to improve accuracy. The stored volume of snapshots, however, grows rapidly over time; when the capture terminals are numerous and widely distributed and many people are captured, the computing power demanded of the server becomes extremely high, and as the snapshot volume increases, track comparison for a single portrait picture becomes slower and slower.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention provides a personnel track feature recording and feature updating method, which solves the problems that existing identity confirmation and history track searching are inaccurate and become slower and slower as the snapshot volume increases.
The technical scheme of the invention is as follows: a personnel track feature recording and feature updating method comprises the following steps:
Step 1, the terminal captures a personnel image, extracts its features, compares them with the terminal local portrait feature library to determine a personnel identity result, and sends the comparison result and the captured personnel image to the server side, wherein the personnel identity result is either an in-library personnel identity obtained by matching against the terminal local portrait feature library, or a stranger when no match can be formed;
Step 2, the server side searches the cloud track library for the corresponding track sequence data according to the in-library personnel identity, or, when the terminal cannot form a match and reports a stranger, either creates new track sequence data or searches for track sequence data whose features are similar to the captured personnel image;
Step 3, updating the track sequence data newly created or found in step 2 with the captured personnel image obtained in step 1;
The terminal local portrait feature library and the cloud track library hold personnel identity ID information in one-to-one correspondence for the same person. Track sequence data comprise a full sequence and a representative photo sequence: the full sequence contains all personnel images belonging to the same person, while the representative photo sequence holds the several images of the full sequence with the best quality and angle representativeness and is capped at an upper limit greater than 1. In step 3, updating the track sequence data means adding the captured personnel image to the full sequence and updating the representative photo sequence according to quality; in step 2, searching for track sequence data whose features are similar to the captured personnel image means comparing the similarity between the captured personnel image and the representative photo sequence.
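For illustration only, a minimal Python sketch of the track sequence data structure just described, assuming each personnel image is stored as a feature vector together with a quality score and a side-face angle; the names TrackSequence, PersonImage, and REP_CAP are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

REP_CAP = 10  # assumed upper limit (>1) for the representative photo sequence

@dataclass
class PersonImage:
    feature: List[float]   # face feature vector extracted by the terminal
    quality: float         # combined quality score (size, sharpness, angle)
    side_angle: float      # left/right face angle in degrees (negative = left)

@dataclass
class TrackSequence:
    person_id: str  # unique person ID shared with the terminal local library
    full_sequence: List[PersonImage] = field(default_factory=list)            # all images, display only
    representative_sequence: List[PersonImage] = field(default_factory=list)  # capped, used for matching

    def add_capture(self, img: PersonImage) -> None:
        """Every capture enters the full sequence; the representative
        photo sequence is updated separately according to quality."""
        self.full_sequence.append(img)
```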
Further, step 4 synchronizes the representative photo sequence in the updated track sequence data to the terminal local portrait feature library, so that newly captured personnel images are used as the terminal comparison basis as far as possible, accelerating the comparison and recognition speed of the terminal.
Further, in step 2, creating new track sequence data or searching for track sequence data similar to the captured personnel image when the terminal cannot form a match (stranger) proceeds as follows: a first similarity threshold is set and the first similarity between the captured personnel image and every piece of track sequence data in the cloud track library is calculated; when no track sequence data exceeds the first similarity threshold, new track sequence data are added to the cloud track library; when one or more pieces of track sequence data exceed the first similarity threshold, the track sequence data with the highest first similarity are updated with the captured personnel image obtained in step 1.
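A hedged sketch of this stranger-handling branch, reusing the data structure sketched above. The similarity function, its 0..100 scaling, and the single FIRST_SIM_THRESHOLD value are illustrative placeholders; the embodiment below differentiates the threshold by sequence length.

```python
from typing import Sequence

FIRST_SIM_THRESHOLD = 86.0  # placeholder; the embodiment varies the threshold with sequence length

def cosine_sim(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity scaled to the 0..100 range used in the text."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return 100.0 * dot / (na * nb + 1e-9)

def best_first_similarity(feature, track) -> float:
    """First similarity of a capture against one track's representative photos."""
    return max(cosine_sim(feature, img.feature) for img in track.representative_sequence)

def match_or_create(feature, cloud_tracks: list, new_track_factory):
    """Return the track to update: the best hit above the threshold, or a new track."""
    scored = [(best_first_similarity(feature, t), t)
              for t in cloud_tracks if t.representative_sequence]
    hits = [(s, t) for s, t in scored if s > FIRST_SIM_THRESHOLD]
    if not hits:
        new_track = new_track_factory()   # no hit: treat as stranger, create new track sequence data
        cloud_tracks.append(new_track)
        return new_track
    return max(hits, key=lambda st: st[0])[1]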
Further, to optimize the track sequence data in the cloud track library (the same person can otherwise give rise to several pieces of track sequence data because of capture time, angle, and similar factors, dragging down the comparison speed of the whole server), when more than one piece of track sequence data exceeds the first similarity threshold against the captured personnel image, the track sequence data with the highest first similarity are first updated with the captured personnel image obtained in step 1; a second similarity threshold is then set, the second similarity between the representative photo sequence of the highest-similarity track sequence data and each of the remaining hit track sequence data is calculated, and the remaining track sequence data exceeding the second similarity threshold are merged into the highest-similarity track sequence data. During merging, the full sequences are combined directly, and after the representative photo sequences are combined the number of images in the representative photo sequence is controlled according to the upper limit.
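The merge operation itself might look like the following sketch, in which the second similarity is approximated by an average pairwise score and the combined representative photo sequence is trimmed simply by quality; the num-dependent thresholds and angle-grouped trimming described in the embodiment are sketched separately further below.

```python
SECOND_SIM_THRESHOLD = 88.0  # placeholder second similarity threshold
REP_CAP = 10                 # assumed upper limit for the representative photo sequence

def second_similarity(track_a, track_b, sim) -> float:
    """Average pairwise similarity between two representative photo sequences
    (a simplification of the pairwise rule detailed in the embodiment)."""
    pairs = [sim(a.feature, b.feature)
             for a in track_a.representative_sequence
             for b in track_b.representative_sequence]
    return sum(pairs) / len(pairs) if pairs else 0.0

def merge_tracks(best, other) -> None:
    """Merge `other` into `best`: full sequences are concatenated directly,
    representative photos are pooled and trimmed back to the cap by quality."""
    best.full_sequence.extend(other.full_sequence)
    pooled = best.representative_sequence + other.representative_sequence
    pooled.sort(key=lambda img: img.quality, reverse=True)
    best.representative_sequence = pooled[:REP_CAP]
```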
Further, updating the representative photo sequence according to quality in step 3 specifically comprises the following steps: step 3-1, judging whether the number of images in the representative photo sequence is greater than 0; if not, directly adding the captured personnel image to the representative photo sequence and ending the update; if so, entering step 3-2; step 3-2, judging whether the captured personnel image reaches the basic quality; if not, ending the update; if so, entering step 3-3; step 3-3, adding the captured personnel image to the representative photo sequence, controlling the number of images in the representative photo sequence according to the upper limit, and ending the update.
Further, to reduce the influence on similarity matching of a low-quality representative image left in the representative photo sequence when the track sequence data were newly created, step 3-3 first judges whether the number of images in the representative photo sequence is greater than 1; if so, the captured personnel image is added to the representative photo sequence, the number of images is controlled according to the upper limit, and the update ends; if not, step 3-4 is entered. Step 3-4 judges whether the image already in the representative photo sequence reaches the basic quality; if so, the captured personnel image is added to the representative photo sequence and the update ends; if not, the captured personnel image replaces the image in the representative photo sequence.
Further, the basic quality requires that the face size in the image exceed a face threshold, the sharpness of the image exceed a sharpness threshold, and the left/right side-face angle in the image stay within an angle threshold.
Furthermore, to enrich the image features in the representative photo sequence and speed up comparison against captured images, the representative photo sequence is divided into several groups by left/right side-face angle. When the number of images in the representative photo sequence is controlled according to the upper limit, the image with the poorest sharpness in the group with the most images is removed first; when several groups share the same, largest number of images, the image with the poorest sharpness among those groups is removed first.
Further, the upper limit of the number of images in the representative photo sequence is 7 to 10.
The technical scheme provided by the invention has the following advantages:
The computational load is distributed between the terminal and the server. On the server side, the track data are organized into a full sequence used only for displaying information and a representative photo sequence used for comparison; the representative photo sequence is updated by quality and capped at an upper limit, which reduces the amount of comparison data on the server side and keeps the comparison speed from falling as the full sequence grows. Meanwhile, because the representative photo sequence is updated by quality, its quality gradually improves as the number of captures increases, so the accuracy of the comparison result is improved without reducing the comparison speed.
Drawings
Fig. 1 is a schematic diagram of a system module of a personnel track feature recording and feature updating method according to the invention.
Fig. 2 is a schematic flow chart of a method for recording and updating the characteristics of the personnel track according to the present invention.
Fig. 3 is a schematic diagram of the representative photo sequence update (only one image, not meeting the basic quality).
Fig. 4 is a schematic diagram of the representative photo sequence update (only one image, meeting the basic quality).
Fig. 5 is a schematic diagram of the representative photo sequence update (the number of images exceeds the upper limit).
Fig. 6 is a schematic diagram of the merging of two pieces of track sequence data.
Detailed Description
The present application is further described below with reference to examples. These examples serve only to illustrate the present application and do not limit its scope; various modifications of equivalent forms of the present application that become apparent to those skilled in the art upon reading this description likewise fall within the scope defined by the appended claims.
Referring to fig. 1, the system adopted by the personnel track feature recording and feature updating method of this embodiment includes terminals and a server side. The terminal equipment is installed in places that need to be monitored, such as communities and hotels, and comprises an intelligent recognition device and personnel image capture devices (cameras and the like). Several personnel image capture devices may be arranged at one site, sharing one intelligent recognition device for terminal-local recognition. The server side is connected to multiple terminals over a network and is configured with server load clusters sized according to the capture volume of the terminal equipment; the terminals may be distributed over many different sites so that the whole area is monitored.
The overall flow of the personnel track feature recording and feature updating method is as follows: first, the intelligent recognition device connects to the server to obtain the terminal local portrait feature library and the connection information of the personnel image capture devices; the intelligent recognition device then connects to the cameras and pulls their streams; it decodes the code stream and performs face detection, and starts feature extraction and face comparison once a face is detected; it then uploads the comparison result to the server; finally, the server receives the terminal data and starts the track search. In detail, as shown in fig. 2:
Step 1, a personnel image capture device of the terminal captures a personnel image, and the intelligent recognition device compares the features extracted from the captured image with the terminal local portrait feature library to determine the personnel identity result. Two outcomes are possible. In the first, a record matching the captured personnel image is found in the terminal local portrait feature library, the personnel identity is determined, and a unique person ID is obtained; the cloud track library on the server side records the track sequence data of the same person under the same unique person ID, i.e. the same person carries the same ID in both the terminal local portrait feature library and the cloud track library. In the second, no matching record is found in the terminal local portrait feature library, and the personnel identity result is marked as a stranger. The intelligent recognition device sends the personnel identity result and the captured personnel image to the server side, and step 2 is entered.
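A minimal sketch of this terminal-side decision, assuming the terminal local portrait feature library is a mapping from person ID to a stored feature vector; the similarity function and TERMINAL_THRESHOLD value are placeholders, not values from the patent.

```python
from typing import Dict, Optional, Sequence, Tuple

TERMINAL_THRESHOLD = 90.0  # placeholder local-match threshold on a 0..100 scale

def terminal_identify(feature: Sequence[float],
                      local_library: Dict[str, Sequence[float]],
                      sim) -> Tuple[Optional[str], Sequence[float]]:
    """Compare a captured feature with the terminal local portrait feature library.
    Returns (person_id, feature) when a record matches, or (None, feature) for a
    stranger; both outcomes are uploaded to the server with the captured image."""
    best_id, best_score = None, 0.0
    for person_id, ref_feature in local_library.items():
        score = sim(feature, ref_feature)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score > TERMINAL_THRESHOLD:
        return best_id, feature
    return None, feature
```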
Step 2 (steps 2 and 3 are executed by the server side). The server side performs the corresponding matching and updating according to the information uploaded by the intelligent recognition device of the terminal. Specifically, when the uploaded information carries a unique person ID, the server side directly looks up the corresponding track sequence data in the cloud track library by that ID and updates them with the captured personnel image, i.e. it enters step 3 (the update procedure is detailed in step 3). When the uploaded information carries no unique person ID, i.e. it is marked as a stranger, the server side performs recognition matching on the captured personnel image again, this time against all track sequence data in the cloud track library. The cloud track library is composed as follows: all captured images of the same person form one piece of track sequence data, distinguished by a unique person ID; within each piece of track sequence data the images are further divided by picture quality into a representative photo sequence and a full sequence, where the representative photo sequence contains the images that actually take part in track searching and matching, and the full sequence is display information only, containing all images of the person but not participating in subsequent comparison and matching. The representative photo sequence has an upper limit on its size, generally 7 to 10, and yields a track feature vector (X1, X2, ..., Xn) whose elements are the features of the representative photos. The sequences classify tracks according to the registered location information, or the location and time of each track node in the sequence, which helps speed up track searching by giving priority to nearby times.
When the server side performs recognition matching, whether a match is declared depends on whether the similarity between the captured personnel image and the representative photo sequence of the track sequence data exceeds the first similarity threshold, and this first similarity threshold is differentiated according to the matching situation. Specifically, when the representative photo sequence of the track sequence data contains only one image, a similarity with the captured personnel image above 92 counts as a match hit; when it contains two images, an average similarity reaching 90 counts as a match hit; when it contains three images, an average or median similarity reaching 88 counts as a match hit; when it contains more than three images, an average or median similarity reaching 86 counts as a match hit.
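This length-dependent hit rule can be written out as the following sketch (values on the 0..100 scale quoted above); it is an illustration, not the patented implementation.

```python
from statistics import mean, median
from typing import Sequence

def is_track_hit(similarities: Sequence[float]) -> bool:
    """Decide a match hit from the similarities between one captured image and
    each image in a track's representative photo sequence."""
    n = len(similarities)
    if n == 0:
        return False
    if n == 1:
        return similarities[0] > 92          # single representative photo
    if n == 2:
        return mean(similarities) >= 90      # two photos: average reaches 90
    if n == 3:
        return mean(similarities) >= 88 or median(similarities) >= 88
    return mean(similarities) >= 86 or median(similarities) >= 86
```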
Over all track sequence data in the cloud track library, the number of match hits falls into several cases. When no track sequence data are hit, the server side treats the captured personnel image as a stranger, creates new track sequence data in the cloud track library accordingly, and updates them with the captured personnel image, i.e. it enters step 3. When exactly one piece of track sequence data is hit, the captured personnel image updates that track sequence data directly, again entering step 3. When two or more pieces of track sequence data are hit, the captured personnel image first updates the sequence data with the highest similarity in the manner of step 3, and the similarity between each of the other hit track sequence data and the highest-similarity sequence data must then be calculated to decide whether the two match. Concretely, pairwise similarities are computed between the track feature vectors of the two representative photo sequences, (X1, X2, ..., Xn) and (Y1, Y2, ..., Ym): an m:n comparison, i.e. m rounds of 1:n judgments, with a total of num = m x n similarity computations. When num = 1, a similarity greater than 92 counts as a match hit; when num = 2, a minimum similarity greater than 88 counts as a match hit; when num = 3, an average similarity greater than 88 counts as a match hit; when num > 3 but less than 8, an average greater than 88 or a median greater than 90 counts as a match hit; when num > 8 but less than 64, an average greater than 84 or a median greater than 86 counts as a match hit; when num > 64, an average greater than 80 or a median greater than 82 counts as a match hit. If the match hits, the two pieces of track sequence data are merged: the full sequences are combined directly, and after the representative photo sequences are combined the number of images in the representative photo sequence is controlled according to the upper limit, using the control method for the number of images in the representative photo sequence described in step 3.
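The num-dependent merge test can be sketched in the same style. Note that the text does not state the rule for num exactly equal to 8 or 64; as an assumption, those boundary cases are grouped with the lower band here.

```python
from statistics import mean, median
from typing import Sequence

def is_merge_hit(pairwise: Sequence[float]) -> bool:
    """pairwise holds the m*n similarities between the representative photos of
    two candidate tracks; thresholds follow the num-dependent rule above.
    num = 8 and num = 64 are unspecified in the text and are grouped with the
    preceding band here as an assumption."""
    num = len(pairwise)
    if num == 0:
        return False
    if num == 1:
        return pairwise[0] > 92
    if num == 2:
        return min(pairwise) > 88
    if num == 3:
        return mean(pairwise) > 88
    if num <= 8:
        return mean(pairwise) > 88 or median(pairwise) > 90
    if num <= 64:
        return mean(pairwise) > 84 or median(pairwise) > 86
    return mean(pairwise) > 80 or median(pairwise) > 82
```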
Step 3, updating the newly created or found track sequence data with the captured personnel image obtained in step 1, specifically comprising the following steps:
Step 3-1, judging whether the number of images in the representative photo sequence is greater than 0. If not, there is not yet a single representative photo, so the first appearance of the person is taken as the representative photo regardless of image quality, to be replaced once a later picture suitable as a representative photo arrives; the captured personnel image is therefore added directly to the representative photo sequence, and also to the corresponding full sequence, and the update ends. If the number is greater than 0, the process proceeds to step 3-2.
Step 3-2, judging whether the captured personnel image reaches the basic quality. If not, the captured personnel image is added only to the corresponding full sequence and the update ends; if so, step 3-3 is entered. The basic quality of an image mainly considers the face size, the sharpness, and the side-face angles; the face size is measured by the inter-eye distance. The face threshold may be set to 60, the sharpness threshold to 0.7, and the angle threshold of each side face to 60°.
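A sketch of this basic-quality test with the example thresholds just given; the field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

FACE_THRESHOLD = 60        # minimum inter-eye distance (example value from the text)
SHARPNESS_THRESHOLD = 0.7  # minimum sharpness/definition score
ANGLE_THRESHOLD = 60.0     # maximum absolute left/right side-face angle in degrees

@dataclass
class CaptureQuality:
    eye_distance: float
    sharpness: float
    side_angle: float  # signed, negative for the left side

def meets_basic_quality(q: CaptureQuality) -> bool:
    """True when face size, sharpness, and side-face angle all pass the thresholds."""
    return (q.eye_distance > FACE_THRESHOLD
            and q.sharpness > SHARPNESS_THRESHOLD
            and abs(q.side_angle) <= ANGLE_THRESHOLD)
```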
Step 3-3 first judges whether the number of images in the representative photo sequence is greater than 1. If so, the captured personnel image is added to the representative photo sequence, and also to the corresponding full sequence, the number of images in the representative photo sequence is then controlled according to the upper limit, and the update ends. If not, only one representative photo exists, and two cases must be considered, so step 3-4 handles the classification.
Step 3-4, judging whether the image in the representative photo sequence reaches the basic quality. The original representative photo is the image that had to be stored when the track sequence data were newly created and may not meet the basic quality requirement, so it is judged first to decide whether the newly captured personnel image is added to the representative photo sequence directly or replaces it. If the judgment is yes, the original image meets the basic quality requirement, so the captured personnel image is added to the representative photo sequence, and also to the corresponding full sequence, and the update ends. If the judgment is no, the original image does not meet the basic quality requirement, so the captured personnel image replaces the image in the representative photo sequence, is also added to the corresponding full sequence, and the update ends.
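Steps 3-1 to 3-4 can be condensed into one hedged sketch. Here meets_basic_quality and trim_representative stand for the quality test and the angle-grouped count control described in this embodiment and are passed in as callables so the sketch stays self-contained; the object layout follows the earlier data-structure sketch.

```python
def update_track(track, capture, meets_basic_quality, trim_representative) -> None:
    """Step 3: the capture always joins the full sequence; the representative
    photo sequence is updated according to the step 3-1..3-4 rules."""
    track.full_sequence.append(capture)           # full sequence is display-only, always extended
    rep = track.representative_sequence
    if len(rep) == 0:                             # 3-1: first image is kept regardless of quality
        rep.append(capture)
        return
    if not meets_basic_quality(capture):          # 3-2: low-quality captures stop here
        return
    if len(rep) > 1:                              # 3-3: add, then enforce the upper limit
        rep.append(capture)
        trim_representative(rep)
        return
    if meets_basic_quality(rep[0]):               # 3-4: single good image -> add alongside it
        rep.append(capture)
    else:                                         # 3-4: single poor image -> replace it
        rep[0] = capture
```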
In the above process, the images in the representative photo sequence are classified by side-face angle. For example, when the upper limit of the representative photo sequence is 10, the side-face angles may be divided into 7 classes: -45°, -30°, -15°, 0°, 15°, 30°, 45°, where "-" indicates the left side, and a newly added captured personnel image is assigned to whichever class its side-face angle is closest to. Quantity control is then performed by sharpness: when the number of images in the representative photo sequence reaches the upper limit, the angle class with the most representative photos is found and the image with the lowest sharpness in that class is removed; if two or more classes share the largest count, the image with the lowest sharpness among those classes is removed.
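A sketch of this count control, assuming an upper limit of 10, the seven angle classes listed above, and image objects carrying side_angle and sharpness fields (field names are illustrative).

```python
ANGLE_CLASSES = [-45, -30, -15, 0, 15, 30, 45]  # degrees; negative = left side
REP_CAP = 10                                    # example upper limit from the text

def angle_class(side_angle: float) -> int:
    """Assign an image to the nearest side-angle class."""
    return min(ANGLE_CLASSES, key=lambda c: abs(c - side_angle))

def trim_representative(rep: list) -> None:
    """While over the cap, drop the least sharp image from the most populated
    angle class; ties between equally full classes are resolved by taking the
    least sharp image across those classes."""
    while len(rep) > REP_CAP:
        groups = {}
        for img in rep:
            groups.setdefault(angle_class(img.side_angle), []).append(img)
        largest = max(len(g) for g in groups.values())
        candidates = [img for g in groups.values() if len(g) == largest for img in g]
        rep.remove(min(candidates, key=lambda img: img.sharpness))
```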
Step 4, the representative photo sequence in the updated track sequence data is synchronized to the terminal local portrait feature library. For example, after the personnel images captured by terminal 1 of a hotel have been compared on the server side and the track sequence data updated, the corresponding representative photo sequence is synchronized back to the terminal local portrait feature library of terminal 1 of that hotel; using higher-quality captured personnel images as the terminal comparison basis increases the comparison and recognition speed of the terminal.

Claims (5)

Step 2, the server side searches the cloud track library for the corresponding track sequence data according to the in-library personnel identity, or, when the terminal cannot form a match (stranger), creates new track sequence data or searches for track sequence data whose features are similar to the captured personnel image; specifically, a first similarity threshold is set and the first similarity between the captured personnel image and every piece of track sequence data in the cloud track library is calculated; when no track sequence data exceeds the first similarity threshold, new track sequence data are added to the cloud track library; when exactly one piece of track sequence data exceeds the first similarity threshold, that track sequence data is updated with the captured personnel image obtained in step 1; when more than one piece of track sequence data exceeds the first similarity threshold, the track sequence data with the highest first similarity are first updated with the captured personnel image obtained in step 1, a second similarity threshold is then set, the second similarity between the representative photo sequence of the highest-similarity track sequence data and each of the remaining track sequence data is calculated, and the remaining track sequence data exceeding the second similarity threshold are merged into the highest-similarity track sequence data, the full sequences being combined directly during merging and the number of images in the representative photo sequence being controlled according to the upper limit after the representative photo sequences are combined;
Step 3, updating the track sequence data newly created or found in step 2 with the captured personnel image obtained in step 1, specifically comprising the following steps: step 3-1, judging whether the number of images in the representative photo sequence is greater than 0; if not, directly adding the captured personnel image to the representative photo sequence and ending the update; if so, entering step 3-2; step 3-2, judging whether the captured personnel image reaches the basic quality; if not, ending the update; if so, entering step 3-3; step 3-3, first judging whether the number of images in the representative photo sequence is greater than 1; if so, adding the captured personnel image to the representative photo sequence, controlling the number of images in the representative photo sequence according to the upper limit, and ending the update; if not, entering step 3-4; step 3-4, judging whether the image in the representative photo sequence reaches the basic quality; if so, adding the captured personnel image to the representative photo sequence and ending the update; if not, replacing the image in the representative photo sequence with the captured personnel image;
The terminal local portrait feature library and the cloud track library hold personnel identity ID information in one-to-one correspondence for the same person; track sequence data comprise a full sequence and a representative photo sequence, the full sequence containing all personnel images belonging to the same person and the representative photo sequence holding the several images of the full sequence with the best quality and angle representativeness, capped at an upper limit greater than 1; updating the track sequence data in step 3 means adding the captured personnel image to the full sequence and updating the representative photo sequence according to quality, and searching in step 2 for track sequence data whose features are similar to the captured personnel image means comparing the similarity between the captured personnel image and the representative photo sequence.
CN202111292567.7A (filed 2021-11-03): Personnel track feature recording and feature updating method. Active; granted as CN114064669B (en).

Priority Applications (1)

Application Number: CN202111292567.7A (granted as CN114064669B)
Priority Date: 2021-11-03
Filing Date: 2021-11-03
Title: Personnel track feature recording and feature updating method

Applications Claiming Priority (1)

Application Number: CN202111292567.7A (granted as CN114064669B)
Priority Date: 2021-11-03
Filing Date: 2021-11-03
Title: Personnel track feature recording and feature updating method

Publications (2)

Publication Number and Date:
CN114064669A (en): 2022-02-18
CN114064669B (en): 2024-09-06

Family

ID=80236600

Family Applications (1)

Application Number: CN202111292567.7A (Active, granted as CN114064669B)
Priority Date: 2021-11-03
Filing Date: 2021-11-03
Title: Personnel track feature recording and feature updating method

Country Status (1)

Country: CN; Link: CN114064669B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number, priority date, publication date, assignee, title:
CN114627504B* (priority date 2022-03-17, published 2023-01-10), 盐城笃诚建设有限公司: A management system and management method for construction engineering labor personnel


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number, priority date, publication date, assignee, title:
KR20200060942A* (priority date 2018-11-23, published 2020-06-02), 주식회사 리얼타임테크: Method for face classifying based on trajectory in continuously photographed image
CN113590866A* (priority date 2021-07-29, published 2021-11-02), 苏州弘目信息技术有限公司: Personnel identity confirmation and track management method and system based on end cloud combination

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number, priority date, publication date, assignee, title:
CN110176142A* (priority date 2019-05-17, published 2019-08-27), 佳都新太科技股份有限公司: Vehicle track prediction model establishment and prediction method
CN112669345A* (priority date 2020-12-30, published 2021-04-16), 中山大学: Cloud-deployment-oriented multi-target track tracking method and system

Also Published As

Publication number and date:
CN114064669A (en): 2022-02-18

Similar Documents

Publication and title:
CN113034548B (en): Multi-target tracking method and system suitable for embedded terminal
CN107292240B (en): Person finding method and system based on face and body recognition
CN111832457B (en): Stranger intrusion detection method based on cloud edge cooperation
JP4616702B2 (en): Image processing
CN109635146B (en): Target query method and system based on image characteristics
CN109740004B (en): Filing method and device
CN110163135B (en): Dynamic-algorithm-based one-person-one-file face clustering method and system
CN111968152B (en): A dynamic identity recognition method and device
CN114187463B (en): Electronic archive generation method, device, terminal equipment and storage medium
CN102065275B (en): Multi-target tracking method in intelligent video monitoring system
CN107230267A (en): Intelligent kindergarten check-in method based on face recognition algorithms
CN109784220B (en): Method and device for determining passerby track
CN107657232A (en): A pedestrian intelligent identification method and system
CN109800329B (en): Monitoring method and device
Yang et al.: A method of pedestrians counting based on deep learning
CN114064669B (en): Personnel track feature recording and feature updating method
CN117037057A (en): Tracking system based on pedestrian re-identification and hierarchical search strategy
CN111062294B (en): Passenger flow queuing time detection method, device and system
CN113657169A (en): Gait recognition method, device, system and computer readable storage medium
CN115115976B (en): Video processing method, device, electronic equipment and storage medium
CN112487966B (en): Mobile vendor behavior recognition management system
WO2014088407A1: A self-learning video analytic system and method thereof
CN109145758A (en): A face recognition algorithm based on video monitoring
CN115861869A (en): Gait re-identification method based on Transformer
Zhang et al.: What makes for good multiple object trackers?

Legal Events

Code and title:
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
