CN113378005B - Event processing method, device, electronic equipment and storage medium - Google Patents

Event processing method, device, electronic equipment and storage medium

Info

Publication number
CN113378005B
CN113378005B
Authority
CN
China
Prior art keywords
information
target
event
target object
feature information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110622066.4A
Other languages
Chinese (zh)
Other versions
CN113378005A (en)
Inventor
甘露
付琰
周洋杰
陈亮辉
彭玉龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110622066.4A
Publication of CN113378005A
Application granted
Publication of CN113378005B
Active (current legal status)
Anticipated expiration

Abstract

Translated from Chinese


Figure 202110622066

The disclosure proposes an event processing method, apparatus, electronic device, and storage medium, relating to the fields of deep learning and big data and applicable to smart city scenarios. The specific implementation scheme is: acquire an image to be detected, and perform feature extraction on it to obtain a plurality of items of feature information; determine the event information of a target event; retrieve from a pre-established object information base according to the plurality of items of feature information, and sort the retrieval results according to the event information of the target event; obtain the object information of the target object in the image to be detected according to the sorting result; and track and locate the target object according to the object information. The disclosure improves the accuracy of the object information base, reduces the cost of manually screening candidate objects, and improves event processing efficiency.


Description

Event processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing, and in particular, to the field of deep learning and big data, and more particularly, to an event processing method, apparatus, electronic device, and storage medium.
Background
As AI (Artificial Intelligence) has increasingly penetrated smart city construction, municipal functional departments are actively identifying pain points and exploring solutions in cooperation with internet companies or traditional suppliers. After the traditional office workflow is decomposed, the following optimization links are generally included: intelligent data fusion, intelligent application, intelligent process advancement, intelligent analysis and evaluation, and the like, all aimed at improving office efficiency and quality.
At present, intelligent data fusion in some scenarios still suffers from problems such as low accuracy and low efficiency, because manual investigation is required.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, and storage medium for event processing, which are applicable in a smart city scenario.
According to a first aspect of the present disclosure, there is provided an event processing method, including:
acquiring an image to be detected, and carrying out feature extraction on the image to be detected to acquire a plurality of feature information of the image to be detected;
determining event information of a target event, wherein the event information comprises at least one of occurrence place information and occurrence time information;
searching in a pre-established object information base according to a plurality of characteristic information of the image to be detected, and sequencing search results according to event information of the target event;
Acquiring object information of a target object in the image to be detected according to the sequencing result;
and tracking and positioning the target object according to the object information.
According to a second aspect of the present disclosure, there is provided an event processing apparatus comprising:
the image processing module is used for acquiring an image to be detected, and extracting the characteristics of the image to be detected to acquire a plurality of characteristic information of the image to be detected;
a first determining module for determining event information of a target event, the event information including at least one of occurrence place information and occurrence time information;
the retrieval module is used for retrieving in a pre-established object information base according to the plurality of characteristic information of the image to be detected and sequencing retrieval results according to the event information of the target event;
the second determining module is used for determining object information of the target object in the image to be detected according to the sorting result;
and the positioning module is used for tracking and positioning the target object according to the object information.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect described above.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect described above.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect described above.
According to the technical scheme, the number of search results is reduced by extracting the plurality of feature information of the image to be detected and searching in the pre-established object information base according to the plurality of feature information of the image to be detected. In addition, the search results are ordered according to the event information of the target event, so that the relevance between the search results and the target event is introduced, the search results can be further screened, the accuracy of the search results is improved, and the time for manual elimination is effectively shortened. In addition, the target object is tracked and positioned according to the object information of the target object in the acquired image to be detected, and the accuracy and the efficiency of event processing can be improved through comprehensive multi-aspect data tracking and positioning analysis.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a method of event processing according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of creating an object information library according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of establishing object information for each target object according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of acquiring candidates and their ordering according to an embodiment of the present disclosure;
FIG. 5 is a flow chart of another method for obtaining candidates and their ordering according to an embodiment of the disclosure;
FIG. 6 is a flow chart of tracking and locating a target object according to an embodiment of the present disclosure;
fig. 7 is a block diagram of an event processing apparatus according to an embodiment of the present disclosure;
FIG. 8 is a block diagram of another event processing device according to an embodiment of the present disclosure;
Fig. 9 is a block diagram of an electronic device for implementing an event processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The term "and/or" in the embodiments of the present disclosure describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the objects before and after it are in an "or" relationship.
In the existing data fusion scheme, only the face information of the target is adopted to perform data fusion, so that the accuracy and recall rate of the target information base are not high enough. In addition, when searching is carried out in the target information base according to the face image of the target object, a plurality of similar candidates are easy to find, so that the manual elimination workload is high and the efficiency is low.
In view of the foregoing, the present disclosure proposes an event processing method, apparatus, device, and storage medium.
Fig. 1 is a flowchart of an event processing method according to an embodiment of the present disclosure. It should be noted that the event processing method according to the embodiments of the present disclosure may be applied to the event processing apparatus according to the embodiments of the present disclosure, and the event processing apparatus may be configured in an electronic device. As shown in fig. 1, the method comprises the steps of:
step 101, obtaining an image to be detected, and extracting features of the image to be detected to obtain a plurality of feature information of the image to be detected.
In order to retrieve the target object as accurately as possible from the image to be detected, feature extraction needs to be performed on the image to be detected to obtain a plurality of items of feature information, which can serve as clues for further retrieving the target object and thus improve retrieval efficiency.
It should be noted that, the plurality of feature information of the image to be detected may include: at least two of the first feature information, the second feature information, the vehicle feature information, and the space-time feature information may also include feature information not mentioned in other embodiments of the present disclosure according to scene requirements, which is not limited in this disclosure.
In a certain scenario, the first feature information may be facial feature information, and the second feature information may be body feature information. As one example, the body feature information may include a body feature vector, clothing color, gender, whether glasses are worn, whether a hat is worn, and the like. As for vehicle feature information, if the target object in the image to be detected is in a vehicle, information such as the license plate number and the vehicle color can be extracted. In addition, the spatiotemporal feature information may be information such as the capture time and location.
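The feature groups described above can be pictured as a simple record. The following is an illustrative sketch only; the field names are assumptions, not definitions from the patent:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FeatureInfo:
    """Illustrative container for the feature groups named above (names assumed)."""
    face_vector: Optional[List[float]] = None   # first feature information: facial features
    body_vector: Optional[List[float]] = None   # second feature information: body features
    clothing_color: Optional[str] = None
    wears_glasses: Optional[bool] = None
    plate_number: Optional[str] = None          # vehicle feature information
    vehicle_color: Optional[str] = None
    capture_time: Optional[str] = None          # spatiotemporal feature information
    capture_place: Optional[str] = None

# Example: a detection with face, clothing, and capture metadata
f = FeatureInfo(face_vector=[0.1, 0.2], clothing_color="red",
                capture_time="2021-06-01T08:30", capture_place="station-3")
print(f.clothing_color)
```

Any subset of fields may be absent for a given detection, which is why every field defaults to `None`.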
Step 102, determining event information of a target event, wherein the event information comprises at least one of occurrence place information and occurrence time information.
It is understood that at least one of the occurrence information and the occurrence time information of the target event may be used as a clue for further determining the target object. For example, the information of the occurrence place of the target event corresponds to the snapshot place in the object information base, and for example, the information of the occurrence time of the target event corresponds to the snapshot time in the object information base, wherein the object information base will be described below.
And 103, searching in a pre-established object information base according to a plurality of characteristic information of the image to be detected, and sequencing search results according to event information of the target event.
That is, a plurality of feature information of the image to be detected is used as a screening condition, a search result is searched in a pre-established object information base, and the correlation between the search result and at least one of the occurrence place information and the occurrence time information of the target event is calculated, so that the search result is ranked according to the correlation.
The pre-established object information base can be the feature information base of each object obtained by converting the video shot by the monitoring camera into an image, extracting the features, and clustering the feature information of the same object. Searching is carried out in the object information base according to the plurality of characteristic information of the image to be detected, and a search result with high matching performance is obtained, so that the accuracy of the search result can be improved, and the number of the search results can be reduced. The search results are ordered according to the event information of the target event, which is equivalent to automatic investigation aiming at the search results, so that the investigation efficiency is improved, and the cost of manual investigation is reduced.
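The retrieve-then-rank flow of step 103 can be sketched as follows. This is a minimal illustration under assumed data shapes; the library entries, the matching rule, and the relevance function are all placeholders, not the patent's implementation:

```python
# Sketch of step 103: filter a pre-built object library by feature cues, then
# rank the hits by their relevance to the target event's time/place information.

def retrieve_and_rank(object_library, query_features, event_info, relevance_fn):
    """object_library: list of dicts holding per-object feature information.
    query_features: feature cues extracted from the image to be detected.
    relevance_fn: scores a hit against the event info; higher sorts first."""
    hits = [obj for obj in object_library
            if all(obj.get(k) == v for k, v in query_features.items())]
    return sorted(hits, key=lambda obj: relevance_fn(obj, event_info), reverse=True)

library = [
    {"id": "a", "clothing_color": "red", "place": "station-3"},
    {"id": "b", "clothing_color": "red", "place": "park-1"},
    {"id": "c", "clothing_color": "blue", "place": "station-3"},
]
rank = retrieve_and_rank(
    library,
    {"clothing_color": "red"},
    {"place": "station-3"},
    lambda obj, ev: 1.0 if obj["place"] == ev["place"] else 0.0,
)
print([obj["id"] for obj in rank])  # red-clothed hits, event-place match first
```

Using multiple feature cues shrinks the hit list before ranking, which mirrors the "multiple clues" point made above.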
And 104, acquiring object information of a target object in the image to be detected according to the sorting result.
It will be appreciated that the ranking result indicates which result or results among the retrieval results best match the image to be detected, so that the corresponding object is taken as the target object in the image to be detected.
And 105, tracking and positioning the target object according to the object information.
Because the object information comprises corresponding place information and behavior information of the corresponding target object at different times, tracking and positioning analysis can be performed on the target object according to the information. In addition, the location information and the behavior information of the target object at different times can be obtained in the related database according to the object information, so that tracking and positioning of the target object are realized.
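Once the object information is in hand, a track can be formed by ordering the target's capture records in time. A minimal sketch, assuming the records are simple (timestamp, place) pairs:

```python
# Sketch of step 105: merge the target's capture records (from the object
# information and related databases) and order them in time to form a track.

def build_track(records):
    """records: (timestamp, place) tuples; ISO timestamps sort lexicographically."""
    return sorted(records, key=lambda r: r[0])

track = build_track([("2021-06-01T09:10", "mall-2"),
                     ("2021-06-01T08:30", "station-3")])
print(track[0][1])  # earliest sighting
```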
It should be noted that, in the technical solution of the present disclosure, the feature information and the track behavior information of the related target object are acquired, stored, applied, etc. all conform to the requirements of the related laws and regulations, and do not violate the popular public order.
According to the event processing method of the embodiment of the disclosure, the plurality of pieces of characteristic information of the image to be detected are extracted, and searching is performed in the pre-established object information base according to the plurality of pieces of characteristic information of the image to be detected, so that the number of search results is reduced by introducing a plurality of clues. In addition, the search results are ordered according to the event information of the target event, so that the relevance between the search results and the target event is introduced, namely, the search results can be further screened, the accuracy of the search results is improved, and the time for manual elimination is effectively shortened. In addition, the target object is tracked and positioned according to the object information of the target object in the acquired image to be detected, and tracking and positioning analysis is performed through comprehensive multi-aspect data, so that the accuracy of event processing can be improved, and the efficiency of event processing can be improved.
In order to further describe the manner in which the object information library is created in detail, this disclosure proposes yet another embodiment.
Fig. 2 is a flowchart of creating an object information base according to an embodiment of the present disclosure. As shown in fig. 2, the object information base may be previously established by:
step 201, acquiring a monitoring video stream shot by a monitoring camera, and sampling the monitoring video stream to obtain N video frames; wherein N is a positive integer.
The monitoring camera may be a monitoring camera of a plurality of different scenes, for example: monitoring traffic roads and monitoring public places such as subway stations or stations.
Step 202, performing object detection on each video frame to determine M target object samples in each video frame; wherein M is a positive integer.
It will be appreciated that each video frame corresponds to an image, and that object detection is performed for each video frame, thereby detecting M object samples in each video frame. Wherein the M target object samples in each video frame refer to M portraits in each video frame.
In step 203, an image of each target object sample is obtained from N video frames, and feature extraction is performed on the image to obtain a plurality of feature information of each target object sample.
That is, all the target object samples corresponding to the N video frames are extracted as images, and then feature extraction is performed on the images, so as to obtain a plurality of feature information of each target object sample.
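Steps 201-203 can be sketched as a sample-detect-extract loop. The `detect` and `extract_features` callables below stand in for a real detector and embedding model; they are assumptions for illustration, not the patent's API:

```python
# Sketch of steps 201-203: sample N frames from a stream, detect target objects
# in each frame, and extract feature information from each detected crop.

def sample_frames(stream, n):
    step = max(1, len(stream) // n)
    return stream[::step][:n]

def build_samples(stream, n, detect, extract_features):
    samples = []
    for frame in sample_frames(stream, n):
        for crop in detect(frame):            # up to M detections per frame
            samples.append(extract_features(crop))
    return samples

frames = [f"frame{i}" for i in range(10)]
feats = build_samples(frames, 5,
                      detect=lambda fr: [fr + "-person0"],
                      extract_features=lambda c: {"crop": c})
print(len(feats))  # one sample per sampled frame in this toy setup
```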
In the embodiment of the disclosure, feature extraction of the image may include extraction of at least two types of features of the first feature information, the second feature information, the vehicle feature information, the time-space feature information, and the like, so that the acquired information coverage is wide, and therefore, a plurality of types of features are extracted as much as possible to improve the quality of the object information base. In a certain scene, the first feature information may be facial feature information, and the second feature information may be body feature information. As one example, the human characteristic information may include a human vector, a clothing color, a gender, whether to wear glasses, whether to wear a hat, and the like. Regarding vehicle feature information, such as that a target object is in a vehicle in an image of a target object sample, information such as a license plate number, a vehicle color, etc. of the vehicle may be extracted. In addition, the space-time characteristic information can be information such as snapshot time and place.
In step 204, object information of each target object sample is established according to the plurality of feature information of each target object sample.
That is, by extracting the features of the image, a plurality of pieces of feature information of each target object sample, that is, first feature information, second feature information, vehicle information, spatiotemporal information, and the like, corresponding to each target object are obtained, so that these pieces of feature information are taken as object information of each target object sample.
It should be noted that, since different target object samples may refer to the same target object, it is necessary to determine according to each target object sample and its characteristic information, so as to combine the target object samples and its characteristic information that refer to the same target object, thereby obtaining object information corresponding to each target object.
And 205, building a library according to the object information of each target object sample to obtain an object information library.
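Steps 204-205, merging samples that refer to the same object and building the library from the merged records, can be sketched as follows. The `same_object` callable stands in for the discrimination model described later; both it and the data shapes are assumptions:

```python
# Sketch of steps 204-205: samples judged to refer to the same object are merged
# into one record, and the library is the list of merged records.

def build_library(samples, same_object):
    library = []   # each entry: merged feature dict of one object
    for s in samples:
        for obj in library:
            if same_object(obj, s):
                # Merge: fill in any feature the earlier samples lacked.
                obj.update({k: v for k, v in s.items() if v is not None})
                break
        else:
            library.append(dict(s))
    return library

samples = [{"face": "v1", "plate": None},
           {"face": "v1", "plate": "A123"},
           {"face": "v2", "plate": None}]
lib = build_library(samples, lambda a, b: a.get("face") == b.get("face"))
print(len(lib), lib[0]["plate"])  # two distinct objects; plate merged into the first
```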
According to the event processing method provided by the embodiment of the disclosure, when the object information base is established, the characteristic extraction is respectively carried out for each target object sample so as to obtain a plurality of characteristic information corresponding to each target object sample, so that the data information coverage of the object information base can be effectively improved, the accuracy and recall rate of the object information base are greatly improved, and a basic guarantee is provided for accurately acquiring the target object information and tracking and positioning in the image to be detected.
To further illustrate the creation of object information for each target object sample in the above embodiments, the present disclosure proposes another embodiment.
Fig. 3 is a flowchart of establishing object information of each target object according to an embodiment of the present disclosure.
As shown in fig. 3, an implementation manner of establishing object information of each target object includes:
step 301, acquiring a pre-established discrimination model; wherein the discriminant model is trained using a plurality of characteristic information of the subject sample.
The pre-established discrimination model is used for judging whether the plurality of target object samples are the same object according to the plurality of characteristic information of the plurality of target object samples.
Step 302, grouping each target object sample, and inputting a plurality of feature information of each target object sample in each group into a discrimination model to determine whether each target object sample in each group is the same object.
It will be understood that, among the plurality of target object samples obtained by the above sampling, there may be a case where different target object samples refer to the same object, so in order to make each object information and each object form a one-to-one correspondence, it is necessary to perform group discrimination for each target object sample.
As an example, all target object samples may be combined pairwise to obtain multiple groups of target object samples. The plurality of feature information corresponding to each target object sample in each group is then input into the discrimination model to judge whether the target object samples in the group are the same object.
Step 303, in response to each target object sample in each group being the same object, combining the plurality of feature information of each target object sample in each group to obtain object information of the same object.
That is, if the target object samples in each group are the same object, the plurality of feature information of the target object samples in each group all belong to the same object, so that the plurality of feature information of the target object samples in each group are combined to obtain the object information corresponding to the same object.
In step 304, in response to the target object samples in each group not being the same object, object information of the target object samples is established according to the feature information of the target object samples in each group.
That is, if the target object samples in each group are not the same object, it is explained that the plurality of feature information of the target object samples in each group are information of different objects, so that the corresponding object information is respectively established for the target object samples.
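The pairwise grouping of steps 301-304 can be sketched as below. The patent's discrimination model is a trained model over multiple feature information; the threshold-on-overlap stand-in here is an assumption purely for illustration:

```python
# Sketch of steps 301-304: pair the samples, let a discrimination model judge
# each pair, and report the pairs judged to be the same object.

from itertools import combinations

def discriminate(a, b, threshold=0.9):
    # Toy stand-in for the trained discrimination model: fraction of matching fields.
    shared = sum(1 for k in a if k in b and a[k] == b[k])
    return shared / max(len(a), len(b)) >= threshold

def same_object_pairs(samples):
    return [(i, j) for (i, a), (j, b) in combinations(enumerate(samples), 2)
            if discriminate(a, b)]

samples = [{"face": "v1", "color": "red"},
           {"face": "v1", "color": "red"},
           {"face": "v2", "color": "red"}]
print(same_object_pairs(samples))  # only the first two samples agree on all fields
```

Pairs that pass the judgment would have their feature information merged (step 303); the rest keep separate object information (step 304).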
According to the event processing method provided by the embodiment of the disclosure, when object information is established, whether each group of target object samples are the same object is judged according to the judging model, and a plurality of characteristic information of each target object sample of the same object is combined into object information of the same object, so that the situation that the object information of a plurality of target objects corresponds to the same object can be avoided, and the accuracy and recall rate of the object information are further improved.
In the event processing method of the above embodiment, searching is performed in a pre-established object information base according to a plurality of feature information of an image to be detected, and searching results are ordered according to event information of a target event. To further describe the specific implementation of this section, the present disclosure proposes yet another embodiment for this section.
Fig. 4 is a flowchart of acquiring a candidate object and its ordering according to an embodiment of the disclosure. As shown in fig. 4, a specific implementation of obtaining a candidate object and its ordering may include:
step 401, retrieving in an object information base according to first feature information among a plurality of feature information of an image to be detected, to obtain at least one candidate object.
In the embodiment of the present disclosure, the plurality of feature information of the image to be detected and the object information base are taken to include first feature information, second feature information, vehicle feature information, and spatiotemporal feature information as an example. The first feature information may be facial feature information, and the second feature information may be body feature information. As an example, retrieving in the object information base according to the first feature information among the plurality of feature information of the image to be detected may be implemented as follows: acquire the facial feature information of the image to be detected; acquire the centroid face vector of each object in the object information base; compute, according to the facial feature information of the image to be detected, the similarity between the image to be detected and the centroid face vector of each object in the object information base; and take the objects whose similarity meets expectations as candidate objects. Since each object may have multiple face feature vectors extracted from different images, the centroid face vector refers to the average of those face feature vectors; that is, the average of the multiple face feature vectors of each object may be taken as its centroid face vector. In this way, the amount of facial-feature similarity computation can be reduced, lowering resource consumption.
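The centroid-based retrieval described above can be sketched as follows, using cosine similarity as the comparison (the patent does not fix the similarity measure, so that choice, the threshold, and all names are assumptions):

```python
# Sketch of step 401: average each object's face vectors into a centroid, then
# compare the query face against centroids only, instead of every stored vector.

import math

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def candidates(query_face, library, threshold=0.8):
    """library: {object_id: [face vectors extracted from different images]}"""
    cents = {oid: centroid(vs) for oid, vs in library.items()}
    return [oid for oid, c in cents.items() if cosine(query_face, c) >= threshold]

lib = {"obj1": [[1.0, 0.0], [1.0, 0.2]], "obj2": [[0.0, 1.0]]}
print(candidates([1.0, 0.1], lib))
```

Comparing against one centroid per object rather than every stored vector is exactly the computation saving the paragraph above describes.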
Step 402, acquiring space-time characteristic information from object information of each candidate object.
Step 403, calculating a first correlation between each candidate object and the target event according to the event information of the target event and the space-time characteristic information of each candidate object.
It will be appreciated that in order to narrow down the range of candidates, further matching may be achieved by adding cues.
In the embodiment of the present disclosure, the event information of the target event may include at least one of occurrence place information and occurrence time information of the target event. According to at least one of the occurrence place information and the occurrence time information of the target event and the space-time characteristic information of each candidate object, the possibility that each candidate object participates in the target event can be calculated according to time and place, and the possibility can be embodied in the form of a calculated score, so that the first correlation between each candidate object and the target event is obtained.
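A score of this kind can be sketched as below. The patent only says the likelihood is computed from time and place; the linear time decay, the 50/50 weighting, and the minute encoding are illustrative assumptions:

```python
# Sketch of step 403: score how likely a candidate was at the target event,
# from the gap between its capture records and the event's time/place.

def first_correlation(event, capture, max_minutes=120.0):
    place_score = 1.0 if capture["place"] == event["place"] else 0.0
    gap = abs(capture["minute"] - event["minute"])      # minutes since midnight
    time_score = max(0.0, 1.0 - gap / max_minutes)      # linear decay, assumed
    return 0.5 * place_score + 0.5 * time_score         # equal weights, assumed

event = {"place": "station-3", "minute": 510}           # event at 08:30
capture = {"place": "station-3", "minute": 540}         # candidate seen at 09:00
print(first_correlation(event, capture))
```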
Step 404, obtaining corresponding feature information from the object information of each candidate object according to at least one feature information of the second feature information, the vehicle feature information and the space-time feature information among the plurality of feature information of the image to be detected.
That is, according to each feature information of the image to be detected, the feature information of the corresponding category is acquired in the object information of each candidate object. For example, if the plurality of feature information of the image to be detected includes the second feature information, the vehicle feature information, and the space-time feature information, the corresponding second feature information, vehicle feature information, and space-time feature information need to be obtained from the object information of each candidate object. The second feature information may be human feature information in some scenarios.
And step 405, inputting at least one piece of characteristic information and corresponding characteristic information into a pre-established discrimination model to obtain a second correlation between each candidate object and the target event.
It can be understood that the pre-established discrimination model can determine whether the target object and the candidate object in the image to be detected are the same object according to the feature information of the image to be detected and the corresponding feature information in the object information, so as to obtain the similarity score of each candidate object.
At step 406, at least one candidate object is ranked according to the first correlation and the second correlation.
In order to comprehensively consider at least one of the occurrence place information and the occurrence time information of the target event, and clues such as feature information in the image to be detected and the like, so as to further examine the candidate objects, the at least one candidate object is ordered according to the first correlation and the second correlation. In the embodiment of the disclosure, the score of the first correlation and the score of the second correlation may be weighted and ranked according to the final score size of the weighted calculation.
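The weighted combination can be sketched as follows; the 0.5/0.5 weights are an assumption, since the patent leaves the weighting unspecified:

```python
# Sketch of step 406: the final ranking weights the event correlation (first)
# against the feature-similarity score (second) and sorts by the combined score.

def rank_candidates(cands, w1=0.5, w2=0.5):
    """cands: list of (object_id, first_correlation, second_correlation)."""
    scored = [(oid, w1 * c1 + w2 * c2) for oid, c1, c2 in cands]
    return sorted(scored, key=lambda t: t[1], reverse=True)

ranked = rank_candidates([("a", 0.9, 0.4), ("b", 0.5, 0.9), ("c", 0.2, 0.3)])
print([oid for oid, _ in ranked])  # "b" wins on combined score despite lower c1
```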
According to the event processing method of the embodiment of the disclosure, when object information retrieval is performed, not only the event information of the target event is introduced, but also the correlation among the second characteristic information, the vehicle and the space-time characteristic information is introduced, so that the possibility that the candidate object participates in the target event can be synthesized, and the similarity with the target object in the image to be detected is calculated, the purpose of accurately checking the candidate object is achieved, the labor cost is further saved, and the candidate object checking efficiency is improved.
In order to further improve the candidate object checking efficiency, based on the above embodiments, another way to obtain the candidate objects and the ordering thereof is proposed in the embodiments of the present disclosure. Fig. 5 is a flowchart of another method for obtaining candidates and ordering thereof according to an embodiment of the present disclosure. As shown in fig. 5, on the basis of the above embodiment, the implementation further includes:
Step 507, it is determined whether the candidate object has participated in a specific event. If the candidate object has not participated in a specific event, step 506 is performed; if the candidate object has participated in a specific event, step 508 is performed.
It will be appreciated that if the candidate object has a record in the related event database and the recorded event has a degree of coincidence with the target event, the likelihood of the candidate object being the target object will increase.
As an example, a query may be performed in the related event database based on the candidate object; if a specific event in which the candidate object participated can be found in the related event database, the candidate object has participated in that specific event. Otherwise, the candidate object has not participated in a specific event.
In step 508, in response to the candidate object having participated in the specific event, descriptive information of the specific event is obtained.
Step 509, obtaining a clue description keyword of the target event.
Step 510, calculating the third relatedness of the candidate object and the target event according to the description information of the specific event and the clue description keywords of the target event.
It can be understood that, according to the description information of the specific event and the clue description keywords of the target event, the degree of overlap between the specific event and the target event can be calculated, so as to obtain the third correlation between the candidate object and the target event.
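A minimal sketch of such an overlap computation, assuming a simple keyword-overlap score over whitespace-tokenized description text (the disclosure does not specify the actual matching model):

```python
def third_correlation(event_description, clue_keywords):
    """Estimate the overlap between a recorded specific event and the
    target event as the fraction of clue keywords found in the event's
    description text. A deliberately simple keyword-overlap score."""
    desc_tokens = set(event_description.lower().split())
    keywords = {k.lower() for k in clue_keywords}
    if not keywords:
        return 0.0
    return len(keywords & desc_tokens) / len(keywords)

score = third_correlation(
    "red vehicle seen leaving parking lot at night",
    ["red", "vehicle", "daytime"],
)
print(score)  # 2 of 3 keywords match the description
```

A production system would more likely use semantic text matching than raw token overlap; this sketch only illustrates where the third correlation comes from.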
Step 511 ranks the at least one candidate object according to the first correlation, the second correlation, and the third correlation.
In order to comprehensively consider the occurrence place information and/or occurrence time information of the target event, the feature information in the image to be detected, the correlation with a specific event, and other clues, so as to further screen the candidate objects, the at least one candidate object is ranked according to the first correlation, the second correlation, and the third correlation. In the embodiment of the disclosure, the scores of the first, second, and third correlations may be weighted, and the candidates ranked according to the final weighted score.
It should be noted that steps 501 to 506 in fig. 5 are identical in implementation to steps 401 to 406 in fig. 4, and are not described here again.
According to the event processing method of the embodiment of the disclosure, when object information is retrieved, the correlation between a specific event in which the candidate object has participated and the target event is added; that is, if the candidate object has participated in a specific event related to the target event, the likelihood that the candidate object is the target object increases, so that candidates can be screened further and screening efficiency can be further improved.
For the specific manner of tracking and positioning the target object according to the object information in the above embodiments, the present disclosure proposes yet another embodiment.
Fig. 6 is a flowchart of tracking and positioning a target object according to an embodiment of the present disclosure. As shown in fig. 6, an implementation manner of tracking and positioning the target object may include:
Step 601, obtaining a motion track of the target object according to the object information; the motion track includes at least one of a snapshot track of monitoring cameras and an identity ID (identity document, i.e., identity number) track.
It should be noted that the motion track of the target object may be contained in the object information itself; that is, the monitoring-camera snapshot track is obtained from the capture times and locations recorded in the object information. In addition, the motion track of the target object can be queried in a track database according to the object information, where the track database contains the motion track of each object. Examples of track points include: points obtained through base-station access dotting (for example, when the SIM card (Subscriber Identity Module) in a user's terminal attaches to a base station, the base station reports the location, yielding a track point for the user at that location); points obtained through WiFi access dotting; the network IP address used when a user logs into a social application; dotting when an identity card is used to board or leave a bus; and check-in and check-out records when an identity card is used at a hotel, or other track points obtained through ID-based dotting.
Step 602, merging at least one of the monitoring-camera snapshot track and the identity ID track of the target object, and performing conflict detection analysis on the merged motion track.
In the embodiment of the present disclosure, after at least one of the monitoring-camera snapshot track and the identity ID track of the target object is merged, abnormal track points may exist in the result, so conflict detection analysis needs to be performed on the merged motion track. As an example, the merged motion track can be smoothed by speed to find abnormal points, and the causes of the anomalies can be analyzed from the abnormal-point information and the object information, so that clustering errors in the object information base can be corrected in time. In addition, a confidence can be computed for each track point according to the object information; for track points whose confidence is below a threshold, the related first feature information, identity ID, and other information can be retrieved, so that a worker can manually verify the key points and promptly correct the object information and ID association information, yielding a highly accurate motion track for the target object.
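The speed-smoothing conflict detection described above can be sketched as follows; the speed threshold, coordinate units, and track-point layout are assumptions for illustration only:

```python
from math import hypot

def detect_conflicts(track, max_speed=40.0):
    """Flag anomalous points in a merged trajectory: any hop whose implied
    speed exceeds max_speed (units per second, an assumed threshold) is
    marked as a potential conflict for manual review.
    Each track point is a tuple (t_seconds, x, y)."""
    anomalies = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dt = t1 - t0
        if dt <= 0:
            # Out-of-order or duplicate timestamps are also conflicts
            anomalies.append((t1, x1, y1))
            continue
        speed = hypot(x1 - x0, y1 - y0) / dt
        if speed > max_speed:
            anomalies.append((t1, x1, y1))
    return anomalies

track = [(0, 0, 0), (10, 100, 0), (20, 5000, 0), (30, 5200, 0)]
print(detect_conflicts(track))  # the jump to x=5000 implies 490 units/s
```

Points flagged this way would then be cross-checked against the object information base, as described above, to decide whether the anomaly reflects a clustering error.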
And step 603, tracking and positioning the target object according to the motion track after conflict detection analysis.
It can be understood that after the collision detection analysis is performed on the motion trail of the target object, the staff performs tracking and positioning on the target object according to the analyzed motion trail, so as to process the target event.
According to the event processing method provided by the embodiment of the disclosure, at least one of the snapshot track and the identity ID track of the target object is obtained according to the object information, so that the motion track of the target object is acquired through data fusion. In addition, conflict detection analysis is performed on the motion track, and key track points are manually verified, which improves the accuracy of the motion track of the target object.
In order to implement the above method, the present disclosure proposes an event processing apparatus.
Fig. 7 is a block diagram of an event processing device according to an embodiment of the present disclosure. As shown in fig. 7, the apparatus includes:
the image processing module 710 is configured to obtain an image to be detected, and perform feature extraction on the image to be detected to obtain a plurality of feature information of the image to be detected;
a first determiningmodule 720 for determining event information of a target event, the event information including at least one of occurrence place information and occurrence time information;
the retrieval module 730 is configured to retrieve from a pre-established object information base according to a plurality of feature information of the image to be detected, and order the retrieval results according to event information of the target event;
a second determiningmodule 740, configured to determine object information of the target object in the image to be detected according to the sorting result;
and the positioning module 750 is used for tracking and positioning the target object according to the object information.
In some embodiments of the present disclosure, the retrieval module 730 includes:
a search obtaining unit 730-1, configured to search in an object information base according to first feature information among a plurality of feature information of an image to be detected, to obtain at least one candidate object;
a first acquisition unit 730-2 for acquiring spatiotemporal feature information from object information of each candidate object;
a first calculating unit 730-3 for calculating a first correlation between each candidate object and the target event according to the event information of the target event and the space-time characteristic information of each candidate object;
a second obtaining unit 730-4, configured to obtain corresponding feature information from the object information of each candidate object according to at least one feature information of second feature information, vehicle feature information, and space-time feature information among the plurality of feature information of the image to be detected;
A second calculation unit 730-5, configured to input at least one feature information and corresponding feature information into a pre-established discrimination model, to obtain a second correlation between each candidate object and the target event;
a ranking unit 730-6 for ranking the at least one candidate object according to the first correlation and the second correlation.
Furthermore, in the embodiment of the present disclosure, the retrieval module 730 further includes:
a determining unit 730-7 for determining whether the candidate object has participated in the specific event;
a third acquiring unit 730-8, configured to acquire description information of a specific event in response to participation of the candidate object in the specific event;
a fourth obtaining unit 730-9 for obtaining a cue description keyword of the target event;
a third calculating unit 730-10, configured to calculate a third relativity between the candidate object and the target event according to the description information of the specific event and the clue description keyword of the target event;
wherein, the sorting unit 730-6 is specifically configured to:
at least one candidate object is ranked according to the first correlation, the second correlation, and the third correlation.
In the embodiment of the present disclosure, the positioning module 750 is specifically configured to:
acquiring a motion trail of a target object according to object information; the motion track comprises at least one of a snap track and an identity ID track of the monitoring camera;
Merging at least one of the snap-shot track and the identity ID track of the monitoring camera of the target object, and performing conflict detection analysis on the motion track obtained after merging;
and tracking and positioning the target object according to the motion trail after conflict detection and analysis.
According to the event processing device of the embodiment of the disclosure, the plurality of pieces of characteristic information of the image to be detected are extracted, and the retrieval is performed in the pre-established object information base according to the plurality of pieces of characteristic information of the image to be detected, so that the number of retrieval results is reduced by introducing a plurality of clues. In addition, the search results are ordered according to the occurrence place information and/or the occurrence time information of the target event, so that the correlation between the search results and the target event is introduced, namely, the search results can be further screened, the accuracy of the search results is improved, and the time for manual elimination is effectively shortened. In addition, the target object is tracked and positioned according to the object information of the target object in the acquired image to be detected, and tracking and positioning analysis is performed through comprehensive multi-aspect data, so that the accuracy of event processing can be improved, and the efficiency of event processing can be improved.
Fig. 8 is a block diagram illustrating another event processing apparatus according to an embodiment of the present disclosure. As shown in fig. 8, the apparatus further includes:
the establishing module 860 is configured to pre-establish the object information base. The establishing module 860 is specifically configured to:
acquiring a monitoring video stream shot by a monitoring camera, and sampling the monitoring video stream to obtain N video frames; wherein N is a positive integer;
performing target detection on each video frame to determine M target object samples in each video frame; wherein M is a positive integer;
acquiring images of each target object sample from N video frames, and carrying out feature extraction on the images to obtain a plurality of feature information of each target object sample;
establishing object information of each target object sample according to a plurality of characteristic information of each target object sample;
and building a library according to the object information of each target object sample to obtain an object information library.
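The library-building flow of the steps above can be sketched as follows, where `detect` and `extract_features` are hypothetical stand-ins for the actual detection and feature-extraction models (which the disclosure does not specify):

```python
def build_object_library(video_frames, detect, extract_features):
    """Sketch of the library-building flow: for each sampled video frame,
    detect the target object samples, extract multiple kinds of feature
    information per sample, and collect them into the object library."""
    library = []
    for frame in video_frames:
        for sample in detect(frame):              # M target object samples per frame
            features = extract_features(sample)   # e.g. appearance, vehicle, spatiotemporal
            library.append({"sample": sample, "features": features})
    return library

# Toy stand-ins for the sampled frames and the two models
frames = ["frame0", "frame1"]
detect = lambda f: [f + "-obj0"]
extract = lambda s: {"appearance": hash(s) % 100}
lib = build_object_library(frames, detect, extract)
print(len(lib))  # one detected sample per frame, so 2 entries
```

In a real deployment the frames would come from sampling a monitoring video stream, and each entry would then be grouped and merged as described below.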
In some embodiments of the present disclosure, the establishing module 860 is specifically configured to:
acquiring a pre-established discrimination model; wherein the discrimination model is trained with a plurality of pieces of feature information of object samples;

grouping the target object samples, inputting the plurality of pieces of feature information of each target object sample in each group into the discrimination model, and judging whether the target object samples in each group are the same object;
In response to the fact that each target object sample in each group is the same object, combining a plurality of characteristic information of each target object sample in each group to obtain object information of the same object;
and establishing object information of each target object sample according to the characteristic information of each target object sample in each group in response to each target object sample in each group not being the same object.
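A minimal sketch of the grouping-and-merging logic above, with `same_object` as a hypothetical stand-in for the pre-established discrimination model:

```python
def merge_groups(groups, same_object):
    """For each group of target object samples, ask the discrimination
    model whether the whole group is one object; if so, merge its feature
    dicts into a single object-information entry, otherwise keep one
    entry per sample."""
    entries = []
    for group in groups:
        if same_object(group):
            merged = {}
            for sample in group:
                merged.update(sample["features"])
            entries.append({"features": merged})
        else:
            entries.extend({"features": s["features"]} for s in group)
    return entries

groups = [
    [{"features": {"face": 1}}, {"features": {"gait": 2}}],  # same object
    [{"features": {"face": 3}}, {"features": {"face": 4}}],  # different objects
]
# Toy discriminator: treats only the first group as one object
same = lambda g: "gait" in g[1]["features"]
print(len(merge_groups(groups, same)))  # 1 merged entry + 2 separate = 3 entries
```

Merging this way is what prevents several library entries from describing the same physical object.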
It should be noted that 810 to 850 in fig. 8 have the same functions and structures as 710 to 750 in fig. 7, and are not described here again.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the method embodiments, and will not be elaborated here.
According to the event processing device provided by the embodiment of the disclosure, when the object information base is established, the characteristic extraction is respectively carried out for each target object sample so as to obtain a plurality of characteristic information corresponding to each target object sample, so that the data information coverage of the object information base can be effectively improved, the accuracy and recall rate of the object information base are greatly improved, and a basic guarantee is provided for accurately acquiring the target object information and tracking and positioning in the image to be detected. In addition, the target object samples of the same object are combined, so that the situation that object information of a plurality of target objects corresponds to the same object can be avoided, and the accuracy and recall rate of an object information base can be further improved.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
Fig. 9 shows a block diagram of an electronic device for the event processing method according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device includes: one or more processors 901, a memory 902, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). In fig. 9, one processor 901 is taken as an example.
The memory 902 is a non-transitory computer-readable storage medium provided by the present disclosure. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the event processing method provided by the present disclosure. The non-transitory computer-readable storage medium of the present disclosure stores computer instructions for causing a computer to perform the event processing method provided by the present disclosure.
The memory 902 is used as a non-transitory computer-readable storage medium for storing non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the event processing method in the embodiments of the present disclosure (e.g., the image processing module 710, the first determining module 720, the retrieval module 730, the second determining module 740, and the positioning module 750 shown in fig. 7). The processor 901 executes various functional applications of the server and performs data processing, that is, implements the event processing method in the above method embodiments, by running the non-transitory software programs, instructions, and modules stored in the memory 902. The present disclosure provides a computer program product comprising a computer program which, when executed by the processor 901, implements the event processing method in the above method embodiments.
The memory 902 may include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created according to the use of the event processing electronic device, etc. In addition, the memory 902 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 902 optionally includes memory located remotely relative to the processor 901, which may be connected to the event processing electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the event processing method may further include: an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903, and the output device 904 may be connected by a bus or other means; fig. 9 takes a bus connection as an example.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the event processing electronic device, such as a touch screen, keypad, mouse, trackpad, touchpad, pointing stick, one or more mouse buttons, trackball, joystick, and the like. The output device 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service ("Virtual Private Server" or simply "VPS") are overcome. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present application may be performed in parallel or sequentially or in a different order, provided that the desired results of the disclosed embodiments are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (9)

1. An event processing method, comprising:

acquiring an image to be detected, and performing feature extraction on the image to be detected to obtain a plurality of pieces of feature information of the image to be detected;

determining event information of a target event, the event information comprising at least one of occurrence place information and occurrence time information;

retrieving in a pre-established object information base according to the plurality of pieces of feature information of the image to be detected, and sorting retrieval results according to the event information of the target event;

acquiring object information of a target object in the image to be detected according to the sorting result; and

tracking and positioning the target object according to the object information;

wherein the retrieving in the pre-established object information base according to the plurality of pieces of feature information of the image to be detected, and sorting the retrieval results according to the event information of the target event, comprises:

searching the object information base according to first feature information among the plurality of pieces of feature information of the image to be detected to obtain at least one candidate object;

obtaining spatiotemporal feature information from the object information of each candidate object;

calculating a first correlation between each candidate object and the target event according to the event information of the target event and the spatiotemporal feature information of each candidate object;

acquiring corresponding feature information from the object information of each candidate object according to at least one piece of feature information among second feature information, vehicle feature information, and spatiotemporal feature information of the image to be detected;

inputting the at least one piece of feature information and the corresponding feature information into a pre-established discrimination model to obtain a second correlation between each candidate object and the target event;

determining whether the candidate object has participated in a specific event;

in response to the candidate object having participated in a specific event, acquiring description information of the specific event;

acquiring clue description keywords of the target event;

calculating a third correlation between the candidate object and the target event according to the description information of the specific event and the clue description keywords of the target event; and

weighting the scores of the first correlation, the second correlation, and the third correlation to obtain a final score, and sorting the at least one candidate object according to the final score;

wherein the tracking and positioning of the target object according to the object information comprises:

acquiring a motion track of the target object according to the object information, the motion track comprising at least one of a snapshot track of monitoring cameras and an identity ID track;

merging at least one of the snapshot track and the identity ID track of the target object, and performing conflict detection analysis on the merged motion track; and

tracking and positioning the target object according to the motion track after the conflict detection analysis;

wherein the performing conflict detection analysis on the merged motion track comprises:

obtaining abnormal points in the merged motion track through speed smoothing, and correcting the object information base according to information of the abnormal points and the object information.

2. The method according to claim 1, wherein the object information base is pre-established by:

acquiring a monitoring video stream shot by a monitoring camera, and sampling the monitoring video stream to obtain N video frames, wherein N is a positive integer;

performing target detection on each video frame to determine M target object samples in each video frame, wherein M is a positive integer;

acquiring an image of each target object sample from the N video frames, and performing feature extraction on the image to obtain a plurality of pieces of feature information of each target object sample;

establishing object information of each target object sample according to the plurality of pieces of feature information of each target object sample; and

building a library according to the object information of each target object sample to obtain the object information base.

3. The method according to claim 2, wherein establishing the object information of each target object sample according to the plurality of pieces of feature information of each target object sample comprises:

acquiring a pre-established discrimination model, wherein the discrimination model is trained with a plurality of pieces of feature information of object samples;

grouping the target object samples, inputting the plurality of pieces of feature information of each target object sample in each group into the discrimination model, and judging whether the target object samples in each group are the same object;

in response to the target object samples in a group being the same object, merging the plurality of pieces of feature information of the target object samples in the group to obtain object information of the same object; and

in response to the target object samples in a group not being the same object, establishing object information of each target object sample according to the plurality of pieces of feature information of each target object sample in the group.

4. An event processing apparatus, comprising:

an image processing module, configured to acquire an image to be detected, and perform feature extraction on the image to be detected to obtain a plurality of pieces of feature information of the image to be detected;

a first determining module, configured to determine event information of a target event, the event information comprising at least one of occurrence place information and occurrence time information;

a retrieval module, configured to retrieve in a pre-established object information base according to the plurality of pieces of feature information of the image to be detected, and sort retrieval results according to the event information of the target event;

a second determining module, configured to determine object information of a target object in the image to be detected according to the sorting result; and

a positioning module, configured to track and position the target object according to the object information;

wherein the retrieval module comprises:

a retrieval obtaining unit, configured to search the object information base according to first feature information among the plurality of pieces of feature information of the image to be detected to obtain at least one candidate object;

a first obtaining unit, configured to obtain spatiotemporal feature information from the object information of each candidate object;

a first calculating unit, configured to calculate a first correlation between each candidate object and the target event according to the event information of the target event and the spatiotemporal feature
information of each of the candidate objects;第二获取单元,用于根据所述待检测图像的多个特征信息之中第二特征信息、车辆特征信息和时空特征信息中的至少一个特征信息,从每个所述候选对象的对象信息中获取对应的特征信息;The second acquisition unit is configured to, according to at least one feature information among the plurality of feature information of the image to be detected, among the second feature information, vehicle feature information, and spatio-temporal feature information, from the object information of each of the candidate objects Obtain corresponding feature information;第二计算单元,用于将所述至少一个特征信息和所述对应的特征信息输入至预先建立的判别模型,得到每个所述候选对象与所述目标事件的第二相关性;A second computing unit, configured to input the at least one feature information and the corresponding feature information into a pre-established discriminant model to obtain a second correlation between each of the candidate objects and the target event;排序单元,用于根据所述第一相关性和所述第二相关性对所述至少一个候选对象进行排序;a sorting unit, configured to sort the at least one candidate object according to the first correlation and the second correlation;所述检索模块包括:The retrieval module includes:确定单元,用于确定所述候选对象是否参与过特定事件;a determining unit, configured to determine whether the candidate object has participated in a specific event;第三获取单元,用于响应于所述候选对象参与过特定事件,获取所述特定事件的描述信息;A third acquiring unit, configured to acquire description information of the specific event in response to the candidate object participating in the specific event;第四获取单元,用于获取所述目标事件的线索描述关键词;A fourth acquiring unit, configured to acquire clue description keywords of the target event;第三计算单元,用于根据所述特定事件的描述信息和所述目标事件的线索描述关键词,计算所述候选对象与所述目标事件的第三相关性;A third calculation unit, configured to calculate a third correlation between the candidate object and the target event according to the description information of the specific event and the clue description keywords of the target event;其中,所述排序单元具体用于:Wherein, the sorting unit is specifically used for:将所述第一相关性、所述第二相关性和所述第三相关性的得分进行加权计算得到最终得分,根据所述最终得分对所述至少一个候选对象进行排序;weighting the scores of the first correlation, the second correlation, and the 
third correlation to obtain a final score, and sorting the at least one candidate object according to the final score;所述定位模块具体用于:The positioning module is specifically used for:根据所述对象信息获取所述目标对象的运动轨迹;其中,所述运动轨迹包括监控摄像头的抓拍轨迹和身份ID轨迹中的至少一个;Acquiring the motion track of the target object according to the object information; wherein, the motion track includes at least one of a surveillance camera capture track and an ID track;对所述目标对象的监控摄像头的抓拍轨迹和身份ID轨迹中的至少一个进行合并,并对合并后得到的运动轨迹进行冲突检测分析;Merging at least one of the capture track of the monitoring camera of the target object and the identity ID track, and performing conflict detection and analysis on the combined motion track;根据经过冲突检测分析后的运动轨迹对所述目标对象进行跟踪定位;Tracking and locating the target object according to the movement track after the conflict detection and analysis;所述对合并后得到的运动轨迹进行冲突检测分析包括:The conflict detection and analysis of the motion track obtained after the merging includes:通过速度平滑获取所述合并后得到的运动轨迹中的异常点,根据所述异常点的信息与所述对象信息对所述对象信息库进行纠正。The abnormal points in the combined motion track are obtained by speed smoothing, and the object information base is corrected according to the information of the abnormal points and the object information.5.根据权利要求4所述的装置,还包括:5. 
The apparatus of claim 4, further comprising:建立模块,用于预先建立所述对象信息库:其中,所述建立模块具体用于:An establishment module, configured to pre-establish the object information library: wherein, the establishment module is specifically used for:获取监控摄像头拍摄的监控视频流,并对所述监控视频流进行采样以获得N个视频帧;其中N为正整数;Obtain the monitoring video stream taken by the monitoring camera, and sample the monitoring video stream to obtain N video frames; wherein N is a positive integer;对每个所述视频帧进行目标检测以确定出每个所述视频帧之中的M个目标对象样本;其中M为正整数;Perform target detection on each of the video frames to determine M target object samples in each of the video frames; wherein M is a positive integer;从所述N个视频帧中获取每个目标对象样本的图像,并对所述图像进行特征提取以获得所述每个目标对象样本的多个特征信息;acquiring an image of each target object sample from the N video frames, and performing feature extraction on the image to obtain a plurality of feature information of each target object sample;根据所述每个目标对象样本的多个特征信息,建立所述每个目标对象样本的对象信息;Establishing object information of each target object sample according to a plurality of characteristic information of each target object sample;根据所述每个目标对象样本的对象信息进行建库,得到所述对象信息库。Building a database according to the object information of each target object sample to obtain the object information database.6.根据权利要求5所述的装置,其中,所述建立模块具体用于:6. 
The device according to claim 5, wherein the establishing module is specifically used for:获取预先建立的判别模型;其中,所述判别模型是采用对象样本的多个特征信息训练的;Obtaining a pre-established discriminant model; wherein, the discriminant model is trained using multiple feature information of object samples;将所述每个目标对象样本进行分组,并将每组之中各目标对象样本的多个特征信息输入至所述判别模型,判断所述每组之中各目标对象样本是否为同一个对象;grouping each of the target object samples, and inputting a plurality of characteristic information of each target object sample in each group into the discriminant model, and judging whether each target object sample in each group is the same object;响应于所述每组之中各目标对象样本为同一个对象,将所述每组之中各目标对象样本的多个特征信息进行合并,得到所述同一个对象的对象信息;In response to the fact that the target object samples in each group are the same object, combining multiple feature information of the target object samples in each group to obtain the object information of the same object;响应于所述每组之中各目标对象样本不为同一个对象,根据所述每组之中各目标对象样本的多个特征信息,建立所述各目标对象样本的对象信息。In response to the fact that the target object samples in each group are not the same object, the object information of the target object samples in each group is established according to the multiple feature information of the target object samples in each group.7. 一种电子设备,其特征在于,包括:7. An electronic device, characterized in that it comprises:至少一个处理器;以及at least one processor; and与所述至少一个处理器通信连接的存储器;其中,a memory communicatively coupled to the at least one processor; wherein,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行权利要求1至3中任一项所述的方法。The memory stores instructions executable by the at least one processor, the instructions are executed by the at least one processor, so that the at least one processor can perform any one of claims 1 to 3 Methods.8.一种存储有计算机指令的非瞬时计算机可读存储介质,其特征在于,所述计算机指令用于使所述计算机执行权利要求1至3中任一项所述的方法。8. 
A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to make the computer execute the method according to any one of claims 1 to 3.9.一种计算机程序产品,包括计算机程序,所述计算机程序在被处理器执行时实现根据权利要求1至3中任一项所述的方法。9. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 3.
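The ranking step in claims 1 and 4 fuses three correlation scores into one final score by weighted calculation. A minimal sketch of that fusion follows; the weight values are illustrative assumptions, since the claims specify only that a weighted calculation is performed:

```python
def rank_candidates(candidates, weights=(0.5, 0.3, 0.2)):
    """Sort candidates by a weighted sum of their three correlation scores.

    Each candidate is a dict with an 'id' plus 'corr1' (spatio-temporal),
    'corr2' (discriminant model), and 'corr3' (clue-keyword) scores in [0, 1].
    The weight values are illustrative, not specified by the claims.
    """
    w1, w2, w3 = weights
    scored = [
        (w1 * c["corr1"] + w2 * c["corr2"] + w3 * c["corr3"], c)
        for c in candidates
    ]
    # Highest final score first.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [(c["id"], round(score, 3)) for score, c in scored]

candidates = [
    {"id": "A", "corr1": 0.9, "corr2": 0.2, "corr3": 0.1},
    {"id": "B", "corr1": 0.6, "corr2": 0.8, "corr3": 0.7},
]
print(rank_candidates(candidates))  # → [('B', 0.68), ('A', 0.53)]
```

Candidate B wins despite a weaker spatio-temporal match because its discriminant and clue-keyword correlations are stronger, which is the point of combining the three signals rather than relying on any single one.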
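Claim 1's conflict detection obtains abnormal points in the merged trajectory through speed smoothing. One plausible reading, sketched below, treats a point as abnormal when the speed implied by the jump from the previous point exceeds a physical maximum; the threshold and the `(t, x, y)` track format are assumptions for illustration:

```python
import math

def find_abnormal_points(track, max_speed=40.0):
    """Flag trajectory points whose implied speed from the previous point
    exceeds max_speed (units per second).

    track: list of (t, x, y) tuples sorted by time t (seconds).
    Returns the indices of points that conflict with the preceding point,
    e.g. a capture-trajectory point and an ID-trajectory point that could
    not belong to the same moving object.
    """
    abnormal = []
    for i in range(1, len(track)):
        t0, x0, y0 = track[i - 1]
        t1, x1, y1 = track[i]
        dt = t1 - t0
        if dt <= 0:  # duplicate or out-of-order timestamps are conflicts too
            abnormal.append(i)
            continue
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        if speed > max_speed:
            abnormal.append(i)
    return abnormal

track = [(0, 0, 0), (10, 100, 0), (11, 5000, 0), (20, 5100, 0)]
print(find_abnormal_points(track))  # → [2]: a 4900-unit jump in 1 second
```

Per the claim, the information of such abnormal points would then be fed back to correct the object information base, e.g. by splitting a record that wrongly merged two different objects.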
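Claims 2 and 3 build the object information base by grouping per-frame target object samples and merging the feature information of samples that the discriminant model judges to be the same object. A toy sketch of that merge step, with a caller-supplied `same_object` predicate standing in for the trained discriminant model and greedy grouping as an illustrative choice:

```python
def build_object_base(samples, same_object):
    """Merge per-frame target-object samples into per-object records.

    samples: list of dicts, each with a 'features' dict extracted from one
    video frame. same_object(record, sample) stands in for the trained
    discriminant model: it returns True when both depict the same object.
    """
    base = []  # each entry: {'features': merged feature dict, 'count': n}
    for s in samples:
        for record in base:
            if same_object(record, s):
                record["features"].update(s["features"])  # merge feature info
                record["count"] += 1
                break
        else:
            # Not the same as any known object: create a new record.
            base.append({"features": dict(s["features"]), "count": 1})
    return base

# Stand-in discriminant: samples match when their 'face_id' features agree.
samples = [
    {"features": {"face_id": 1, "cam": "A"}},
    {"features": {"face_id": 1, "plate": "X123"}},
    {"features": {"face_id": 2, "cam": "B"}},
]
base = build_object_base(
    samples, lambda r, s: r["features"]["face_id"] == s["features"]["face_id"]
)
print(len(base), base[0]["count"])  # → 2 2
```

The first two samples collapse into one record whose merged features carry both the camera and the vehicle plate, which is what lets a later retrieval by any single feature surface the whole object record.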
CN202110622066.4A | 2021-06-03 (priority) | 2021-06-03 (filing) | Event processing method, device, electronic equipment and storage medium | Active | CN113378005B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110622066.4A (CN113378005B) | 2021-06-03 | 2021-06-03 | Event processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110622066.4A (CN113378005B) | 2021-06-03 | 2021-06-03 | Event processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number | Publication Date
CN113378005A (en) | 2021-09-10
CN113378005B (en) | 2023-06-02

Family

ID=77575808

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110622066.4A (Active, CN113378005B) | Event processing method, device, electronic equipment and storage medium | 2021-06-03 | 2021-06-03

Country Status (1)

Country | Link
CN (1) | CN113378005B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114118421B (en) * | 2021-11-03 | 2025-06-13 | 重庆中科云从科技有限公司 | Event reasoning method, device and computer storage medium
CN115431174B (en) * | 2022-09-05 | 2023-11-21 | 昆山市恒达精密机械工业有限公司 | Method and system for controlling grinding of middle plate

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US7970240B1 (en) * | 2001-12-17 | 2011-06-28 | Google Inc. | Method and apparatus for archiving and visualizing digital images
CN110717414A (en) * | 2019-09-24 | 2020-01-21 | 青岛海信网络科技股份有限公司 | Target detection tracking method, device and equipment
CN110888877A (en) * | 2019-11-13 | 2020-03-17 | 深圳市超视智慧科技有限公司 | Event information display method and device, computing equipment and storage medium
WO2020248386A1 (en) * | 2019-06-14 | 2020-12-17 | 平安科技(深圳)有限公司 | Video analysis method and apparatus, computer device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103020303B (en) * | 2012-12-31 | 2015-08-19 | 中国科学院自动化研究所 | Historical event extraction and related-image retrieval method based on Internet cross-media landmarks
US11417128B2 (en) * | 2017-12-22 | 2022-08-16 | Motorola Solutions, Inc. | Method, device, and system for adaptive training of machine learning models via detected in-field contextual incident timeline entry and associated located and retrieved digital audio and/or video imaging
CN108932509A (en) * | 2018-08-16 | 2018-12-04 | 新智数字科技有限公司 | Cross-scene object retrieval method and device based on video tracking
CN109145931B (en) * | 2018-09-03 | 2019-11-05 | 百度在线网络技术(北京)有限公司 | Object detection method, device and storage medium
CN110705476A (en) * | 2019-09-30 | 2020-01-17 | 深圳市商汤科技有限公司 | Data analysis method, apparatus, electronic device and computer storage medium
CN110942036B (en) * | 2019-11-29 | 2023-04-18 | 深圳市商汤科技有限公司 | Person identification method and device, electronic equipment and storage medium
CN112084939A (en) * | 2020-09-08 | 2020-12-15 | 深圳市润腾智慧科技有限公司 | Image feature data management method and device, computer equipment and storage medium


Non-Patent Citations (1)

Title
Research on a cross-media information retrieval system for emergency events; 訾玲玲; 杜军平; 计算机仿真 (Computer Simulation) (06); full text *

Also Published As

Publication number | Publication date
CN113378005A (en) | 2021-09-10

Similar Documents

Publication | Title
CN111125435B (en) | Method, device and computer equipment for determining video label
US9532012B1 (en) | Discovering object pathways in a camera network
US8130285B2 (en) | Automated searching for probable matches in a video surveillance system
CN111985298B (en) | Face recognition sample collection method and device
CN113033458B (en) | Action recognition method and device
CN111783650A (en) | Model training method, action recognition method, apparatus, equipment and storage medium
CN112001265B (en) | Video event identification method and device, electronic equipment and storage medium
CN113378005B (en) | Event processing method, device, electronic equipment and storage medium
CN108228792A (en) | Picture retrieval method, electronic equipment and storage medium
CN112348107A (en) | Image data cleaning method and apparatus, electronic device, and medium
CN112084812B (en) | Image processing method, device, computer equipment and storage medium
CN111783619A (en) | Recognition method, device, equipment and storage medium of human body attributes
CN110706258A (en) | Object tracking method and device
CN111709382A (en) | Human body trajectory processing method, device, computer storage medium and electronic device
CN111832483A (en) | Method, device, equipment, and storage medium for identifying the validity of a point of interest
CN112507090A (en) | Method, apparatus, device and storage medium for outputting information
CN113963303A (en) | Image processing method, video recognition method, apparatus, equipment and storage medium
CN112148908A (en) | Image database update method, device, electronic device and medium
KR20220027000A (en) | Method and device for extracting spatial relationship of geographic location points
CN109961103B (en) | Training method of feature extraction model, and image feature extraction method and device
US11256945B2 (en) | Automatic extraction of attributes of an object within a set of digital images
CN111444819B (en) | Cut frame determining method, network training method, device, equipment and storage medium
CN110889392B (en) | Method and device for processing face image
CN115116130B (en) | Call action recognition method, device, equipment and storage medium
CN112329708A (en) | Bill recognition method and device

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
