Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The term "and/or" in the embodiments of the present disclosure describes an association relationship of association objects, which indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship.
In the existing data fusion scheme, only the face information of the target is used to perform data fusion, so that the accuracy and recall rate of the target information base are not high enough. In addition, when a search is carried out in the target information base according to the face image of the target object, many similar candidates are easily found, so that the manual elimination workload is heavy and the efficiency is low.
In view of the foregoing, the present disclosure proposes an event processing method, apparatus, device, and storage medium.
Fig. 1 is a flowchart of an event processing method according to an embodiment of the present disclosure. It should be noted that the event processing method according to the embodiments of the present disclosure may be applied to the event processing apparatus according to the embodiments of the present disclosure, and the event processing apparatus may be configured in an electronic device. As shown in Fig. 1, the method comprises the following steps:
Step 101, obtaining an image to be detected, and performing feature extraction on the image to be detected to obtain a plurality of feature information of the image to be detected.
In order to retrieve the target object as accurately as possible from the image to be detected, feature extraction needs to be performed on the image to be detected to obtain a plurality of feature information of the image to be detected. The feature information can then be used as clues for further retrieving the target object, thereby improving the retrieval efficiency.
It should be noted that the plurality of feature information of the image to be detected may include at least two of the first feature information, the second feature information, the vehicle feature information, and the space-time feature information, and may also include, according to scene requirements, feature information not mentioned in the embodiments of the present disclosure, which is not limited in this disclosure.
In a certain scenario, the first feature information may be facial feature information, and the second feature information may be human body feature information. As one example, the human body feature information may include a human body vector, a clothing color, a gender, whether glasses are worn, whether a hat is worn, and the like. Regarding the vehicle feature information, if the target object in the image to be detected is in a vehicle, information such as the license plate number and the color of the vehicle can be extracted. In addition, the space-time feature information can be information such as the snapshot time and place.
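As a non-limiting sketch, the plurality of feature information described above could be organized as one record per detection; every field name below is an illustrative assumption, not the disclosed implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FeatureRecord:
    """Illustrative container for the plurality of feature information."""
    face_vector: Optional[List[float]] = None    # first feature information
    body_vector: Optional[List[float]] = None    # second feature information
    clothing_color: Optional[str] = None
    gender: Optional[str] = None
    wears_glasses: Optional[bool] = None
    plate_number: Optional[str] = None           # vehicle feature information
    vehicle_color: Optional[str] = None
    capture_time: Optional[float] = None         # space-time feature information
    capture_place: Optional[str] = None

# Hypothetical record for a detection captured at "Station A"
record = FeatureRecord(clothing_color="red", capture_place="Station A")
```

Fields left as `None` simply mean that category of feature was not extractable from the image.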
Step 102, determining event information of a target event, wherein the event information comprises at least one of occurrence place information and occurrence time information.
It is understood that at least one of the occurrence place information and the occurrence time information of the target event may be used as a clue for further determining the target object. For example, the occurrence place information of the target event corresponds to the snapshot place in the object information base, and the occurrence time information of the target event corresponds to the snapshot time in the object information base, where the object information base will be described below.
Step 103, searching in a pre-established object information base according to the plurality of feature information of the image to be detected, and sorting the search results according to the event information of the target event.
That is, the plurality of feature information of the image to be detected is used as screening conditions to search in the pre-established object information base, and the correlation between each search result and at least one of the occurrence place information and the occurrence time information of the target event is calculated, so that the search results are ranked according to the correlation.
The pre-established object information base can be a feature information base of each object, obtained by converting video shot by monitoring cameras into images, extracting features from the images, and clustering the feature information belonging to the same object. Searching in the object information base according to the plurality of feature information of the image to be detected yields search results with a high degree of matching, which can improve the accuracy of the search results and reduce their number. Sorting the search results according to the event information of the target event is equivalent to automatic screening of the search results, which improves the screening efficiency and reduces the cost of manual screening.
Step 104, acquiring object information of a target object in the image to be detected according to the sorting result.
It will be appreciated that, according to the ranking result, it is possible to determine which result or results among the search results best match the image to be detected, so that the corresponding object is taken as the target object in the image to be detected.
Step 105, tracking and positioning the target object according to the object information.
Because the object information comprises the place information and behavior information of the corresponding target object at different times, tracking and positioning analysis can be performed on the target object according to this information. In addition, the place information and behavior information of the target object at different times can be obtained from a related database according to the object information, so as to track and position the target object.
It should be noted that, in the technical solution of the present disclosure, the acquisition, storage, and application of the feature information and track behavior information of the related target object all conform to the requirements of the relevant laws and regulations, and do not violate public order and good customs.
According to the event processing method of the embodiment of the disclosure, a plurality of feature information of the image to be detected is extracted, and a search is performed in the pre-established object information base according to the plurality of feature information, so that the number of search results is reduced by introducing a plurality of clues. In addition, the search results are sorted according to the event information of the target event, which introduces the relevance between the search results and the target event; that is, the search results can be further screened, the accuracy of the search results is improved, and the time for manual elimination is effectively shortened. Furthermore, the target object is tracked and positioned according to the acquired object information of the target object in the image to be detected, and tracking and positioning analysis is performed by synthesizing data from multiple aspects, so that both the accuracy and the efficiency of event processing can be improved.
In order to describe the manner in which the object information base is established in further detail, the present disclosure proposes yet another embodiment.
Fig. 2 is a flowchart of creating an object information base according to an embodiment of the present disclosure. As shown in fig. 2, the object information base may be previously established by:
Step 201, acquiring a monitoring video stream shot by monitoring cameras, and sampling the monitoring video stream to obtain N video frames; wherein N is a positive integer.
The monitoring cameras may be monitoring cameras in a plurality of different scenes, for example: traffic road monitoring, and monitoring of public places such as subway stations or railway stations.
Step 202, performing object detection on each video frame to determine M target object samples in each video frame; wherein M is a positive integer.
It will be appreciated that each video frame corresponds to an image, and object detection is performed on each video frame, thereby detecting M target object samples in each video frame. The M target object samples in each video frame refer to M person images in each video frame.
In step 203, an image of each target object sample is obtained from the N video frames, and feature extraction is performed on the image to obtain a plurality of feature information of each target object sample.
That is, all the target object samples corresponding to the N video frames are extracted as images, and then feature extraction is performed on the images, so as to obtain a plurality of feature information of each target object sample.
In the embodiment of the disclosure, feature extraction from the image may include extraction of at least two types of features among the first feature information, the second feature information, the vehicle feature information, the space-time feature information, and the like, so that the acquired information has wide coverage; therefore, as many types of features as possible are extracted to improve the quality of the object information base. In a certain scenario, the first feature information may be facial feature information, and the second feature information may be human body feature information. As one example, the human body feature information may include a human body vector, a clothing color, a gender, whether glasses are worn, whether a hat is worn, and the like. Regarding the vehicle feature information, if the target object in the image of a target object sample is in a vehicle, information such as the license plate number and the color of the vehicle may be extracted. In addition, the space-time feature information can be information such as the snapshot time and place.
In step 204, object information of each target object sample is established according to the plurality of feature information of each target object sample.
That is, by extracting features from the images, a plurality of feature information of each target object sample, namely the first feature information, the second feature information, the vehicle feature information, the space-time feature information, and the like, is obtained, and these feature information are taken as the object information of each target object sample.
It should be noted that, since different target object samples may refer to the same target object, it is necessary to determine according to each target object sample and its characteristic information, so as to combine the target object samples and its characteristic information that refer to the same target object, thereby obtaining object information corresponding to each target object.
Step 205, building a library according to the object information of each target object sample to obtain the object information base.
According to the event processing method provided by the embodiment of the disclosure, when the object information base is established, feature extraction is performed separately for each target object sample to obtain a plurality of feature information corresponding to each target object sample. This effectively improves the data coverage of the object information base, greatly improves the accuracy and recall rate of the object information base, and provides a basic guarantee for accurately acquiring the information of the target object in the image to be detected and for tracking and positioning.
To further illustrate the creation of object information for each target object sample in the above embodiments, the present disclosure proposes another embodiment.
Fig. 3 is a flowchart of establishing object information of each target object according to an embodiment of the present disclosure.
As shown in fig. 3, an implementation manner of establishing object information of each target object includes:
Step 301, acquiring a pre-established discrimination model; wherein the discrimination model is trained using a plurality of feature information of target object samples.
The pre-established discrimination model is used for judging whether the plurality of target object samples are the same object according to the plurality of characteristic information of the plurality of target object samples.
Step 302, grouping each target object sample, and inputting a plurality of feature information of each target object sample in each group into a discrimination model to determine whether each target object sample in each group is the same object.
It will be understood that, among the plurality of target object samples obtained by the above sampling, there may be a case where different target object samples refer to the same object, so in order to make each object information and each object form a one-to-one correspondence, it is necessary to perform group discrimination for each target object sample.
As an example, all target object samples may be combined pairwise to obtain multiple groups of target object samples. A plurality of feature information corresponding to each target object sample in each group is then input into the discrimination model to judge whether the target object samples in each group are the same object.
Step 303, in response to each target object sample in each group being the same object, combining the plurality of feature information of each target object sample in each group to obtain object information of the same object.
That is, if the target object samples in each group are the same object, the plurality of feature information of the target object samples in each group all belong to the same object, so that the plurality of feature information of the target object samples in each group are combined to obtain the object information corresponding to the same object.
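A minimal sketch of such merging, assuming each sample's feature information is held in a dictionary; the pooling rules below (concatenate list-valued fields, keep the first non-empty scalar) are illustrative assumptions, not rules fixed by the disclosure:

```python
def merge_object_info(samples):
    """Merge feature information of samples judged to be the same object.

    Assumed rules: list-valued fields (e.g. face vectors) are pooled;
    scalar fields keep the first non-None value encountered.
    """
    merged = {}
    for sample in samples:
        for key, value in sample.items():
            if value is None:
                continue
            if isinstance(value, list):
                merged.setdefault(key, []).extend(value)
            else:
                merged.setdefault(key, value)
    return merged

# Two samples judged by the discrimination model to refer to one object
a = {"plate_number": "ABC123", "face_vectors": [[0.10, 0.20]]}
b = {"plate_number": None, "face_vectors": [[0.12, 0.19]], "gender": "F"}
object_info = merge_object_info([a, b])
```

The merged record then serves as the single object information entry for that object in the object information base.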
In step 304, in response to the target object samples in each group not being the same object, object information of each target object sample is established according to the feature information of each target object sample in the group.
That is, if the target object samples in each group are not the same object, it is explained that the plurality of feature information of the target object samples in each group are information of different objects, so that the corresponding object information is respectively established for the target object samples.
According to the event processing method provided by the embodiment of the disclosure, when object information is established, whether each group of target object samples are the same object is judged according to the discrimination model, and the plurality of feature information of the target object samples of the same object is combined into the object information of that object. This avoids the situation in which multiple pieces of object information correspond to the same object, further improving the accuracy and recall rate of the object information.
In the event processing method of the above embodiment, a search is performed in a pre-established object information base according to a plurality of feature information of the image to be detected, and the search results are sorted according to the event information of the target event. To further describe the specific implementation of this part, the present disclosure proposes yet another embodiment.
Fig. 4 is a flowchart of acquiring a candidate object and its ordering according to an embodiment of the disclosure. As shown in fig. 4, a specific implementation of obtaining a candidate object and its ordering may include:
step 401, retrieving in an object information base according to first feature information among a plurality of feature information of an image to be detected, to obtain at least one candidate object.
In the embodiment of the present disclosure, description is given taking as an example the case where the plurality of feature information of the image to be detected and the object information base each include first feature information, second feature information, vehicle feature information, and space-time feature information. The first feature information may be facial feature information, and the second feature information may be human body feature information. As an example, searching in the object information base according to the first feature information among the plurality of feature information of the image to be detected may be implemented as follows: acquiring the facial feature information of the image to be detected; acquiring the centroid face vector of each object in the object information base; obtaining, according to the facial feature information of the image to be detected, the similarity between the image to be detected and the centroid face vector of each object in the object information base; and taking the objects whose similarity meets an expected threshold as candidate objects. Since each object may have a plurality of face feature vectors extracted from different images, the centroid face vector refers to the average value of the plurality of face feature vectors; that is, the average value of the plurality of face feature vectors of each object may be taken as the centroid face vector of that object. In this way, the amount of facial feature similarity computation can be reduced, thereby reducing resource consumption.
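The centroid-based retrieval described above might be sketched as follows; the cosine-similarity measure and the 0.8 threshold are illustrative assumptions, not values fixed by the disclosure:

```python
import math

def centroid(vectors):
    """Average of an object's face feature vectors (the centroid face vector)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_candidates(query_face, object_base, threshold=0.8):
    """Return (object_id, similarity) pairs whose centroid face vector
    is similar enough to the query face, best match first."""
    results = []
    for obj_id, face_vectors in object_base.items():
        sim = cosine_similarity(query_face, centroid(face_vectors))
        if sim >= threshold:
            results.append((obj_id, sim))
    return sorted(results, key=lambda r: r[1], reverse=True)

# Toy object information base: two objects with 2-D "face vectors"
base = {"p1": [[1.0, 0.0], [0.9, 0.1]], "p2": [[0.0, 1.0]]}
candidates = retrieve_candidates([1.0, 0.0], base)
```

Comparing against one centroid per object, rather than against every stored face vector, is what reduces the amount of similarity computation.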
Step 402, acquiring space-time characteristic information from object information of each candidate object.
Step 403, calculating a first correlation between each candidate object and the target event according to the event information of the target event and the space-time characteristic information of each candidate object.
It will be appreciated that in order to narrow down the range of candidates, further matching may be achieved by adding cues.
In the embodiment of the present disclosure, the event information of the target event may include at least one of the occurrence place information and the occurrence time information of the target event. According to at least one of the occurrence place information and the occurrence time information of the target event and the space-time feature information of each candidate object, the likelihood that each candidate object participated in the target event can be calculated in terms of time and place, and this likelihood can be embodied as a calculated score, thereby obtaining the first correlation between each candidate object and the target event.
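One plausible way to turn time and place proximity into such a score; the exponential time decay, the one-hour scale, and the fixed place bonus are assumptions for illustration only:

```python
import math

def first_correlation(event_time, event_place, capture_time, capture_place,
                      time_scale=3600.0):
    """Score in [0, 2]: time closeness decays exponentially (times in
    seconds), and a matching place contributes a fixed bonus of 1.0."""
    score = 0.0
    if event_time is not None and capture_time is not None:
        score += math.exp(-abs(event_time - capture_time) / time_scale)
    if event_place is not None and capture_place is not None and \
            event_place == capture_place:
        score += 1.0
    return score
```

A candidate snapped at the event's place and time scores highest; the score decays smoothly as the snapshot time drifts away from the event time.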
Step 404, obtaining corresponding feature information from the object information of each candidate object according to at least one feature information of the second feature information, the vehicle feature information and the space-time feature information among the plurality of feature information of the image to be detected.
That is, according to each feature information of the image to be detected, feature information of the corresponding category is acquired from the object information of each candidate object. For example, if the plurality of feature information of the image to be detected includes the second feature information, the vehicle feature information, and the space-time feature information, the corresponding second feature information, vehicle feature information, and space-time feature information need to be obtained from the object information of each candidate object. In some scenarios, the second feature information may be human body feature information.
Step 405, inputting the at least one feature information and the corresponding feature information into a pre-established discrimination model to obtain a second correlation between each candidate object and the target event.
It can be understood that the pre-established discrimination model can determine whether the target object and the candidate object in the image to be detected are the same object according to the feature information of the image to be detected and the corresponding feature information in the object information, so as to obtain the similarity score of each candidate object.
In step 406, the at least one candidate object is ranked according to the first correlation and the second correlation.
In order to comprehensively consider at least one of the occurrence place information and the occurrence time information of the target event, together with clues such as the feature information in the image to be detected, so as to further screen the candidate objects, the at least one candidate object is ranked according to the first correlation and the second correlation. In the embodiment of the disclosure, the score of the first correlation and the score of the second correlation may be weighted, and the candidates ranked according to the final weighted scores.
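The weighted ranking could be sketched as follows; the equal weights are an illustrative assumption:

```python
def rank_candidates(candidates, w1=0.5, w2=0.5):
    """candidates: (object_id, first_correlation, second_correlation) triples.
    Rank by the weighted sum of the two correlation scores, descending."""
    scored = [(obj_id, w1 * c1 + w2 * c2) for obj_id, c1, c2 in candidates]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Candidate "a" matches the image better; "b" matches the event better
ranking = rank_candidates([("a", 0.2, 0.9), ("b", 0.8, 0.1)])
```

In practice the weights would be tuned so that neither the event-side score nor the image-side score dominates the final ordering.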
According to the event processing method of the embodiment of the disclosure, when object information retrieval is performed, not only is the event information of the target event introduced, but the correlations of the second feature information, the vehicle feature information, and the space-time feature information are also introduced. Thus, the likelihood that a candidate object participated in the target event and its similarity to the target object in the image to be detected can be considered together, achieving accurate screening of the candidate objects, further saving labor cost, and improving the candidate screening efficiency.
In order to further improve the candidate object checking efficiency, based on the above embodiments, another way to obtain the candidate objects and the ordering thereof is proposed in the embodiments of the present disclosure. Fig. 5 is a flowchart of another method for obtaining candidates and ordering thereof according to an embodiment of the present disclosure. As shown in fig. 5, on the basis of the above embodiment, the implementation further includes:
Step 507, determining whether the candidate object has participated in a specific event. If the candidate object has not participated in the specific event, go to step 506; if the candidate object has participated in the specific event, step 508 is performed.
It will be appreciated that if the candidate object has a record in the related event database and the recorded event has a degree of coincidence with the target event, the likelihood of the candidate object being the target object will increase.
As an example, a query may be performed in the related event database based on the candidate object; if a specific event in which the candidate object participated can be found in the related event database, the candidate object has participated in the specific event. Otherwise, it has not.
In step 508, in response to the candidate object having participated in the specific event, description information of the specific event is obtained.
Step 509, obtaining a clue description keyword of the target event.
Step 510, calculating a third correlation between the candidate object and the target event according to the description information of the specific event and the clue description keywords of the target event.
It can be understood that, according to the description information of the specific event and the clue description keywords of the target event, the degree of coincidence between the specific event and the target event can be calculated, so as to obtain the third correlation between the candidate object and the target event.
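As one simple stand-in for this coincidence calculation (the disclosure does not fix a text-matching method; the keyword-overlap measure below is an assumption):

```python
def third_correlation(event_description, clue_keywords):
    """Fraction of the target event's clue keywords that appear in the
    specific event's description (case-insensitive whole-word match)."""
    words = set(event_description.lower().split())
    keys = {k.lower() for k in clue_keywords}
    if not keys:
        return 0.0
    return len(words & keys) / len(keys)
```

A richer implementation might use stemming or embedding similarity, but any monotone overlap score slots into the same position in the ranking.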
Step 511, ranking the at least one candidate object according to the first correlation, the second correlation, and the third correlation.
In order to comprehensively consider the occurrence place information and/or the occurrence time information of the target event, the feature information in the image to be detected, the correlation with the specific event, and other clues, so as to further screen the candidate objects, the at least one candidate object is ranked according to the first correlation, the second correlation, and the third correlation. In the embodiment of the disclosure, the scores of the first, second, and third correlations may be weighted, and the candidates ranked according to the final weighted scores.
It should be noted that steps 501 to 506 in Fig. 5 are identical to the implementation of steps 401 to 406 in Fig. 4, and are not described again here.
According to the event processing method of the embodiment of the disclosure, when the object information is searched, the correlation between the specific event in which the candidate object participated and the target event is added; that is, if the candidate object participated in a specific event related to the target event, the likelihood that the candidate object is the target object increases. The candidate objects can thus be further screened, and the candidate screening efficiency further improved.
Regarding the specific manner of tracking and positioning the target object according to the object information in the above embodiment, the present disclosure proposes yet another embodiment.
Fig. 6 is a flowchart of tracking and positioning a target object according to an embodiment of the present disclosure. As shown in fig. 6, an implementation manner of tracking and positioning the target object may include:
Step 601, obtaining a motion track of the target object according to the object information; the motion track comprises at least one of a snapshot track of the monitoring cameras and an identity ID (Identity Document, identity number) track.
It should be noted that the motion track of the target object may be included in the object information; that is, the snapshot track of the monitoring cameras is obtained based on the snapshot times and places of the monitoring cameras in the object information. In addition, the motion track of the target object can be queried in a track database according to the object information, where the track database comprises the motion track of each object. Examples include: track points obtained by base-station access dotting (for example, when a user's terminal SIM card (Subscriber Identity Module, user identification card) accesses a certain base station, the base station can report the location information, yielding track information of the user arriving at that location); tracks obtained by WiFi access dotting; the network IP address used when a user logs into a social application; dotting obtained when an identity card is used to enter and exit a bus; and check-in and check-out of hotels handled with an identity card, or other track points obtained by ID dotting.
Step 602, merging at least one of the snapshot track of the monitoring cameras and the identity ID track of the target object, and performing conflict detection analysis on the merged motion track.
In the embodiment of the present disclosure, after at least one of the snapshot track of the monitoring cameras and the identity ID track of the target object is merged, abnormal track points may exist, so conflict detection analysis needs to be performed on the merged motion track. As an example, the merged motion track can be smoothed by speed to find abnormal points, and the causes of the anomalies analyzed according to the information of the abnormal points and the object information, so that clustering errors in the object information base can be corrected in time. In addition, a confidence level can be calculated for each track point according to the object information, and for track points whose confidence is lower than a threshold, related information such as the first feature information and the identity ID can be obtained, so that a worker can conveniently perform manual verification of key information points and modify the object information and ID association information in time, thereby obtaining a highly accurate motion track of the target object.
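A minimal sketch of the speed-based smoothing step, assuming track points are (time, x, y) tuples sorted by time; the speed threshold and planar-distance units are illustrative assumptions:

```python
import math

def find_speed_anomalies(track, max_speed=40.0):
    """Flag indices of track points whose implied speed from the previous
    point exceeds max_speed, or whose timestamps do not advance."""
    anomalies = []
    for i in range(1, len(track)):
        t0, x0, y0 = track[i - 1]
        t1, x1, y1 = track[i]
        dt = t1 - t0
        if dt <= 0:
            anomalies.append(i)  # out-of-order or duplicate timestamp
            continue
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        if speed > max_speed:
            anomalies.append(i)  # physically implausible jump
    return anomalies

# A merged track with one implausible jump at index 2
merged_track = [(0, 0, 0), (10, 100, 0), (11, 5000, 0)]
suspect_points = find_speed_anomalies(merged_track)
```

Points flagged here would be the ones handed to a worker for manual verification, together with the associated first feature information and identity ID.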
Step 603, tracking and positioning the target object according to the motion track after the conflict detection analysis.
It can be understood that after the collision detection analysis is performed on the motion trail of the target object, the staff performs tracking and positioning on the target object according to the analyzed motion trail, so as to process the target event.
According to the event processing method provided by the embodiment of the disclosure, the motion track of the target object is obtained according to the object information by merging at least one of the snapshot track and the identity ID track, so that the motion track of the target object is acquired through data fusion. In addition, conflict detection analysis is performed on the motion track of the target object, and key track points are manually verified, which improves the accuracy of the motion track of the target object.
In order to implement the above method, the present disclosure proposes an event processing apparatus.
Fig. 7 is a block diagram of an event processing device according to an embodiment of the present disclosure. As shown in fig. 7, the apparatus includes:
the image processing module 710 is configured to obtain an image to be detected, and perform feature extraction on the image to be detected to obtain a plurality of feature information of the image to be detected;
a first determiningmodule 720 for determining event information of a target event, the event information including at least one of occurrence place information and occurrence time information;
the retrieval module 730 is configured to search in a pre-established object information base according to the plurality of feature information of the image to be detected, and sort the search results according to the event information of the target event;
a second determiningmodule 740, configured to determine object information of the target object in the image to be detected according to the sorting result;
and the positioning module 750 is used for tracking and positioning the target object according to the object information.
In some embodiments of the present disclosure, the retrieval module 730 includes:
a search obtaining unit 730-1, configured to search in the object information base according to first feature information among the plurality of pieces of feature information of the image to be detected, to obtain at least one candidate object;
a first acquisition unit 730-2, configured to acquire spatiotemporal feature information from the object information of each candidate object;
a first calculating unit 730-3, configured to calculate a first correlation between each candidate object and the target event according to the event information of the target event and the spatiotemporal feature information of each candidate object;
a second obtaining unit 730-4, configured to obtain corresponding feature information from the object information of each candidate object according to at least one of second feature information, vehicle feature information, and spatiotemporal feature information among the plurality of pieces of feature information of the image to be detected;
a second calculating unit 730-5, configured to input the at least one piece of feature information and the corresponding feature information into a pre-established discrimination model, to obtain a second correlation between each candidate object and the target event;
a sorting unit 730-6, configured to sort the at least one candidate object according to the first correlation and the second correlation.
Furthermore, in the embodiments of the present disclosure, the retrieval module 730 further includes:
a determining unit 730-7, configured to determine whether a candidate object has participated in a specific event;
a third acquiring unit 730-8, configured to acquire description information of the specific event in response to the candidate object having participated in the specific event;
a fourth obtaining unit 730-9, configured to obtain a clue description keyword of the target event;
a third calculating unit 730-10, configured to calculate a third correlation between the candidate object and the target event according to the description information of the specific event and the clue description keyword of the target event;
wherein the sorting unit 730-6 is specifically configured to:
sort the at least one candidate object according to the first correlation, the second correlation, and the third correlation.
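As a sketch of how the sorting unit 730-6 might combine the correlations, consider a weighted sum. The class name, field names, and weights are illustrative assumptions; the disclosure does not specify a combination formula:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    first_corr: float        # spatiotemporal relevance to the target event
    second_corr: float       # feature similarity from the discrimination model
    third_corr: float = 0.0  # clue/description relevance, when available

def rank_candidates(candidates, weights=(0.4, 0.4, 0.2)):
    """Sort candidates by a weighted sum of the three correlations,
    highest score first. The weights are illustrative only."""
    w1, w2, w3 = weights
    score = lambda c: w1 * c.first_corr + w2 * c.second_corr + w3 * c.third_corr
    return sorted(candidates, key=score, reverse=True)

candidates = [
    Candidate("A", first_corr=0.9, second_corr=0.2, third_corr=0.1),
    Candidate("B", first_corr=0.5, second_corr=0.8, third_corr=0.6),
    Candidate("C", first_corr=0.1, second_corr=0.1),
]
ranked = rank_candidates(candidates)
# B scores 0.64, A scores 0.46, C scores 0.08, so the order is B, A, C
```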
In the embodiments of the present disclosure, the positioning module 750 is specifically configured to:
acquiring a motion track of the target object according to the object information, where the motion track includes at least one of a snapshot track of a monitoring camera and an identity (ID) track;
merging the at least one of the snapshot track of the monitoring camera and the ID track of the target object, and performing conflict detection analysis on the merged motion track;
and tracking and positioning the target object according to the motion track after the conflict detection analysis.
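A minimal sketch of the merging and conflict detection performed by the positioning module 750, assuming track points of the form (timestamp, source, x, y) and a maximum plausible speed; all names and the threshold are hypothetical:

```python
def merge_tracks(snapshot_track, id_track):
    """Merge the camera snapshot track and the ID track, ordered by time."""
    return sorted(snapshot_track + id_track)

def detect_conflicts(track, max_speed=30.0):
    """Flag consecutive points that imply an impossible speed (in m/s).
    Flagged pairs are the key track points handed to manual verification."""
    conflicts = []
    for (t1, _, x1, y1), (t2, _, x2, y2) in zip(track, track[1:]):
        dt = max(t2 - t1, 1e-6)                         # avoid division by zero
        dist = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        if dist / dt > max_speed:
            conflicts.append(((t1, x1, y1), (t2, x2, y2)))
    return conflicts

snapshot = [(0, "cam-1", 0.0, 0.0), (10, "cam-2", 50.0, 0.0)]
id_track = [(5, "gate-A", 5000.0, 0.0)]                 # 5 km away after 5 s
merged = merge_tracks(snapshot, id_track)
# both hops around the gate point exceed max_speed, so two pairs are flagged
```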
According to the event processing apparatus of the embodiments of the present disclosure, a plurality of pieces of feature information of the image to be detected are extracted, and retrieval is performed in the pre-established object information base according to the plurality of pieces of feature information, so that the number of retrieval results is reduced by introducing a plurality of clues. In addition, the retrieval results are sorted according to the occurrence place information and/or the occurrence time information of the target event, which introduces the correlation between the retrieval results and the target event. The retrieval results can thus be further screened, the accuracy of the retrieval results is improved, and the time spent on manual elimination is effectively shortened. Furthermore, the target object is tracked and positioned according to the acquired object information of the target object in the image to be detected, and the tracking and positioning analysis integrates multiple aspects of data, so that both the accuracy and the efficiency of event processing can be improved.
Fig. 8 is a block diagram illustrating another event processing apparatus according to an embodiment of the present disclosure. As shown in fig. 8, the apparatus further includes:
an establishing module 860, configured to pre-establish the object information base. The establishing module 860 is specifically configured to:
acquiring a monitoring video stream shot by a monitoring camera, and sampling the monitoring video stream to obtain N video frames; wherein N is a positive integer;
performing target detection on each video frame to determine M target object samples in each video frame; wherein M is a positive integer;
acquiring images of each target object sample from the N video frames, and performing feature extraction on the images to obtain a plurality of pieces of feature information of each target object sample;
establishing object information of each target object sample according to the plurality of pieces of feature information of each target object sample;
and building a base according to the object information of each target object sample to obtain the object information base.
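The library-building steps above can be sketched as follows, with `detect` and `extract` passed in as stand-ins for the target detector and the feature extractor (both hypothetical; the disclosure does not name concrete models):

```python
def build_object_info_base(video_frames, detect, extract, sample_step=5):
    """Sample the video stream, detect target object samples in each
    sampled frame, extract their features, and collect one record per
    sample.  detect(frame) -> list of object sample images;
    extract(image) -> dict mapping feature name to value."""
    sampled = video_frames[::sample_step]   # the N sampled video frames
    info_base = []
    for frame in sampled:
        for obj_img in detect(frame):       # the M samples in this frame
            info_base.append({"features": extract(obj_img), "image": obj_img})
    return info_base

# toy usage: 20 "frames", 2 detections per frame, every 5th frame kept
frames = list(range(20))
base = build_object_info_base(
    frames,
    detect=lambda f: [f, f],
    extract=lambda img: {"frame_id": img},
)
# 4 sampled frames x 2 detections per frame -> 8 records
```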
In some embodiments of the present disclosure, the establishing module 860 is specifically configured to:
acquiring a pre-established discrimination model, where the discrimination model is trained by using a plurality of pieces of feature information of object samples;
grouping the target object samples, inputting the plurality of pieces of feature information of each target object sample in each group into the discrimination model, and judging whether the target object samples in each group are the same object;
in response to the target object samples in a group being the same object, merging the plurality of pieces of feature information of the target object samples in the group to obtain object information of that object;
and in response to the target object samples in a group not being the same object, establishing object information of each target object sample according to the feature information of each target object sample in the group.
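As a sketch of this grouping-and-merging step, the greedy consolidation below uses a `same_object` predicate as a stand-in for the trained discrimination model; the function name and data shapes are assumptions for illustration:

```python
def consolidate_samples(samples, same_object):
    """Merge samples judged to be the same object into one record.
    Each sample is a dict of feature name -> value; a merged record keeps
    the union of the feature fields of its samples."""
    records = []
    for sample in samples:
        for rec in records:
            if same_object(rec, sample):
                # add only the feature fields the record does not have yet
                rec.update({k: v for k, v in sample.items() if k not in rec})
                break
        else:
            records.append(dict(sample))    # a new, distinct object
    return records

samples = [
    {"face_id": 1, "face": "f1"},
    {"face_id": 1, "gait": "g1"},           # same object, extra feature
    {"face_id": 2, "face": "f2"},
]
# stand-in for the discrimination model: compare a shared identity feature
merged = consolidate_samples(
    samples, lambda a, b: a.get("face_id") == b.get("face_id")
)
# two records remain; the first carries both the face and gait features
```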
It should be noted that 810 to 850 in fig. 8 have the same functions and structures as 710 to 750 in fig. 7, and are not described here again.
The specific manner in which the various modules of the apparatus in the above embodiments perform their operations has been described in detail in the embodiments of the method, and will not be repeated here.
According to the event processing apparatus provided by the embodiments of the present disclosure, when the object information base is established, feature extraction is performed for each target object sample to obtain the plurality of pieces of feature information corresponding to each target object sample. This effectively improves the data coverage of the object information base, greatly improves the accuracy and recall rate of the object information base, and provides a basic guarantee for accurately acquiring the information of the target object in the image to be detected and for tracking and positioning. In addition, target object samples of the same object are merged, which avoids the situation where object information of a plurality of target objects corresponds to the same object, and further improves the accuracy and recall rate of the object information base.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
Fig. 9 is a block diagram of an electronic device for implementing the event processing method according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device includes: one or more processors 901, a memory 902, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). In fig. 9, one processor 901 is taken as an example.
The memory 902 is a non-transitory computer-readable storage medium provided by the present disclosure. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the event processing method provided by the present disclosure. The non-transitory computer-readable storage medium of the present disclosure stores computer instructions for causing a computer to perform the event processing method provided by the present disclosure.
The memory 902, as a non-transitory computer-readable storage medium, is used for storing non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the event processing method in the embodiments of the present disclosure (e.g., the image processing module 710, the first determining module 720, the retrieval module 730, the second determining module 740, and the positioning module 750 shown in fig. 7). The processor 901 executes various functional applications and data processing of the server, i.e., implements the event processing method in the above method embodiments, by running the non-transitory software programs, instructions, and modules stored in the memory 902. The present disclosure also provides a computer program product including a computer program which, when executed by the processor 901, implements the event processing method in the above method embodiments.
The memory 902 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the electronic device for event processing, and the like. In addition, the memory 902 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 902 optionally includes memory remotely located relative to the processor 901, and the remote memory may be connected to the electronic device for event processing via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the event processing method may further include: an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903, and the output device 904 may be connected by a bus or in other manners; connection by a bus is taken as an example in fig. 9.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for event processing, and may be, for example, a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, a joystick, or the like. The output device 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and the server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and solves the defects of difficult management and weak service scalability in conventional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server combined with a blockchain.
It should be appreciated that steps may be reordered, added, or deleted in the various flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved, and no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.