Summary of the invention
Based on this, in view of the above technical problems, it is necessary to provide a labor-saving match monitoring method, apparatus, computer device, and storage medium.
A match monitoring method, the method comprising:
receiving a video to be monitored, obtaining a first extraction time period, and querying, according to the first extraction time period, first target emotion information of a facial image included in the video to be monitored;
obtaining a first target emotion score corresponding to the first target emotion information;
obtaining a second extraction time period, and querying, according to the second extraction time period, second target emotion information of the facial image included in the video to be monitored;
obtaining a second target emotion score corresponding to the second target emotion information;
calculating a difference between the first target emotion score and the second target emotion score, and when the difference exceeds a threshold, querying action information included in the video to be monitored corresponding to the first extraction time period; and
when the action information contains suspicious action information, querying first identity information corresponding to the suspicious action information, and outputting the first identity information.
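The claimed flow can be sketched in Python. The emotion scores, the threshold, the suspicious-action set, and the identity table below are all hypothetical illustrations, not values disclosed in this application:

```python
# Illustrative sketch of the claimed method; all concrete values are assumptions.
EMOTION_SCORES = {"calm": 90, "happy": 85, "nervous": 40, "depressed": 30}
SUSPICIOUS_ACTIONS = {"hand_signal", "glance_at_device"}

def monitor(first_emotion, second_emotion, actions, identities, threshold=30):
    """Return the first identity tied to a suspicious action, or None."""
    diff = EMOTION_SCORES[first_emotion] - EMOTION_SCORES[second_emotion]
    if abs(diff) <= threshold:
        return None          # emotion swing within tolerance: no action query
    for action in actions:   # actions are queried only once the threshold is exceeded
        if action in SUSPICIOUS_ACTIONS:
            return identities.get(action)
    return None
```

Note that action recognition runs only after the score difference exceeds the threshold, which is the labor-saving point of the claim.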
In one embodiment, the method further comprises:
extracting, from the video to be monitored, audio to be monitored corresponding to the first time period;
extracting an audio keyword from the audio to be monitored, and querying whether the keyword is a suspicious keyword; and
when the keyword is a suspicious keyword, querying second identity information corresponding to the suspicious keyword, and outputting the second identity information.
In one embodiment, the method further comprises:
receiving detection prompt information for a communication signal, and when the detection prompt information indicates that a communication signal is present, extracting a current position of the communication signal carried in the detection prompt information;
comparing the current position with a preset position; and
when an offset between the current position and the preset position exceeds a preset value, outputting inspection information for inspecting communication equipment.
In one embodiment, the querying, according to the first extraction time period, the first target emotion information of the facial image included in the video to be monitored comprises:
extracting, from the video to be monitored, a first image to be monitored corresponding to the first extraction time period, and recognizing first current emotion information of the facial image included in the first image to be monitored;
counting a first number of image frames of the first image to be monitored corresponding to the first current emotion information; and
obtaining the first target emotion information according to the first current emotion information and the first number of image frames.
In one embodiment, the recognizing the first current emotion information included in the first image to be monitored comprises:
receiving emotion probabilities that a current emotion corresponding to the first image to be monitored is each standard emotion;
sorting the emotion probabilities, and extracting, according to the sorted emotion probabilities, a number of standard emotions corresponding to a preset number;
judging whether emotion types corresponding to the extracted standard emotions are identical; and
when the emotion types corresponding to the extracted standard emotions differ, taking the emotion type corresponding to the standard emotion with the largest emotion probability as the first current emotion information.
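A minimal sketch of this selection logic, assuming a hypothetical probability dictionary and a hypothetical emotion-to-type mapping (neither is disclosed in the application), including the summed "target probability" tie-break described in a later embodiment:

```python
def first_current_emotion(probs, emotion_type, preset_number=3):
    """Pick the current emotion from per-standard-emotion probabilities.

    probs: {standard_emotion: probability}; emotion_type: emotion -> type.
    """
    # Sort probabilities descending and keep the top preset_number standard emotions.
    top = sorted(probs, key=probs.get, reverse=True)[:preset_number]
    types = [emotion_type[e] for e in top]
    if len(set(types)) == len(types):
        # All extracted types differ: take the type of the most probable emotion.
        return emotion_type[top[0]]
    # Some types repeat: sum probabilities per type (the "target probability")
    # and return the type with the largest total.
    totals = {}
    for e in top:
        totals[emotion_type[e]] = totals.get(emotion_type[e], 0.0) + probs[e]
    return max(totals, key=totals.get)
```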
In one embodiment, the querying, according to the second extraction time period, the second target emotion information of the facial image included in the video to be monitored comprises:
extracting, from the video to be monitored, a second image to be monitored corresponding to the second extraction time period, and recognizing second current emotion information of the facial image included in the second image to be monitored;
counting a second number of image frames of the second image to be monitored corresponding to the second current emotion information; and
obtaining the second target emotion information according to the second current emotion information and the second number of image frames.
In one embodiment, the recognizing the second current emotion information included in the second image to be monitored comprises:
receiving emotion probabilities that a current emotion corresponding to the second image to be monitored is each standard emotion;
sorting the emotion probabilities, and extracting, according to the sorted emotion probabilities, a number of standard emotions corresponding to a preset number;
judging whether emotion types corresponding to the extracted standard emotions are identical; and
when the emotion types corresponding to the extracted standard emotions differ, taking the emotion type corresponding to the standard emotion with the largest emotion probability as the second current emotion information.
A match monitoring apparatus, the apparatus comprising:
a receiving module, configured to receive a video to be monitored, obtain a first extraction time period, and query, according to the first extraction time period, first target emotion information of a facial image included in the video to be monitored;
a first obtaining module, configured to obtain a first target emotion score corresponding to the first target emotion information;
a second obtaining module, configured to obtain a second extraction time period, and query, according to the second extraction time period, second target emotion information of the facial image included in the video to be monitored;
a third obtaining module, configured to obtain a second target emotion score corresponding to the second target emotion information;
a computing module, configured to calculate a difference between the first target emotion score and the second target emotion score, and when the difference exceeds a threshold, query action information included in the video to be monitored corresponding to the first extraction time period; and
a first output module, configured to, when the action information contains suspicious action information, query first identity information corresponding to the suspicious action information and output the first identity information.
A computer device comprises a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the above method when executing the computer program.
A computer-readable storage medium has a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the above method.
With the above match monitoring method, apparatus, computer device, and storage medium, there is no need to arrange multiple judges to monitor a match. It is only necessary to receive the video to be monitored, obtain the first extraction time period, query the first target emotion information corresponding to the first extraction time period in the video to be monitored, obtain the first target emotion score corresponding to the first target emotion information, obtain the second extraction time period, query the second target emotion information corresponding to the second extraction time period in the video to be monitored, obtain the second target emotion score corresponding to the second target emotion information, and calculate the difference between the first target emotion score and the second target emotion score. When the difference exceeds the threshold, the action information included in the video to be monitored corresponding to the first extraction time period is queried; when the action information contains suspicious action information, the first identity information corresponding to the suspicious action information is queried and output, thereby saving manpower.
Specific embodiment
In order to make the objects, technical solutions, and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are only used to explain the present application and are not intended to limit the present application.
The match monitoring method provided by the present application can be applied in the application environment shown in Fig. 1, in which a terminal 102 communicates with a server 104 through a network. The server 104 receives the video to be monitored shot by the terminal 102, obtains the first extraction time period, and queries the first target emotion information corresponding to the first extraction time period in the video to be monitored. The server 104 then obtains the first target emotion score corresponding to the first target emotion information, obtains the second extraction time period, queries the second target emotion information corresponding to the second extraction time period in the video to be monitored, and obtains the second target emotion score corresponding to the second target emotion information. The server 104 calculates the difference between the first target emotion score and the second target emotion score; when the difference exceeds the threshold, the action information included in the video to be monitored corresponding to the first extraction time period is queried, and when the action information contains suspicious action information, the first identity information corresponding to the suspicious action information is queried and output. The terminal 102 may be, but is not limited to, a personal computer, a laptop, a smartphone, a tablet computer, or a photographic device; the server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a match monitoring method is provided. Taking the method applied to the server in Fig. 1 as an example, the method comprises the following steps:
S202: receiving a video to be monitored, obtaining a first extraction time period, and querying, according to the first extraction time period, first target emotion information of a facial image included in the video to be monitored.
Specifically, the video to be monitored refers to a match video recorded by a terminal during the match; the match may be a chess-type competition or the like. The first extraction time period refers to the time period during which a corresponding sub-video is extracted from the video to be monitored, so that the server can analyze the emotion information of the facial images included in the sub-video. The first time period may be configured according to a preset time; for example, in a chess-type match, the first time period may be the period during which every player makes one move or plays one piece. The first target emotion information refers to the overall emotion of the facial images included in the video to be monitored within the first time period. Specifically, a terminal arranged at the match venue may capture the video to be monitored in real time and send it to the server. When the server receives the video to be monitored, it obtains the first extraction time period, extracts from the video to be monitored the image to be monitored corresponding to the first extraction time period, and recognizes the facial image in the image to be monitored. The server obtains the micro-expression information in the recognized facial image and queries the emotion associated with the micro-expression information as the first target emotion information. Alternatively, the server receives the video to be monitored, obtains the first extraction time period, finds the corresponding image to be monitored within the first extraction time period, and extracts each frame of the image to be monitored. It then obtains the different identity marks, finds the pre-stored images corresponding to the identity marks, and matches the pre-stored images against the different persons included in the image to be monitored; the matching may be performed using a trained recognition algorithm. Facial images corresponding to the different identity marks are thus found, the micro-expression information of the facial image corresponding to each identity mark is obtained, and the emotion corresponding to the micro-expression information is taken as the first target emotion information.
S204: obtaining a first target emotion score corresponding to the first target emotion information.
Specifically, the first target emotion score refers to a preset score corresponding to the first target emotion information; for example, the steadier or more optimistic the emotion, the higher the score. When the server queries the first target emotion, it obtains the pre-stored emotions to be matched and matches the first target emotion against them; when the matching succeeds, the score associated with the matched emotion is taken as the first target emotion score.
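Matching a target emotion against pre-stored emotions to obtain its score reduces to a table lookup. The table below is a hypothetical illustration of the "steadier or more optimistic emotions score higher" rule; no actual scores are disclosed:

```python
# Hypothetical pre-stored emotions and scores; steadier or more optimistic
# emotions carry higher scores, as described above.
STORED_SCORES = {"calm": 90, "optimistic": 85, "nervous": 40, "depressed": 30}

def target_emotion_score(target_emotion):
    """Match the target emotion against the pre-stored emotions; on a
    successful match, return the score associated with that emotion."""
    for stored, score in STORED_SCORES.items():
        if stored == target_emotion:
            return score
    raise KeyError(f"no stored emotion matches {target_emotion!r}")
```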
S206: obtaining a second extraction time period, and querying, according to the second extraction time period, second target emotion information of the facial image included in the video to be monitored.
Specifically, the second extraction time period refers to a time period, different from the first time period, during which a corresponding sub-video is extracted from the video to be monitored; the second extraction time period may be later than the first extraction time period, for example the time period immediately following the first time period. The second target emotion information refers to the overall emotion of the facial images included in the video to be monitored within the second time period. Specifically, the server obtains the second extraction time period, extracts from the received video to be monitored the image to be monitored corresponding to the second time period, and recognizes the facial image in the image to be monitored. The server obtains the micro-expression information in the recognized facial image and queries the emotion associated with the micro-expression information as the second emotion information. Alternatively, the server obtains the second extraction time period, which is the time period following the first extraction time period, finds the image to be monitored corresponding to the second time period, and extracts each frame of the image to be monitored. The person images recognized in the above steps are matched against the persons included in the video to be monitored within the second time period to obtain the corresponding facial images; the matching may be performed using a trained recognition algorithm. When the facial images corresponding to the above steps are obtained, the micro-expression information of each corresponding facial image is obtained, and the emotion corresponding to the micro-expression information is taken as the second target emotion information.
S208: obtaining a second target emotion score corresponding to the second target emotion information.
Specifically, the second target emotion score refers to a preset score corresponding to the second target emotion information; for example, the steadier or more optimistic the emotion, the higher the score. When the server queries the second target emotion, it obtains the pre-stored emotions to be matched and matches the second target emotion against them; when the matching succeeds, the score associated with the matched emotion is taken as the second target emotion score.
S210: calculating a difference between the first target emotion score and the second target emotion score, and when the difference exceeds a threshold, querying action information included in the video to be monitored corresponding to the first extraction time period.
Specifically, when the server obtains the first target emotion score and the second target emotion score, it calculates the difference between the two scores, obtains the corresponding threshold, and compares the calculated difference with the threshold. When the difference exceeds the threshold, the emotion change between the first extraction time period and the second extraction time period is excessive, so further monitoring is performed in order to prevent cheating during the match. That is, when the difference exceeds the threshold, the action information included in the video to be monitored within the first extraction time period is queried. For example, the images to be monitored included in the video to be monitored within the first extraction time period may be input into a trained action recognition model for recognition. The action recognition model can extract the required feature points, such as the corresponding joint points, and then detect the corresponding action information according to the trained relationship between the joint points and the corresponding movement displacements, so that this action information is taken as the action information included in the video to be monitored corresponding to the first extraction time period. It should be noted that the difference may be calculated separately between the first target emotion score and the second target emotion score of each facial image, so that when the difference corresponding to at least one facial image exceeds the threshold, only the action information of the facial images whose difference exceeds the threshold is queried in the video to be monitored within the first extraction time period. That is, the action information corresponding to every person image need not be queried, which improves the search efficiency.
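The per-face variant noted above, where actions are queried only for faces whose score swing exceeds the threshold, can be sketched as follows; the face identifiers and scores are hypothetical:

```python
def faces_to_inspect(first_scores, second_scores, threshold):
    """first_scores / second_scores: {face_id: emotion score} for the two
    extraction time periods. Only faces whose per-face score difference
    exceeds the threshold are returned, so the action recognition model
    is run on fewer faces, improving search efficiency."""
    return [face_id
            for face_id, s1 in first_scores.items()
            if face_id in second_scores
            and abs(s1 - second_scores[face_id]) > threshold]
```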
S212: when the action information contains suspicious action information, querying first identity information corresponding to the suspicious action information, and outputting the first identity information.
Specifically, the first identity information refers to the identity mark of the person corresponding to the suspicious action information, and the facial image corresponding to this first identity mark may be the facial image whose difference between the first target emotion and the second target emotion exceeds the threshold. The first identity mark may be a corresponding name, ID card number, competition number, or the like. Specifically, when the server obtains the action information, it matches the obtained action information against the pre-stored suspicious action information. When the matching succeeds, i.e., the recognized action information contains suspicious action information, the first identity information corresponding to the suspicious action information is queried and output, so that whether the person is suspected of cheating can be further verified against the identity information.
In the above match monitoring method, no manual analysis is required. It is only necessary to receive the video to be monitored, obtain the first extraction time period, query, according to the first extraction time period, the first target emotion information of the facial image included in the video to be monitored, obtain the first target emotion score corresponding to the first target emotion information, obtain the second extraction time period, query, according to the second extraction time period, the second target emotion information of the facial image included in the video to be monitored, obtain the second target emotion score corresponding to the second target emotion information, and calculate the difference between the first target emotion score and the second target emotion score. When the difference exceeds the threshold, the action information included in the video to be monitored corresponding to the first extraction time period is queried; when the action information contains suspicious action information, the first identity information corresponding to the suspicious action information is queried and output. This saves manpower and improves monitoring efficiency, avoids the inaccuracy of manual analysis, and, by combining the target emotion information with the action information during monitoring, improves the accuracy of the monitoring.
In one embodiment, referring to Fig. 3, a schematic flowchart of a suspicious keyword monitoring step is provided. That is, the match monitoring method may further comprise: extracting, from the video to be monitored, audio to be monitored corresponding to the first time period; extracting an audio keyword from the audio to be monitored, and querying whether the keyword is a suspicious keyword; and when the keyword is a suspicious keyword, querying second identity information corresponding to the suspicious keyword, and outputting the second identity information.
Specifically, during match monitoring, the audio included in the video to be monitored may also be monitored, so as to query whether any prohibited speech occurs during the match. The second identity information refers to the identity information corresponding to the person who utters suspicious speech during the match; the second identity information may be a name, ID card number, competition number, or the like. The server extracts the audio to be monitored corresponding to the first time period from the video to be monitored, and may add the corresponding identity marks to the extracted audio to be monitored by means of voiceprint recognition. Alternatively, the identity marks may be added to the first extraction sub-periods included in the audio to be monitored, and the persons speaking at different times in the audio to be monitored may also be marked. The server then obtains the text conversion logic corresponding to the audio to be monitored, converts the audio to be monitored into text to be monitored through the text conversion logic, obtains the word segmentation logic, segments the text to be monitored according to the segmentation logic to obtain segmentation sequences of the text to be monitored, calculates the segmentation accuracy of each segmentation sequence, and takes the segmentation sequence with the highest accuracy as the audio keywords. The server obtains the pre-stored suspicious keywords and matches the extracted audio keywords against them; when the matching succeeds, the audio keyword is a suspicious keyword. The first extraction sub-period in which the suspicious keyword occurs is then queried, and the identity mark carried in that sub-period is queried, so that this identity mark is output as the second identity mark; that is, the player corresponding to this identity mark uttered the corresponding suspicious speech and needs further monitoring. The segmentation accuracy refers to the accuracy of the different segmented phrases in each preset segmentation sequence, obtained when the segmentation sequences of the different texts to be monitored are obtained.
In this embodiment, the server can also monitor the audio to be monitored, thereby querying whether the audio to be monitored contains a suspicious keyword. When a suspicious keyword is contained, the identity information corresponding to the suspicious keyword, namely the second identity information, is output. Monitoring can thus be performed from different dimensions, which improves the accuracy of the monitoring.
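The audio branch, transcribe speech, segment it, and match against a pre-stored suspicious-keyword list, can be sketched as below. The transcript format, speaker labels, and keyword list are hypothetical, and real speech-to-text, voiceprint, and segmentation logic are replaced by simple string splitting:

```python
# Hypothetical suspicious-keyword list; a real system would load this from
# pre-stored data and use actual speech-to-text and voiceprint modules.
SUSPICIOUS_KEYWORDS = {"signal", "swap"}

def second_identities(transcript):
    """transcript: list of (sub_period, identity, text) tuples, one per
    utterance. Returns identities whose speech contains a suspicious
    keyword, i.e. the second identity information to be output."""
    flagged = []
    for sub_period, identity, text in transcript:
        words = text.lower().split()   # stand-in for the word-segmentation logic
        if any(word in SUSPICIOUS_KEYWORDS for word in words):
            flagged.append(identity)
    return flagged
```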
In one embodiment, the match monitoring method may further comprise: receiving detection prompt information for a communication signal, and when the detection prompt information indicates that a communication signal is present, extracting a current position of the communication signal carried in the detection prompt information; comparing the current position with a preset position; and when an offset between the current position and the preset position exceeds a preset value, outputting inspection information for inspecting communication equipment.
Specifically, during a match such as a chess-type competition, the communication equipment carried by the players must be placed at a fixed position; it is therefore possible to monitor whether a player is carrying communication equipment or has failed to place the communication equipment at the fixed position. Specifically, a detection device is arranged in the competition area. The detection device can detect whether a communication signal is present, generates detection prompt information from the detection result, and sends the detection prompt information to the server. The detection prompt information generated by the detection device may be sent at preset intervals, for example every 2 minutes; alternatively, when no communication signal is detected, the detection prompt information may be sent according to the preset interval, and when a communication signal is detected, the detection prompt information may be sent in real time. The server receives the detection prompt information for the communication signal sent by the detection device. When the detection prompt information indicates that a communication signal is present, the current position of the detected communication signal is extracted from the detection prompt information; this current position may be a coordinate. The server then obtains the preset position, namely the position coordinates of the preset position, and compares the current position with the preset position. This may be done by computing from the current position coordinates and the preset position coordinates, i.e., querying whether the current position deviates from the preset position. When it deviates, the value of the offset is queried and compared with the preset value. When the offset exceeds the preset value, it is necessary to check whether a player in the competition area is using communication equipment, so the inspection information for inspecting the communication equipment is output; the inspection information may carry the current position corresponding to the communication signal.
In this embodiment, whether communication equipment is used during the match can be queried. That is, the server can receive the detection prompt information for the communication signal; when the detection prompt information indicates that a communication signal is present, the current position of the communication signal carried in the detection prompt information is extracted and compared with the preset position, and when the offset between the current position and the preset position exceeds the preset value, the inspection information for inspecting the communication equipment is output, which enhances the applicability.
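The position check reduces to a distance comparison between the detected coordinate and the preset coordinate; the coordinates and the offset limit below are hypothetical:

```python
import math

def needs_inspection(current_pos, preset_pos, max_offset):
    """Compare the detected communication-signal position with the preset
    storage position; an offset beyond max_offset suggests the equipment
    is not at its fixed place, so inspection information should be output."""
    offset = math.dist(current_pos, preset_pos)  # Euclidean offset
    return offset > max_offset
```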
In one embodiment, the querying, according to the first extraction time period, the first target emotion information of the facial image included in the video to be monitored comprises: extracting, from the video to be monitored, a first image to be monitored corresponding to the first extraction time period, and recognizing first current emotion information of the facial image included in the first image to be monitored; counting a first number of image frames of the first image to be monitored corresponding to the first current emotion information; and obtaining the first target emotion information according to the first current emotion information and the first number of image frames.
Specifically, the first current emotion information refers to the current emotion of each of the different persons included in every frame of the image to be monitored. Specifically, when the server obtains the video to be monitored, it extracts every frame of the image to be monitored from the video, extracts the facial image from each frame, queries the corresponding micro-expression information in the facial image, queries the first current emotion information according to the micro-expression information, counts the first number of image frames corresponding to each item of first current emotion information, and takes the first current emotion information with the largest number of frames as the first target emotion information. For example, the server extracts, from the video to be monitored, the first image to be monitored corresponding to the first extraction time period, and extracts the facial image from the first image to be monitored. Extracting the facial image may consist of recognizing the facial features included in the first image to be monitored and identifying the corresponding facial region according to the facial features, so that this facial region is taken as the facial image. The corresponding micro-expression information is then obtained from the facial image in every frame of the image to be assessed, and the preset emotion information, namely the first current emotion information, is obtained according to the micro-expression information of the facial image in every frame. The first number of image frames of the first image to be monitored is then queried. For example, if the first image to be monitored totals 100 frames and the obtained first current emotion information comprises happy, nervous, and depressed, then the number of frames showing a happy image, the number of frames showing a nervous image, and the number of frames showing a depressed image among the 100 frames are counted separately, and the first current emotion information appearing in the most frames is taken as the first target emotion information.
In this embodiment, the server can extract, from the video to be monitored, the first image to be monitored corresponding to the first extraction time period, recognize the first current emotion information of the facial image included in the first image to be monitored, count the first number of image frames of the first image to be monitored corresponding to the first current emotion information, and obtain the first target emotion information according to the first current emotion information and the first number of image frames, so that obtaining the first target emotion information is simple and the monitoring efficiency is improved.
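The frame counting described above is a majority vote over per-frame emotions; the 100-frame example can be reproduced with `collections.Counter` (the frame labels are hypothetical):

```python
from collections import Counter

def first_target_emotion(frame_emotions):
    """frame_emotions: the recognized first current emotion of each frame.
    The emotion appearing in the most frames is the first target emotion."""
    return Counter(frame_emotions).most_common(1)[0][0]
```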
In one embodiment, the recognizing the first current emotion information included in the first image to be monitored comprises: receiving emotion probabilities that the current emotion corresponding to the first image to be monitored is each standard emotion; sorting the emotion probabilities, and extracting, according to the sorted emotion probabilities, a number of standard emotions corresponding to a preset number; judging whether the emotion types corresponding to the extracted standard emotions are identical; and when the emotion types corresponding to the extracted standard emotions differ, taking the emotion type corresponding to the standard emotion with the largest emotion probability as the first current emotion information.
Specifically, a standard emotion refers to the emotion corresponding to a preset micro-expression; the preset micro-expressions may be micro-expressions of different kinds, for example 54 kinds of micro-expressions. The emotion probability refers to the probability of each preset micro-expression obtained by the trained micro-expression recognition model; the larger the emotion probability, the more likely that kind of micro-expression is. The emotion type refers to the different emotion partitions into which the emotions corresponding to the different micro-expressions are classified; similar emotions may be taken as the same emotion type, i.e., similar emotions among those corresponding to the 54 preset micro-expressions are taken as the same emotion type. The target probability refers to the probability associated with the standard emotions corresponding to the same emotion type; the target probability may be calculated from the emotion probabilities corresponding to the different standard emotions of the same emotion type, for example by adding those emotion probabilities together.
Specifically, the server receives, for the face image included in each frame of the image to be monitored, the mood probability that the current emotion is each standard mood. The server then sorts the received mood probabilities, for example from large to small, obtains the preset quantity, and extracts from the sorted mood probabilities the standard moods of the number corresponding to the preset quantity. The server then queries the emotion types corresponding to the extracted standard moods and judges whether they are the same. When the emotion types corresponding to the extracted standard moods are different, the server queries the emotion type corresponding to the standard mood with the largest mood probability and takes that emotion type as the first current emotion information. When some standard moods correspond to the same emotion type, then, for accurate judgment, the server avoids directly choosing the emotion category of the standard mood with the largest mood probability as the first current emotion information; instead, it queries the mood probabilities of the different standard moods corresponding to the same emotion type, sums them to obtain a target probability, compares the target probability with the mood probabilities of the standard moods of the other emotion types, and takes the emotion type corresponding to the larger result as the first current emotion information.

For example, when the server receives the face images in the images to be monitored, it receives, for each frame, the probability that the current emotion of the face image is each standard mood; that is, it may first query the mood probabilities of the 54 standard moods corresponding to the first frame. The acquired mood probabilities are sorted, for example from large to small, and the preset quantity is obtained. If the preset quantity is 3, the standard moods whose sorted mood probabilities rank in the top three are extracted, and the emotion types corresponding to these standard moods are queried. If the emotion types are, respectively, happy, low and nervous — that is, the emotion types are all different — the emotion type corresponding to the standard mood with the largest probability, for example happy, is queried; happy is then the first current emotion information of the face image in the first frame. The first current emotion information of the other frames and of different face images can be identified in the same way. It should be noted that, in this embodiment, the mood probability may be obtained by a corresponding emotion recognition server: the server acquires the face image, collects preset expression features from the face image, inputs the expression features into the trained micro-expression recognition model for recognition, and obtains the probability that the current expression is each micro-expression, namely the mood probability. If, instead, the emotion types corresponding to the top-three standard moods include the same emotion type — for example the first-ranked standard mood corresponds to happy, while the second-ranked and third-ranked standard moods both correspond to low — the server sums the mood probabilities of the standard moods of the shared emotion type as the target probability, compares the target probability with the mood probability of the standard mood of the happy category, and takes the emotion type corresponding to the larger probability as the first current emotion information.
In the present embodiment, when analyzing the first current emotion information of the face image, the mood probability that the current emotion is each standard mood can be queried; the mood probabilities are then sorted directly, the standard moods of the number corresponding to the preset quantity are extracted according to the sorted mood probabilities, and whether the emotion types corresponding to the extracted standard moods are the same is judged. When the emotion types are different, the emotion type corresponding to the standard mood with the largest mood probability is obtained as the first current emotion information. No manual analysis is needed, which improves the efficiency of querying the first current emotion information, avoids the inaccuracy caused by subjective analysis, and improves the accuracy of emotion analysis. When the extracted standard moods correspond to the same emotion type, directly adopting the emotion category of the standard mood with the largest probability as the first current emotion information — which would be inaccurate — is avoided: the standard moods of the same emotion type are queried, the target probability is calculated from their mood probabilities, the maximum among the target probability and the mood probabilities of the standard moods of the other emotion types is obtained, and the emotion type corresponding to that maximum is taken as the first current emotion information, which guarantees the accuracy of the obtained first current emotion information.
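The top-N selection step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the mood labels, the emotion-type grouping `emotion_type_of`, and the preset quantity of 3 are all hypothetical examples.

```python
from collections import defaultdict

def select_current_emotion(mood_probs, emotion_type_of, preset_quantity=3):
    """mood_probs: {standard_mood: probability} for one face image.
    emotion_type_of: {standard_mood: emotion_type} grouping similar moods."""
    # Sort the mood probabilities from large to small and keep the top N.
    top = sorted(mood_probs.items(), key=lambda kv: kv[1], reverse=True)[:preset_quantity]
    types = [emotion_type_of[mood] for mood, _ in top]
    if len(set(types)) == len(types):
        # All extracted standard moods belong to different emotion types:
        # take the type of the mood with the largest probability.
        return emotion_type_of[top[0][0]]
    # Some extracted moods share an emotion type: sum their probabilities
    # into a target probability per type, then take the largest.
    target = defaultdict(float)
    for mood, prob in top:
        target[emotion_type_of[mood]] += prob
    return max(target, key=target.get)
```

With the worked example from the text (happy 0.45 ranked first, two "low" moods at 0.40 and 0.35), the summed target probability 0.75 for "low" wins over 0.45 for "happy".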
In one embodiment, querying, according to the second extraction time period, the second target emotion information of the face image included in the video to be monitored comprises: extracting from the video to be monitored a second image to be monitored corresponding to the second extraction time period, and identifying second current emotion information of the face image included in the second image to be monitored; counting a second image frame number of the second images to be monitored corresponding to the second current emotion information; and obtaining the second target emotion information according to the second current emotion information and the second image frame number.
Specifically, the second current emotion information refers to the current mood of the different persons included in each frame of the image to be monitored corresponding to the second extraction time period. When the server obtains the video to be monitored, it extracts each frame of the image to be monitored from the video, extracts the face image from each frame, queries the micro-expression information corresponding to the face image, queries the second current emotion information according to the micro-expression information, counts the second image frame number corresponding to each second current emotion information, and takes the second current emotion information with the largest frame count as the second target emotion information. For example, the server extracts from the video to be monitored the second images to be monitored corresponding to the second extraction time period and extracts the face image from each second image to be monitored. Extracting the face image may be: identifying the face features included in the second image to be monitored, identifying the corresponding face region according to the face features, and taking that face region as the face image. The micro-expression information corresponding to the face image in each frame is then obtained, preset emotion information — namely the second current emotion information — is obtained according to the micro-expression information, and the second image frame number of the second images to be monitored is queried. For example, if the second image frame number is 100 frames in total and the obtained second current emotion information includes happy, nervous and low, the server counts, among the 100 second frames, the number of frames in which happy appears, the number of frames in which nervous appears, and the number of frames in which low appears, and takes the second current emotion information with the largest frame count as the second target emotion information.
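The frame-counting step above amounts to a majority vote over per-frame labels. A minimal sketch, assuming the per-frame current emotion labels have already been identified (the label names are illustrative):

```python
from collections import Counter

def target_emotion(per_frame_emotions):
    """per_frame_emotions: list of current-emotion labels, one per frame.
    Returns the label that appears in the most frames, which the text
    takes as the target emotion information for the extraction period."""
    counts = Counter(per_frame_emotions)
    return counts.most_common(1)[0][0]
```

For the 100-frame example in the text, 60 "happy" frames, 25 "nervous" frames and 15 "low" frames would yield "happy".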
In the present embodiment, the server may extract, from the video to be monitored, a second image to be monitored corresponding to the second extraction time period; identify the second current emotion information of the face image included in the second image to be monitored; count the second image frame number of the second images to be monitored corresponding to the second current emotion information; and obtain the second target emotion information according to the second current emotion information and the second image frame number. Obtaining the second target emotion information is therefore simple, and monitoring efficiency is improved.
In one embodiment, identifying the second current emotion information included in the second image to be monitored comprises: receiving the mood probability that the current emotion corresponding to the second image to be monitored is each standard mood; sorting the mood probabilities, and extracting, according to the sorted mood probabilities, the standard moods of the number corresponding to the preset quantity; judging whether the emotion types corresponding to the extracted standard moods are the same; and, when the emotion types corresponding to the extracted standard moods are different, obtaining the emotion type corresponding to the standard mood with the largest mood probability as the second current emotion information.
Specifically, as above, a standard mood refers to the mood corresponding to a preset micro-expression; the preset micro-expressions may be micro-expressions of different categories, for example 54 kinds of micro-expressions. A mood probability refers to the probability of each preset micro-expression obtained by the trained micro-expression recognition model; the larger the mood probability, the higher the possibility of that micro-expression. An emotion type refers to a partition into which the moods corresponding to different micro-expressions are classified: similar moods may be taken as the same emotion type, that is, among the moods corresponding to the 54 preset micro-expressions, similar moods are grouped into one emotion type. A target probability refers to the probability associated with standard moods corresponding to the same emotion type; it may be calculated from the mood probabilities of the different standard moods of the same emotion type, for example by adding those mood probabilities together.
Specifically, the server receives, for the face image included in each frame of the image to be monitored, the mood probability that the current emotion is each standard mood. The server then sorts the received mood probabilities, for example from large to small, obtains the preset quantity, and extracts from the sorted mood probabilities the standard moods of the number corresponding to the preset quantity. The server then queries the emotion types corresponding to the extracted standard moods and judges whether they are the same. When the emotion types corresponding to the extracted standard moods are different, the server queries the emotion type corresponding to the standard mood with the largest mood probability and takes that emotion type as the second current emotion information. When some standard moods correspond to the same emotion type, then, for accurate judgment, the server avoids directly choosing the emotion category of the standard mood with the largest mood probability as the second current emotion information; instead, it queries the mood probabilities of the different standard moods corresponding to the same emotion type, sums them to obtain a target probability, compares the target probability with the mood probabilities of the standard moods of the other emotion types, and takes the emotion type corresponding to the larger result as the second current emotion information.

For example, when the server receives the face images in the images to be monitored, it receives, for each frame, the probability that the current emotion of the face image is each standard mood; that is, it may first query the mood probabilities of the 54 standard moods corresponding to the second frame. The acquired mood probabilities are sorted, for example from large to small, and the preset quantity is obtained. If the preset quantity is 3, the standard moods whose sorted mood probabilities rank in the top three are extracted, and the emotion types corresponding to these standard moods are queried. If the emotion types are, respectively, happy, low and nervous — that is, the emotion types are all different — the emotion type corresponding to the standard mood with the largest probability, for example happy, is queried; happy is then the second current emotion information of the face image in the second frame. The second current emotion information of the other frames and of different face images can be identified in the same way. It should be noted that, in this embodiment, the mood probability may be obtained by a corresponding emotion recognition server: the server acquires the face image, collects preset expression features from the face image, inputs the expression features into the trained micro-expression recognition model for recognition, and obtains the probability that the current expression is each micro-expression, namely the mood probability. If, instead, the emotion types corresponding to the top-three standard moods include the same emotion type — for example the first-ranked standard mood corresponds to happy, while the second-ranked and third-ranked standard moods both correspond to low — the server sums the mood probabilities of the standard moods of the shared emotion type as the target probability, compares the target probability with the mood probability of the standard mood of the happy category, and takes the emotion type corresponding to the larger probability as the second current emotion information.
In the present embodiment, when analyzing the second current emotion information of the face image, the mood probability that the current emotion is each standard mood can be queried; the mood probabilities are then sorted directly, the standard moods of the number corresponding to the preset quantity are extracted according to the sorted mood probabilities, and whether the emotion types corresponding to the extracted standard moods are the same is judged. When the emotion types are different, the emotion type corresponding to the standard mood with the largest mood probability is obtained as the second current emotion information. No manual analysis is needed, which improves the efficiency of querying the second current emotion information, avoids the inaccuracy caused by subjective analysis, and improves the accuracy of emotion analysis. When the extracted standard moods correspond to the same emotion type, directly adopting the emotion category of the standard mood with the largest probability as the second current emotion information — which would be inaccurate — is avoided: the standard moods of the same emotion type are queried, the target probability is calculated from their mood probabilities, the maximum among the target probability and the mood probabilities of the standard moods of the other emotion types is obtained, and the emotion type corresponding to that maximum is taken as the second current emotion information, which guarantees the accuracy of the obtained second current emotion information.
It should be understood that although the steps in the flowcharts of Figs. 2-3 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order limitation on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-3 may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential — they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 4, a match monitoring apparatus 400 is provided, comprising: a receiving module 410, a first obtaining module 420, a second obtaining module 430, a third obtaining module 440, a computing module 450 and an output module 460, wherein:

The receiving module 410 is configured to receive the video to be monitored, obtain the first extraction time period, and query, according to the first extraction time period, the first target emotion information of the face image included in the video to be monitored.

The first obtaining module 420 is configured to obtain the first target emotion score corresponding to the first target emotion information.

The second obtaining module 430 is configured to obtain the second extraction time period and query, according to the second extraction time period, the second target emotion information of the face image included in the video to be monitored.

The third obtaining module 440 is configured to obtain the second target emotion score corresponding to the second target emotion information.

The computing module 450 is configured to calculate the difference between the first target emotion score and the second target emotion score and, when the difference exceeds the threshold, query the action information included in the video to be monitored corresponding to the first extraction time period.

The output module 460 is configured to, when the action information contains suspicious action information, query the first identity information corresponding to the suspicious action information and output the first identity information.
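The interaction of the computing and output modules can be sketched as below. This is a hedged illustration only: numeric scores, the use of the absolute difference, and the suspicious-action lookup table `identity_of` are assumptions not fixed by the description.

```python
def monitor_period(first_score, second_score, threshold, actions, identity_of):
    """actions: action information found in the first extraction time period.
    identity_of: maps a suspicious action to the first identity information."""
    # The text only says the difference must exceed the threshold; taking the
    # absolute value is an assumption of this sketch.
    if abs(first_score - second_score) <= threshold:
        return None                        # emotion change too small to report
    for action in actions:
        if action in identity_of:          # suspicious action information found
            return identity_of[action]     # first identity information to output
    return None
```

The point of the design is that the (potentially expensive) action query only runs when the emotion-score difference crosses the threshold.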
In one embodiment, the match monitoring apparatus 400 may further comprise:

an audio obtaining module, configured to extract, from the video to be monitored, the audio to be monitored corresponding to the first time period;

a query module, configured to extract an audio keyword from the audio to be monitored and query whether the keyword is a suspicious keyword; and

a second output module, configured to, when the keyword is a suspicious keyword, query the second identity information corresponding to the suspicious keyword and output the second identity information.
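The audio branch above reduces to a membership check over extracted keywords. A minimal sketch, assuming the keyword extraction has already run; the suspicious-keyword set and the identity mapping are hypothetical examples.

```python
# Hypothetical suspicious-keyword set; the actual set is not given in the text.
SUSPICIOUS_KEYWORDS = {"answer", "signal", "switch"}

def check_keywords(keywords, identity_of):
    """keywords: audio keywords extracted from the audio to be monitored.
    identity_of: maps a suspicious keyword to the second identity information."""
    for keyword in keywords:
        if keyword in SUSPICIOUS_KEYWORDS:
            return identity_of.get(keyword)   # second identity information
    return None
```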
In one embodiment, the match monitoring apparatus may further comprise:

a position extracting module, configured to receive the detection prompt information of the communication signal and, when the detection prompt information indicates that a communication signal exists, extract the current location of the communication signal carried in the detection prompt information;

a comparison module, configured to compare the current location with the preset position; and

an inspection message output module, configured to output inspection information for inspecting the communication apparatus when the offset between the current location and the preset position exceeds the preset value.
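The position-comparison branch can be sketched as below. Plane coordinates and a Euclidean offset are simplifying assumptions of this example; the description does not specify the coordinate system or the distance measure.

```python
import math

def inspect_location(current, preset, max_offset):
    """current, preset: (x, y) positions; max_offset: the preset value.
    Returns inspection information when the offset exceeds the preset value."""
    offset = math.dist(current, preset)   # Euclidean offset (an assumption)
    if offset > max_offset:
        return f"inspect communication apparatus near {current}"
    return None
```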
In one embodiment, the receiving module 410 comprises:

a first recognition unit, configured to extract, from the video to be monitored, the first image to be monitored corresponding to the first extraction time period, and identify the first current emotion information of the face image included in the first image to be monitored;

a first statistics unit, configured to count the first image frame number of the first images to be monitored corresponding to the first current emotion information; and

a first generation unit, configured to obtain the first target emotion information according to the first current emotion information and the first image frame number.
In one embodiment, the first recognition unit comprises:

a first receiving subunit, configured to receive the mood probability that the current emotion corresponding to the first image to be monitored is each standard mood;

a first sorting subunit, configured to sort the mood probabilities and extract, according to the sorted mood probabilities, the standard moods of the number corresponding to the preset quantity;

a first judging subunit, configured to judge whether the emotion types corresponding to the extracted standard moods are the same; and

a first obtaining subunit, configured to, when the emotion types corresponding to the extracted standard moods are different, obtain the emotion type corresponding to the standard mood with the largest mood probability as the first current emotion information.
In one embodiment, the second obtaining module comprises:

a second recognition unit, configured to extract, from the video to be monitored, the second image to be monitored corresponding to the second extraction time period, and identify the second current emotion information of the face image included in the second image to be monitored;

a second statistics unit, configured to count the second image frame number of the second images to be monitored corresponding to the second current emotion information; and

a second generation unit, configured to obtain the second target emotion information according to the second current emotion information and the second image frame number.
In one embodiment, the second recognition unit comprises:

a second receiving subunit, configured to receive the mood probability that the current emotion corresponding to the second image to be monitored is each standard mood;

a second sorting subunit, configured to sort the mood probabilities and extract, according to the sorted mood probabilities, the standard moods of the number corresponding to the preset quantity;

a second judging subunit, configured to judge whether the emotion types corresponding to the extracted standard moods are the same; and

a second obtaining subunit, configured to, when the emotion types corresponding to the extracted standard moods are different, obtain the emotion type corresponding to the standard mood with the largest mood probability as the second current emotion information.
For the specific limitations of the match monitoring apparatus, reference may be made to the limitations of the match monitoring method above, which are not repeated here. Each module in the match monitoring apparatus may be implemented fully or partially through software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in a computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 5. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device is configured to provide computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is configured to store match monitoring data. The network interface of the computer device is configured to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a match monitoring method.
Those skilled in the art will understand that the structure shown in Fig. 5 is only a block diagram of part of the structure relevant to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, a computer device is provided, including a memory and a processor. The memory stores a computer program, and the processor, when executing the computer program, implements the following steps: receiving the video to be monitored and obtaining the first extraction time period; querying, according to the first extraction time period, the first target emotion information of the face image included in the video to be monitored; obtaining the first target emotion score corresponding to the first target emotion information; obtaining the second extraction time period and querying, according to the second extraction time period, the second target emotion information of the face image included in the video to be monitored; obtaining the second target emotion score corresponding to the second target emotion information; calculating the difference between the first target emotion score and the second target emotion score and, when the difference exceeds the threshold, querying the action information included in the video to be monitored corresponding to the first extraction time period; and, when the action information contains suspicious action information, querying the first identity information corresponding to the suspicious action information and outputting the first identity information.

In one embodiment, the processor, when executing the computer program, further implements the following steps: extracting, from the video to be monitored, the audio to be monitored corresponding to the first time period; extracting an audio keyword from the audio to be monitored and querying whether the keyword is a suspicious keyword; and, when the keyword is a suspicious keyword, querying the second identity information corresponding to the suspicious keyword and outputting the second identity information.

In one embodiment, the processor, when executing the computer program, further implements the following steps: receiving the detection prompt information of the communication signal; when the detection prompt information indicates that a communication signal exists, extracting the current location of the communication signal carried in the detection prompt information; comparing the current location with the preset position; and, when the offset between the current location and the preset position exceeds the preset value, outputting the inspection information for inspecting the communication apparatus.
In one embodiment, the processor, when executing the computer program, implements querying, according to the first extraction time period, the first target emotion information of the face image included in the video to be monitored, comprising: extracting, from the video to be monitored, the first image to be monitored corresponding to the first extraction time period, and identifying the first current emotion information of the face image included in the first image to be monitored; counting the first image frame number of the first images to be monitored corresponding to the first current emotion information; and obtaining the first target emotion information according to the first current emotion information and the first image frame number.

In one embodiment, the processor, when executing the computer program, implements identifying the first current emotion information included in the first image to be monitored, comprising: receiving the mood probability that the current emotion corresponding to the first image to be monitored is each standard mood; sorting the mood probabilities and extracting, according to the sorted mood probabilities, the standard moods of the number corresponding to the preset quantity; judging whether the emotion types corresponding to the extracted standard moods are the same; and, when the emotion types corresponding to the extracted standard moods are different, obtaining the emotion type corresponding to the standard mood with the largest mood probability as the first current emotion information.

In one embodiment, the processor, when executing the computer program, implements querying, according to the second extraction time period, the second target emotion information of the face image included in the video to be monitored, comprising: extracting, from the video to be monitored, the second image to be monitored corresponding to the second extraction time period, and identifying the second current emotion information of the face image included in the second image to be monitored; counting the second image frame number of the second images to be monitored corresponding to the second current emotion information; and obtaining the second target emotion information according to the second current emotion information and the second image frame number.

In one embodiment, the processor, when executing the computer program, implements identifying the second current emotion information included in the second image to be monitored, comprising: receiving the mood probability that the current emotion corresponding to the second image to be monitored is each standard mood; sorting the mood probabilities and extracting, according to the sorted mood probabilities, the standard moods of the number corresponding to the preset quantity; judging whether the emotion types corresponding to the extracted standard moods are the same; and, when the emotion types corresponding to the extracted standard moods are different, obtaining the emotion type corresponding to the standard mood with the largest mood probability as the second current emotion information.
In one embodiment, a kind of computer readable storage medium is provided, computer program is stored thereon with, is calculatedMachine program performs the steps of when being executed by processor receives video to be monitored, and obtains the first extraction time section, according to firstExtraction time section inquires the first object emotional information for the facial image for including in video to be monitored.It obtains and first object moodThe corresponding first object mood score of information.The second extraction time section is obtained, view to be monitored is inquired according to the second extraction time sectionSecond target emotion information of the facial image for including in frequency.Obtain the second target emotion corresponding with the second target emotion informationScore.The difference for calculating first object mood score and the second target emotion score then inquires first when difference is more than threshold valueThe action message for including in video to be monitored corresponding to extraction time section.When action message is there are when suspicious action information, thenThe first identity information corresponding with suspicious action information is inquired, and the first identity information is exported.
In one embodiment, when the computer program is executed by the processor, the following steps are also performed: extracting, from the video to be monitored, audio to be monitored corresponding to the first time period; extracting audio keywords from the audio to be monitored, and querying whether each keyword is a suspicious keyword; and when a keyword is a suspicious keyword, querying second identity information corresponding to the suspicious keyword and outputting the second identity information.
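A minimal sketch of the keyword check follows. The suspicious-keyword set, the record layout, and all names are illustrative assumptions; the original does not specify how keywords are represented:

```python
# Assumed example set of suspicious keywords (illustrative only).
SUSPICIOUS_KEYWORDS = {"answer", "pass it", "signal"}


def check_audio_keywords(audio_keywords, identities):
    """For each keyword extracted from the audio to be monitored, query
    whether it is a suspicious keyword; if so, collect the second identity
    information associated with its speaker for output.

    audio_keywords: list of {"text": ..., "speaker_id": ...} records.
    identities: mapping from speaker id to identity information.
    """
    flagged = []
    for kw in audio_keywords:
        if kw["text"] in SUSPICIOUS_KEYWORDS:
            flagged.append(identities[kw["speaker_id"]])
    return flagged


kws = [{"text": "hello", "speaker_id": "s1"}, {"text": "answer", "speaker_id": "s2"}]
print(check_audio_keywords(kws, {"s1": "Alice", "s2": "Bob"}))  # ['Bob']
```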
In one embodiment, when the computer program is executed by the processor, the following steps are also performed: receiving detection prompt information of a communication signal; when the detection prompt information indicates that a communication signal exists, extracting the current location of the communication signal carried in the detection prompt information; comparing the current location with a preset location; and when the offset between the current location and the preset location exceeds a preset value, outputting inspection information for inspecting the communication device.
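The location comparison can be sketched as below. Treating locations as 2-D coordinates and the offset as Euclidean distance is an assumption; the original does not define the coordinate system or the distance measure:

```python
import math


def inspect_signal(current_location, preset_location, preset_value):
    """Compare the current location of a detected communication signal with
    the preset location; when the offset exceeds the preset value, return
    inspection information, otherwise None.

    current_location / preset_location: (x, y) coordinates (assumed).
    """
    dx = current_location[0] - preset_location[0]
    dy = current_location[1] - preset_location[1]
    offset = math.hypot(dx, dy)  # Euclidean offset between the two locations
    if offset > preset_value:
        # Offset exceeds the preset value: output inspection information.
        return {"inspect": True, "offset": offset}
    return None


print(inspect_signal((3.0, 4.0), (0.0, 0.0), 2.0))  # {'inspect': True, 'offset': 5.0}
```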
In one embodiment, when the computer program is executed by the processor, querying, according to the first extraction time section, the first target emotional information of the facial images contained in the video to be monitored is implemented as follows: extracting, from the video to be monitored, a first image to be monitored corresponding to the first extraction time section; identifying first current emotional information of the facial image contained in the first image to be monitored; counting a first number of image frames of the first image to be monitored corresponding to the first current emotional information; and obtaining the first target emotional information according to the first current emotional information and the first number of image frames.
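One plausible reading of "obtaining the target emotional information according to the current emotional information and the number of image frames" is a per-emotion frame count with the most frequent emotion winning. That interpretation is an assumption; the original does not state the aggregation rule. A sketch under that assumption:

```python
from collections import Counter


def aggregate_target_emotion(per_frame_emotions):
    """Assumed aggregation: count the number of image frames for each
    current-emotion label and return the emotion with the most frames as
    the target emotional information.

    per_frame_emotions: one current-emotion label per frame of the image
    to be monitored within the extraction time section.
    """
    # Count the number of image frames per current emotion.
    frame_counts = Counter(per_frame_emotions)
    # The emotion covering the most frames becomes the target information.
    emotion, _ = frame_counts.most_common(1)[0]
    return emotion


print(aggregate_target_emotion(["calm", "nervous", "nervous", "calm", "nervous"]))  # nervous
```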
In one embodiment, when the computer program is executed by the processor, identifying the first current emotional information contained in the first image to be monitored is implemented as follows: receiving the mood probabilities that the current emotion corresponding to the first image to be monitored is each standard emotion; sorting the mood probabilities, and extracting a number of standard emotions corresponding to a preset quantity according to the sorted mood probabilities; judging whether the emotion types corresponding to the extracted standard emotions are identical; and when the emotion types corresponding to the extracted standard emotions differ, obtaining the emotion type corresponding to the standard emotion with the largest mood probability as the first current emotional information.
In one embodiment, when the computer program is executed by the processor, querying, according to the second extraction time section, the second target emotional information of the facial images contained in the video to be monitored is implemented as follows: extracting, from the video to be monitored, a second image to be monitored corresponding to the second extraction time section; identifying second current emotional information of the facial image contained in the second image to be monitored; counting a second number of image frames of the second image to be monitored corresponding to the second current emotional information; and obtaining the second target emotional information according to the second current emotional information and the second number of image frames.
In one embodiment, when the computer program is executed by the processor, identifying the second current emotional information contained in the second image to be monitored is implemented as follows: receiving the mood probabilities that the current emotion corresponding to the second image to be monitored is each standard emotion; sorting the mood probabilities, and extracting a number of standard emotions corresponding to a preset quantity according to the sorted mood probabilities; judging whether the emotion types corresponding to the extracted standard emotions are identical; and when the emotion types corresponding to the extracted standard emotions differ, obtaining the emotion type corresponding to the standard emotion with the largest mood probability as the second current emotional information.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be completed by instructing relevant hardware through a computer program. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of each of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, they should all be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the scope of protection of the present application patent shall be subject to the appended claims.