Input Method Based on Expression Recognition

Technical Field
The present invention relates to input methods, and in particular to an input method based on expression recognition.
Background Art
At present, expression information is input into electronic devices such as mobile phones, personal digital assistants, televisions, and personal computers mainly by the following methods.
The most primitive method is to describe a particular expression with text; for example, the user inputs text such as "smile" or "angry" to describe a particular type of expression. There is also a method of combining punctuation marks, in which the user inputs a series of punctuation marks that together form an emoticon with a pictographic meaning, for example "^_^".
However, the method of directly inputting text that describes an expression suffers from a single input mode and a stiff form of expression. The punctuation-combination method involves cumbersome input operations, and the expressions it can convey are neither rich nor accurate enough.
Compared with the above two methods, the method of using expression pictures performs better in all respects. Specifically, in instant-messaging devices such as mobile phones and computers, and in instant-messaging programs such as QQ and MSN (Microsoft Service Network), an expression-picture selection box is provided; the user clicks a suitable expression picture in the selection box, and that picture is then output to a display terminal or to another output device. However, this method still suffers from cumbersome operation.
In addition, there is a method of inputting expression information by means of facial expression recognition: according to the recognized facial expression of the user, corresponding information is output to a display terminal or another output device. This method is mostly used in fields such as device control, information input, and security authentication. Its present limitation, however, is that facial expression matching and recognition are realized by sampling in advance, so its versatility is low and it cannot be applied to expression recognition and expression-information input for non-specific persons.
Summary of the Invention
In view of the problems of the above input methods, an object of the present invention is to provide an input method for expression information that is easy to operate, has good versatility, and is applicable to non-specific persons.
To achieve the above object, an input method based on expression recognition according to the present invention comprises: an acquisition step of acquiring an image by means of a camera device; a recognition and classification step of recognizing a human face in the image, extracting expression features from it, and obtaining a classification of the expression; a matching step of matching the classification with a corresponding input result; and an input step of performing input using the input result.
Further, the above input method based on expression recognition is characterized in that the recognition and classification step comprises: recognizing the human face and locating its contour using a reference template method, a face rule method, a sample learning method, a skin color model method, or an eigenface method; extracting expression features from the face using principal component analysis (PCA) or the Gabor wavelet method; and obtaining the classification of the expression from the expression features using a template-based matching method, a neural-network-based method, or a support-vector-machine-based method.
Further, the above input method based on expression recognition is characterized in that the acquisition step, the recognition and classification step, and the matching step are carried out repeatedly in sequence at a predetermined time interval, and in that, before the matching step, the method further comprises a judgment step of judging whether the classification is identical to the classification of the expression in the preceding time interval; if they are identical, the subsequent matching step is not carried out, and the method returns directly to the acquisition step in the next time interval.
Further, the above input method based on expression recognition is characterized in that, in the matching step, the classification is matched, among a plurality of predefined input results in a predetermined input format, with the input result corresponding to the classification.
Further, the above input method based on expression recognition is characterized in that the predetermined input format is one of a text format, a picture format, a symbol-combination format, a video format, and an audio format.
Further, the above input method based on expression recognition is characterized in that, in the matching step, one input format is selected from one or more input formats consisting of a text format, a picture format, a symbol-combination format, a video format, and an audio format, and the classification is matched, among a plurality of predefined input results in the selected input format, with the input result corresponding to the classification.
Further, the above input method based on expression recognition is characterized by further comprising: editing processing, including deletion, modification, or addition, of one or more input formats consisting of a text format, a picture format, a symbol-combination format, a video format, and an audio format.
Further, the above input method based on expression recognition is characterized by further comprising: editing processing, including deletion, modification, or addition, of one or more input results consisting of happy, angry, surprised, and fearful.
According to the input method based on expression recognition of the present invention, an input method for expression information can be provided that is easy to operate, has good versatility, and is applicable to non-specific persons.
Brief Description of the Drawings
The above and other objects and features of the present invention will become apparent from the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart showing the steps of the input method based on expression recognition according to an embodiment of the present invention;
Fig. 2 is a table illustrating expression types, input formats, and input results; and
Fig. 3 is a diagram showing an example of an input result.
Description of main reference symbols: S1010-S1080 denote steps.
Detailed Description of the Embodiments
Hereinafter, an embodiment of the present invention will be described in detail with reference to the accompanying drawings.
(Embodiment)
Fig. 1 is a flowchart showing the steps of the input method based on expression recognition according to the present embodiment.
As shown in Fig. 1, the input method based on expression recognition according to the present embodiment can be roughly divided into four modules and subdivided into eight steps, in which step S1010 constitutes an image acquisition module 101, steps S1020-S1040 constitute an expression recognition and classification module 102, steps S1050-S1070 constitute an input result matching module 103, and step S1080 constitutes an input module. The specific steps are as follows.
In step S1010, after the input method has been started, a camera device captures images of the user's face at a fixed time interval Δt, for example acquiring a video signal every 0.1 second. The camera device is mounted on the device operated by the user, or is provided as a separate device.
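The acquisition loop of step S1010 can be sketched as follows, assuming Python with OpenCV and a default webcam; the function name acquire_frames, the camera index, and the use of time.sleep are illustrative assumptions rather than part of the described method.

```python
# Minimal sketch of the acquisition step (S1010), assuming OpenCV and a webcam.
import time
import cv2

def acquire_frames(interval_s=0.1, camera_index=0):
    """Yield one frame from the camera device every interval_s seconds."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()      # grab one video frame
            if ok:
                yield frame             # hand the frame to the recognition module
            time.sleep(interval_s)      # fixed time interval delta-t
    finally:
        cap.release()
```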
Then, in step S1020, the video signal is used to recognize the human face in the image and locate its contour, obtaining information such as the number of faces in the image, their contours, and their primary/secondary relationship. As for face recognition and localization methods, existing techniques include the reference template method, the face rule method, the sample learning method, the skin color model method, and the eigenface method.
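Step S1020 can be illustrated with the following minimal sketch, which uses the Haar cascade face detector bundled with OpenCV as a stand-in for the localization methods listed above; sorting the boxes by area is one simple way to express the primary/secondary relationship between faces.

```python
# Minimal sketch of face detection and localization (S1020) with OpenCV.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is a bounding box (x, y, w, h) approximating the face region.
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Largest face first, as a simple proxy for the primary/secondary relationship.
    return sorted(faces, key=lambda box: box[2] * box[3], reverse=True)
```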
Then, in step S1030, facial expression features are extracted using the face and contour localization results obtained in step S1020. Existing techniques for facial expression feature extraction include principal component analysis (PCA) and the Gabor wavelet method.
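The Gabor wavelet variant of step S1030 can be sketched as follows; the filter size, number of orientations, and pooled statistics are illustrative assumptions, and the function expects a grayscale face crop produced from the detection result of step S1020.

```python
# Minimal sketch of expression feature extraction (S1030) with a Gabor filter bank.
import cv2
import numpy as np

def gabor_features(face_gray, size=(48, 48)):
    face = cv2.resize(face_gray, size)
    features = []
    for theta in np.arange(0, np.pi, np.pi / 4):      # 4 orientations
        # getGaborKernel(ksize, sigma, theta, lambd, gamma, psi)
        kernel = cv2.getGaborKernel((9, 9), 2.0, theta, 8.0, 0.5, 0)
        response = cv2.filter2D(face, cv2.CV_32F, kernel)
        features.extend([response.mean(), response.std()])  # pooled statistics
    return np.array(features)
```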
Then, in step S1040, the expression is classified, using the facial expression features obtained in step S1030, into a category such as happy, angry, surprised, or unrecognized. Existing facial expression classification techniques mainly include the template-based matching method, the neural-network-based method, and the support-vector-machine-based method.
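The support-vector-machine variant of step S1040 might look like the following sketch, assuming scikit-learn and a generic (not user-specific) labelled training set train_X, train_y with labels such as "happy", "angry", and "surprised"; mapping low-confidence predictions to "unrecognized" is an assumption made here to connect with step S1050.

```python
# Minimal sketch of expression classification (S1040) with a support vector machine.
from sklearn.svm import SVC

def train_classifier(train_X, train_y):
    clf = SVC(kernel="rbf", probability=True)  # generic, non-user-specific training data
    clf.fit(train_X, train_y)
    return clf

def classify_expression(clf, features, min_confidence=0.5):
    probs = clf.predict_proba([features])[0]
    best = probs.argmax()
    # Low-confidence predictions are reported as "unrecognized" (handled in S1050).
    return clf.classes_[best] if probs[best] >= min_confidence else "unrecognized"
```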
Then, in step S1050, it is judged whether the classification result obtained in step S1040 is "unrecognized". If it is (step S1050: "Yes"), the image information acquired this time is discarded and the method returns to step S1010.
If the classification result is not "unrecognized" (step S1050: "No"), then in step S1060 it is further judged whether this classification result is identical to the classification result of the expression acquired in the preceding time interval. If they are identical (step S1060: "Yes"), this indicates that the user's expression has not changed within the time interval Δt (Δt = t2 - t1, where t2 is the current image acquisition time and t1 is the previous image acquisition time), so no new input is needed and the method returns to step S1010.
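The judgments of steps S1050 and S1060 reduce to a small amount of logic, sketched here with a hypothetical label value "unrecognized":

```python
# Minimal sketch of the judgment steps (S1050, S1060).
def should_match(current_label, previous_label):
    if current_label == "unrecognized":   # S1050: discard this acquisition
        return False
    if current_label == previous_label:   # S1060: expression unchanged, no re-input
        return False
    return True
```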
If they are not identical (step S1060: "No"), then in step S1070 an input format is selected, and the classification result is matched with the corresponding input result in the selected input format. Here, the input formats are several predefined formats, including text, picture, symbol combination, and so on. Fig. 2 is a table illustrating the expression types, input formats, and matched input results. For example, suppose the expression type is happy and the input format is picture; the matched input result is then the picture shown in the third row, fourth column of the table.
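The matching of step S1070 can be expressed as a lookup table keyed by expression type and input format; the concrete entries below are illustrative placeholders standing in for the table of Fig. 2, not its actual contents.

```python
# Minimal sketch of the matching step (S1070): a (type, format) -> result table.
INPUT_RESULTS = {
    ("happy", "text"): "happy",
    ("happy", "symbol"): "^_^",
    ("happy", "picture"): "smile.png",
    ("angry", "text"): "angry",
    ("angry", "symbol"): ">_<",
    ("angry", "picture"): "angry.png",
}

def match_input_result(expression_type, input_format="picture"):
    return INPUT_RESULTS.get((expression_type, input_format))
```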
Then, in step S1080, the input result obtained in step S1070 is input to the system in which the input method of the present embodiment is used, and is displayed; this system includes the display screen of a mobile phone, personal digital assistant, television, or personal computer. The input result can also be input directly to a network interface and transmitted over the network, or applied to fields such as device control, information input, and security authentication.
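Step S1080 can be sketched as handing the matched result to a display callback or to a network interface; the use of a plain TCP socket and the host/port parameters are purely illustrative assumptions.

```python
# Minimal sketch of the input step (S1080): local display or network transmission.
import socket

def input_result(result, display=print, host=None, port=None):
    display(result)                                    # show on the local display
    if host and port:
        with socket.create_connection((host, port)) as conn:
            conn.sendall(str(result).encode("utf-8"))  # transmit over the network
```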
As described above, according to the input method based on expression recognition of the present embodiment, the user's natural expression is acquired, recognized, and analyzed to obtain an expression classification, without sampling a specific user's expressions in advance and comparing against the sampled expressions. The method can therefore be applied to expression recognition and expression-information input for non-specific users, which improves the versatility of the input method based on expression recognition.
Moreover, as described above, according to the input method based on expression recognition of the present embodiment, the natural facial expression acquired by the camera device is recognized, the expression classification is obtained through the extraction and analysis of expression features, and the classification is matched with an input result for input. This input process requires no manual operation by the user, which improves the efficiency and convenience of expression-information input; inputting the user's expression by means of images also supplements traditional input modes, enriching the forms of input and making operation more engaging.
Moreover, as described above, in the input method based on expression recognition according to the present embodiment, the camera device acquires images at a predetermined time interval, and when the classification result of the expression is judged to be identical to the classification result of the expression in the preceding time interval, the subsequent steps are not carried out and image acquisition by the camera device proceeds directly in the next time interval. Expression recognition and input can therefore be carried out continuously in real time, which increases the efficiency and convenience of the input operation and broadens the range of application of the input method, making it particularly suitable for real-time network transmission such as Internet chat.
In addition, various changes in form and detail may be made to the input method of the present embodiment without departing from the spirit and scope of the present invention as defined by the claims.
For example, in the input method based on expression recognition of the present embodiment, the input format is selected by the user in step S1070; however, the present invention is not limited to this, and the user may also set a default input format in advance. This saves the operation of selecting the input format and can further improve the efficiency and convenience of the input process.
As another example, the user may also perform editing processing on the input formats, such as addition, deletion, or modification, for example adding other input formats such as a video format or an audio format.
As another example, the user may also dynamically edit (for example add, delete, or modify) the input results according to the results of expression recognition and classification. For example, when the classification result "happy" obtained by expression recognition can be further refined into "smile" and "laugh", new input results can be added accordingly, for example the input result corresponding to "laugh" shown in Fig. 3.
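Continuing the lookup-table sketch above, such dynamic editing amounts to adding new entries to the mapping; the file names are hypothetical.

```python
# Minimal sketch of dynamically editing the input results: refine "happy"
# into "smile" and "laugh" by adding new entries (cf. Fig. 3).
INPUT_RESULTS[("smile", "picture")] = "smile.png"
INPUT_RESULTS[("laugh", "picture")] = "laugh.png"
```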
Industrial Applicability
The input method based on expression recognition of the present invention is applicable to the input of expression information in electronic devices such as mobile phones, personal digital assistants, televisions, and personal computers, and in network transmission.