CN102193620A - Input method based on facial expression recognition - Google Patents

Input method based on facial expression recognition

Info

Publication number
CN102193620A
Authority
CN
China
Prior art keywords
input
classification
format
expression
expression recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010118463
Other languages
Chinese (zh)
Other versions
CN102193620B (en)
Inventor
杜乐
谢林
朱昊亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center and Samsung Electronics Co Ltd
Priority to CN 201010118463 (CN102193620B)
Publication of CN102193620A
Application granted
Publication of CN102193620B
Legal status: Active (current)
Anticipated expiration


Abstract

Translated from Chinese:

The invention discloses an input method based on facial expression recognition, comprising: capturing images with a camera; identifying the human face and locating its contour using a reference template method, a face rule method, a sample learning method, a skin color model method, or a feature sub-face (eigenface) method; extracting expression features from the face using principal component analysis or the Gabor wavelet method; obtaining the classification of the expression from the expression features using a template-based matching method, a neural-network-based method, or a support-vector-machine-based method; a matching step of matching the classification with a corresponding input result; and an input step of performing input with the input result.

Description

An input method based on expression recognition
Technical field
The present invention relates to input methods, and particularly to an input method based on facial expression recognition.
Background technology
At present, there are mainly the following methods for inputting expression information into electronic equipment such as mobile phones, personal digital assistants, televisions, and personal computers.
The most primitive method is to describe a particular expression with text; for example, the user inputs the text "smile" or "anger" to describe an expression of a particular type. There is also the method of combining punctuation marks: the user inputs a series of punctuation marks that together form an emoticon with pictographic meaning, for example "^_^".
However, the method of directly inputting text that describes an expression suffers from a single input mode and a stiff form of expression. The punctuation-combination method, in turn, is cumbersome to operate, and the expression types it conveys are not rich or accurate enough.
Compared with the above two methods, the method of using expression pictures performs better in every respect. Specifically, in instant-messaging equipment such as mobile phones and computers, or in communication software systems such as QQ and Microsoft Network (MSN), an expression-picture selection box is provided; the user clicks a suitable expression picture in the selection box, and that picture is then output to the display terminal or to other output devices. However, this method still suffers from complicated operation.
In addition, there are methods that use facial expression recognition to input expression information: according to the recognized facial expression of the user, corresponding information is output to the display terminal or other output devices. Such methods are mostly used in fields such as device control, information input, and security authentication. However, the limitation of the current methods is that facial expression matching and recognition is realized by sampling in advance, so their generality is low: they cannot be applied to expression recognition and expression-information input for unspecified persons.
Summary of the invention
In view of the problems of the above input methods, the object of the present invention is to provide an input method for expression information that is easy to operate, has good generality, and can be applied to unspecified persons.
To achieve this goal, the input method based on expression recognition according to the present invention comprises: an acquisition step of capturing images with a camera; a recognition and classification step of identifying the human face in the images, extracting expression features, and obtaining the classification of the expression; a matching step of matching the classification with a corresponding input result; and an input step of performing input with the input result.
Further, in the above input method based on expression recognition, the recognition and classification step comprises: identifying the human face and locating its contour using a reference template method, a face rule method, a sample learning method, a skin color model method, or a feature sub-face (eigenface) method; extracting expression features from the face using principal component analysis (PCA) or the Gabor wavelet method; and obtaining the classification of the expression from the expression features using a template-based matching method, a neural-network-based method, or a support-vector-machine-based method.
Further, in the above input method based on expression recognition, the acquisition step, the recognition and classification step, and the matching step are performed repeatedly in sequence at a preset time interval. Before the matching step, the method further comprises a determining step: judging whether the classification is identical to the classification of the expression in the previous time interval; if identical, the subsequent matching step is not performed, and the method returns directly to the acquisition step in the next time interval.
Further, in the above input method based on expression recognition, in the matching step, the classification is matched to the input result corresponding to the classification among a plurality of input results predefined in a predetermined input format.
Further, in the above input method based on expression recognition, the predetermined input format is one of a text format, a picture format, a symbol-combination format, a video format, and an audio format.
Further, in the above input method based on expression recognition, in the matching step, one input format is selected from one or more input formats among the text, picture, symbol-combination, video, and audio formats, and the classification is matched to the input result corresponding to the classification among a plurality of input results predefined in the selected input format.
Further, the above input method based on expression recognition further comprises editing processing, including deleting, modifying, or adding, of one or more input formats among the text, picture, symbol-combination, video, and audio formats.
Further, the above input method based on expression recognition further comprises editing processing, including deleting, modifying, or adding, of one or more input results among happiness, anger, surprise, and fear.
According to the input method based on expression recognition of the present invention, an input method for expression information can be provided that is easy to operate, has good generality, and can be applied to unspecified persons.
Description of drawings
The above and other objects and features of the present invention will become apparent from the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart showing the steps of the input method based on expression recognition according to an embodiment of the present invention;
Fig. 2 is a table illustrating expression types, input formats, and input results;
Fig. 3 is a figure showing an example input result.
Description of main reference symbols: S1010-S1080 denote steps.
Embodiment
Hereinafter, an embodiment of the present invention is described in detail with reference to the accompanying drawings.
(Embodiment)
Fig. 1 is a flowchart showing the steps of the input method based on expression recognition according to the present embodiment.
As shown in Fig. 1, the input method based on expression recognition according to the present embodiment can be roughly divided into four modules and subdivided into eight steps: step S1010 constitutes the image acquisition module 101; steps S1020-S1040 constitute the expression recognition and classification module 102; steps S1050-S1070 constitute the input-result matching module 103; and step S1080 constitutes the input module. The concrete steps are as follows.
At step S1010, after the input method is started, the camera captures the user's face at a fixed time interval Δt, for example collecting a video signal every 0.1 seconds. The camera is either installed on the equipment the user is operating or provided as a separate device.
Then, at step S1020, the video signal is used to identify the human face in the image and locate its contour, obtaining information such as the number of faces in the image, their contours, and their primary-secondary relationships. Existing techniques for face recognition and localization include the reference template method, the face rule method, the sample learning method, the skin color model method, and the feature sub-face (eigenface) method.
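As an illustration of one of these localization techniques, the skin color model method can be sketched per pixel in the YCbCr color space. The RGB-to-CbCr conversion below follows ITU-R BT.601; the Cb/Cr skin ranges are commonly cited defaults, not values taken from the patent, and the function names are illustrative:

```python
# Minimal skin-color-model sketch: a pixel is "skin" if its chrominance
# (Cb, Cr) falls inside a typical skin range. Thresholds are illustrative.

def is_skin_pixel(r, g, b):
    """Return True if an RGB pixel falls in a typical skin Cb/Cr range."""
    # ITU-R BT.601 RGB -> Cb/Cr conversion
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173

def skin_mask(image):
    """Binary mask over a row-major list of RGB tuples; connected regions
    of the mask would then be the candidate face areas to contour."""
    return [[is_skin_pixel(*px) for px in row] for row in image]
```

In a full pipeline the connected components of this mask would be filtered by size and aspect ratio before contour localization.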
Then, at step S1030, facial expression features are extracted using the face and contour-localization results recognized in step S1020. Prior-art methods for facial expression feature extraction include principal component analysis (PCA) and the Gabor wavelet method.
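A minimal sketch of PCA-style feature extraction: each face vector is reduced to its coordinate along the top principal component, found here by power iteration. The implementation and all names are illustrative assumptions; the patent does not prescribe one:

```python
# Pure-Python PCA sketch: find the top principal component of a set of
# face vectors by power iteration on their covariance, then project.

def top_principal_component(vectors, iters=200):
    """Return (mean, unit direction of largest variance)."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[j] for v in vectors) / n for j in range(d)]
    centred = [[v[j] - mean[j] for j in range(d)] for v in vectors]
    w = [1.0] * d
    for _ in range(iters):
        # Multiply w by the covariance: C w = (1/n) X^T (X w)
        xw = [sum(c[j] * w[j] for j in range(d)) for c in centred]
        w = [sum(xw[i] * centred[i][j] for i in range(n)) / n for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        w = [x / norm for x in w]
    return mean, w

def project(v, mean, w):
    """1-D expression feature: coordinate of v along the top component."""
    return sum((v[j] - mean[j]) * w[j] for j in range(len(v)))
```

A real system would keep several components, not one, but the projection step is the same.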
Then, at step S1040, using the expression features of the face obtained at step S1030, the expression is classified, for example as happy, angry, surprised, or unrecognized. Prior-art facial expression classification techniques mainly include template-based matching methods, neural-network-based methods, and support-vector-machine-based methods.
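The template-based matching method, the simplest of the three classification techniques named, can be sketched as a nearest-template search with a rejection threshold that yields the "unrecognized" class. The templates and threshold below are made-up placeholders, not values from the patent:

```python
# Nearest-template expression classification with rejection.

def classify_expression(features, templates, threshold=1.0):
    """Return the label of the nearest template, or "unrecognized"
    when no template is within `threshold` (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    label, best = min(((name, dist(features, t)) for name, t in templates.items()),
                      key=lambda kv: kv[1])
    return label if best <= threshold else "unrecognized"
```

The rejection branch is what feeds the "unrecognized" check at step S1050 below.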
Then, at step S1050, it is judged whether the classification result obtained at step S1040 is "unrecognized". If it is (step S1050: "Yes"), the image information collected this time is discarded and the method returns to step S1010.
If the classification result is not "unrecognized" (step S1050: "No"), then at step S1060 it is further judged whether this classification result is consistent with the classification result of the expression collected in the previous time interval. If consistent (step S1060: "Yes"), the user's expression has not changed within the time interval Δt (Δt = t2 - t1, where t2 is the current image acquisition time and t1 is the previous one), so there is no need to re-input, and the method returns to step S1010.
If inconsistent (step S1060: "No"), then at step S1070 an input format is selected, and the classification result is matched to the corresponding input result in the selected input format. The input formats are several predefined kinds, including text, picture, and symbol combination. Fig. 2 is a table illustrating expression types, input formats, and matched input results. For example, if the expression type is happy and the input format is picture, the matched input result is the picture shown in the third column, fourth row of the table.
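The Fig. 2 table can be modelled as a lookup from an (expression type, input format) pair to an input result. The entries below are illustrative placeholders, not the actual contents of Fig. 2:

```python
# Matching table sketch for step S1070: (classification, format) -> result.
# All entries are hypothetical examples.

INPUT_RESULTS = {
    ("happy", "text"): "happy",
    ("happy", "symbol"): "^_^",
    ("happy", "picture"): "happy.png",
    ("angry", "text"): "angry",
    ("angry", "symbol"): ">_<",
    ("angry", "picture"): "angry.png",
}

def match_input_result(classification, input_format):
    """Look up the input result for a classification in the selected
    format; returns None when the table has no entry."""
    return INPUT_RESULTS.get((classification, input_format))
```

The editing processing described later (adding, deleting, or modifying formats and results) amounts to editing this table.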
Then, at step S1080, the input result obtained in step S1070 is input to and displayed by the system in which the input method of the present embodiment runs; such systems include the display screens of mobile phones, personal digital assistants, televisions, and personal computers. The input result can even be input directly to a network interface and sent out over the network, or applied to fields such as device control, information input, and security authentication.
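Putting steps S1010-S1080 together, the control flow of Fig. 1, including the S1050 rejection check and the S1060 change check, can be sketched as follows. The injected callables and function names are illustrative assumptions, not part of the patent:

```python
# Control-flow sketch of Fig. 1: acquire -> classify -> reject/dedup ->
# match -> emit, repeated for a fixed number of frames.

def expression_input_loop(capture, classify, match, emit, frames):
    """Run the acquisition/classification/matching/input pipeline."""
    previous = None
    outputs = []
    for _ in range(frames):
        image = capture()              # S1010: acquire an image
        label = classify(image)        # S1020-S1040: detect face, classify
        if label == "unrecognized":    # S1050: discard image, re-acquire
            continue
        if label == previous:          # S1060: expression unchanged -> skip
            continue
        previous = label
        result = match(label)          # S1070: match to an input result
        emit(result)                   # S1080: input/display the result
        outputs.append(result)
    return outputs
```

In a real implementation `capture` would run on a timer with interval Δt rather than a fixed frame count.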
As described above, according to the input method based on expression recognition of the present embodiment, the user's natural expression is captured, recognized, and analyzed to obtain the expression classification. There is no need to sample a specific user's expressions in advance and compare against the sampled expressions; the method can therefore be applied to expression recognition and expression-information input for unspecified users, improving the generality of the input method.
Furthermore, as described above, the natural expression of the face captured by the camera is recognized, the expression classification is obtained by extracting and analyzing expression features, and the classification is matched to an input result for input. This input process requires no manual operation by the user, so the efficiency and convenience of expression-information input can be improved. Moreover, inputting the user's expression by means of images supplements traditional input modes, enriches the forms of the input method, and makes operation more engaging.
Furthermore, as described above, in the input method based on expression recognition according to the present embodiment, the camera captures images at a preset time interval, and when the classification result of the expression is judged to be identical to the classification result of the previous time interval, the subsequent steps are not performed and image acquisition simply continues in the next time interval. Expression recognition and input can therefore be carried out continuously in real time, which increases the efficiency and convenience of the input operation and enlarges the usable range of the input method; it is especially suitable for real-time network transmission such as Internet chat.
In addition, various changes in form and detail can be made to the input method in the present embodiment without departing from the spirit and scope of the present invention as defined by the claims.
For example, in the input method based on expression recognition of the present embodiment, the input format is selected by the user at step S1070, but the present invention is not limited to this: the user may also set a default input format in advance. This saves the operation of selecting an input format and can further improve the efficiency and convenience of the input process.
As another example, the user can also perform editing processing on the input formats, such as adding, deleting, or modifying them; for example, other input formats such as a video format or an audio format can be added.
As another example, the user can also dynamically edit (for example, add, delete, or modify) the input results according to the recognition and classification results. For example, when the classification result "happiness" obtained by expression recognition can be further refined into "smile" and "laugh", a new input result can be added correspondingly, for example the input result corresponding to "laugh" shown in Fig. 3.
Industrial applicability
The input method based on expression recognition of the present invention is suitable for inputting expression information in electronic equipment such as mobile phones, personal digital assistants, televisions, and personal computers, and in Internet transmission.

Claims (8)

1. An input method based on expression recognition, comprising:
an acquisition step of capturing images with a camera;
a recognition and classification step of identifying the human face in the images, extracting expression features, and obtaining the classification of the expression;
a matching step of matching the classification with a corresponding input result; and
an input step of performing input with the input result.
2. The input method based on expression recognition of claim 1, wherein the recognition and classification step comprises:
identifying the human face and locating its contour using a reference template method, a face rule method, a sample learning method, a skin color model method, or a feature sub-face (eigenface) method;
extracting expression features from the face using principal component analysis (PCA) or the Gabor wavelet method; and
obtaining the classification of the expression from the expression features using a template-based matching method, a neural-network-based method, or a support-vector-machine-based method.
3. The input method based on expression recognition of claim 1, wherein the acquisition step, the recognition and classification step, and the matching step are performed repeatedly in sequence at a preset time interval, and wherein the method further comprises, before the matching step:
a determining step of judging whether the classification is identical to the classification of the expression in the previous time interval; if identical, the subsequent matching step is not performed and the method returns directly to the acquisition step in the next time interval.
4. The input method based on expression recognition of claim 1, wherein, in the matching step, the classification is matched to the input result corresponding to the classification among a plurality of input results predefined in a predetermined input format.
5. The input method based on expression recognition of claim 4, wherein the predetermined input format is one of a text format, a picture format, a symbol-combination format, a video format, and an audio format.
6. The input method based on expression recognition of claim 1, wherein, in the matching step, one input format is selected from one or more input formats among a text format, a picture format, a symbol-combination format, a video format, and an audio format, and the classification is matched to the input result corresponding to the classification among a plurality of input results predefined in the selected input format.
7. The input method based on expression recognition of claim 5 or 6, further comprising:
editing processing, including deleting, modifying, or adding, of one or more input formats among the text, picture, symbol-combination, video, and audio formats.
8. The input method based on expression recognition of claim 1, further comprising:
editing processing, including deleting, modifying, or adding, of one or more input results among happiness, anger, surprise, and fear.
CN 201010118463 | Priority 2010-03-02 | Filed 2010-03-02 | Input method based on facial expression recognition | Active | CN102193620B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN 201010118463 (CN102193620B) | 2010-03-02 | 2010-03-02 | Input method based on facial expression recognition

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN 201010118463 (CN102193620B) | 2010-03-02 | 2010-03-02 | Input method based on facial expression recognition

Publications (2)

Publication Number | Publication Date
CN102193620A (en) | 2011-09-21
CN102193620B (en) | 2013-01-23

Family

ID=44601804

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN 201010118463 (CN102193620B, Active) | Input method based on facial expression recognition | 2010-03-02 | 2010-03-02

Country Status (1)

Country | Link
CN (1) | CN102193620B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102880388A (en)* | 2012-09-06 | 2013-01-16 | 北京天宇朗通通信设备股份有限公司 | Music processing method, music processing device and mobile terminal
CN103257736A (en)* | 2012-02-21 | 2013-08-21 | 纬创资通股份有限公司 | User emotion detection method and handwriting input electronic device applying same
CN103399630A (en)* | 2013-07-05 | 2013-11-20 | 北京百纳威尔科技有限公司 | Method and device for recording facial expressions
CN103474081A (en)* | 2012-06-05 | 2013-12-25 | 广达电脑股份有限公司 | Character display method and processing device and computer program product
CN103488293A (en)* | 2013-09-12 | 2014-01-01 | 北京航空航天大学 | Man-machine motion interaction system and method based on expression recognition
CN103514389A (en)* | 2012-06-28 | 2014-01-15 | 华为技术有限公司 | Equipment authentication method and device
CN103677226A (en)* | 2012-09-04 | 2014-03-26 | 北方工业大学 | Expression recognition input method
CN103809759A (en)* | 2014-03-05 | 2014-05-21 | 李志英 | Face input method
CN104063683A (en)* | 2014-06-06 | 2014-09-24 | 北京搜狗科技发展有限公司 | Expression input method and device based on face identification
CN104244101A (en)* | 2013-06-21 | 2014-12-24 | 三星电子(中国)研发中心 | Method and device for commenting multimedia content
CN104284131A (en)* | 2014-10-29 | 2015-01-14 | 四川智诚天逸科技有限公司 | Video communication device adjusting image
CN104333688A (en)* | 2013-12-03 | 2015-02-04 | 广州三星通信技术研究有限公司 | Equipment and method for generating emoticon based on shot image
CN104423547A (en)* | 2013-08-28 | 2015-03-18 | 联想(北京)有限公司 | An input method and electronic device
CN104933113A (en)* | 2014-06-06 | 2015-09-23 | 北京搜狗科技发展有限公司 | Expression input method and device based on semantic understanding
CN103488293B (en)* | 2013-09-12 | 2016-11-30 | 北京航空航天大学 | Human-machine emotion interaction system and method based on expression recognition
CN108216254A (en)* | 2018-01-10 | 2018-06-29 | 山东大学 | Road rage emotion recognition method based on fusion of facial images and pulse information

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1606347A (en)* | 2004-11-15 | 2005-04-13 | 北京中星微电子有限公司 | A video communication method
US20070071288A1 (en)* | 2005-09-29 | 2007-03-29 | Quen-Zong Wu | Facial features based human face recognition method


Cited By (22)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103257736A (en)* | 2012-02-21 | 2013-08-21 | 纬创资通股份有限公司 | User emotion detection method and handwriting input electronic device applying same
CN103257736B (en)* | 2012-02-21 | 2016-02-24 | 纬创资通股份有限公司 | User emotion detection method and handwriting input electronic device applying same
CN103474081A (en)* | 2012-06-05 | 2013-12-25 | 广达电脑股份有限公司 | Character display method and processing device and computer program product
CN103514389A (en)* | 2012-06-28 | 2014-01-15 | 华为技术有限公司 | Equipment authentication method and device
CN103677226B (en)* | 2012-09-04 | 2016-08-03 | 北方工业大学 | Expression recognition input method
CN103677226A (en)* | 2012-09-04 | 2014-03-26 | 北方工业大学 | Expression recognition input method
CN102880388A (en)* | 2012-09-06 | 2013-01-16 | 北京天宇朗通通信设备股份有限公司 | Music processing method, music processing device and mobile terminal
CN104244101A (en)* | 2013-06-21 | 2014-12-24 | 三星电子(中国)研发中心 | Method and device for commenting multimedia content
CN103399630A (en)* | 2013-07-05 | 2013-11-20 | 北京百纳威尔科技有限公司 | Method and device for recording facial expressions
CN104423547B (en)* | 2013-08-28 | 2018-04-27 | 联想(北京)有限公司 | An input method and electronic device
CN104423547A (en)* | 2013-08-28 | 2015-03-18 | 联想(北京)有限公司 | An input method and electronic device
CN103488293B (en)* | 2013-09-12 | 2016-11-30 | 北京航空航天大学 | Human-machine emotion interaction system and method based on expression recognition
CN103488293A (en)* | 2013-09-12 | 2014-01-01 | 北京航空航天大学 | Man-machine motion interaction system and method based on expression recognition
CN104333688B (en)* | 2013-12-03 | 2018-07-10 | 广州三星通信技术研究有限公司 | Device and method for generating emoticons based on captured images
CN104333688A (en)* | 2013-12-03 | 2015-02-04 | 广州三星通信技术研究有限公司 | Equipment and method for generating emoticon based on shot image
CN103809759A (en)* | 2014-03-05 | 2014-05-21 | 李志英 | Face input method
CN104933113A (en)* | 2014-06-06 | 2015-09-23 | 北京搜狗科技发展有限公司 | Expression input method and device based on semantic understanding
CN104063683B (en)* | 2014-06-06 | 2017-05-17 | 北京搜狗科技发展有限公司 | Expression input method and device based on face identification
CN104063683A (en)* | 2014-06-06 | 2014-09-24 | 北京搜狗科技发展有限公司 | Expression input method and device based on face identification
CN104933113B (en)* | 2014-06-06 | 2019-08-02 | 北京搜狗科技发展有限公司 | Expression input method and device based on semantic understanding
CN104284131A (en)* | 2014-10-29 | 2015-01-14 | 四川智诚天逸科技有限公司 | Video communication device adjusting image
CN108216254A (en)* | 2018-01-10 | 2018-06-29 | 山东大学 | Road rage emotion recognition method based on fusion of facial images and pulse information

Also Published As

Publication number | Publication date
CN102193620B (en) | 2013-01-23

Similar Documents

Publication | Title
CN102193620A (en) | Input method based on facial expression recognition
CN102890776B (en) | Method for transferring emoticon explanations by facial expression
US20180137119A1 (en) | Image management method and apparatus thereof
CN109874053A (en) | Short video recommendation method based on video content understanding and user dynamic interest
JP2022088304A (en) | Method, device, electronic equipment, medium, and computer program for processing video
CN104063683A (en) | Expression input method and device based on face identification
US10650813B2 (en) | Analysis of content written on a board
CN106294774A (en) | User individual data processing method and device based on dialogue service
WO2024046189A1 (en) | Text generation method and apparatus
CN110557678A (en) | Video processing method, device and equipment
CN102984050A (en) | Method, client and system for searching voices in instant messaging
CN104598127B (en) | Method and device for inserting emoticons in dialogue interface
CN106612465A (en) | Live interaction method and device
CN114283422B (en) | Handwriting font generation method and device, electronic equipment and storage medium
CN107992937B (en) | Unstructured data judgment method and device based on deep learning
CN113094512A (en) | Fault analysis system and method in industrial production and manufacturing
CN111882625A (en) | Method and device for generating dynamic graph, electronic equipment and storage medium
CN117219066A (en) | Digital robot for intelligent language intercommunication and operation method thereof
CN120014648A (en) | Video resource representation method, coding model training method and device
CN114529635B (en) | Image generation method, device, storage medium and equipment
CN114880512A (en) | Method, device and server for processing emoticon pictures
CN118260380B (en) | Processing method and system for multimedia scene interaction data
CN110381367B (en) | Video processing method, video processing equipment and computer readable storage medium
WO2024193538A1 (en) | Video data processing method and apparatus, device, and readable storage medium
CN118154143A (en) | Intelligent integrated recruitment system capable of online interviews

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
CP02 | Change in the address of a patent holder

Address after: 5-12/F, Building 6, 57 Andemen Street, Yuhuatai District, Nanjing City, Jiangsu Province

Patentee after:Samsung Electronics (China) R&D Center

Patentee after:Samsung Electronics Co.,Ltd.

Address before: 8th Floor, Huijie Square, No. 268 Zhongshan Road, Nanjing City, Jiangsu Province, 210008

Patentee before:Samsung Electronics (China) R&D Center

Patentee before:Samsung Electronics Co.,Ltd.

CP02 | Change in the address of a patent holder
