CN102479388A - Expression interaction method based on face tracking and analysis - Google Patents

Expression interaction method based on face tracking and analysis
Download PDF

Info

Publication number
CN102479388A
Authority
CN
China
Prior art keywords
expression
face
people
active appearance
appearance models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010105670942A
Other languages
Chinese (zh)
Inventor
姚健
曾祥永
杜志军
王阳生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Interjoy Technology Ltd
Original Assignee
Beijing Interjoy Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Interjoy Technology Ltd
Priority to CN2010105670942A
Publication of CN102479388A
Status: Pending

Abstract

An expression interaction method based on face tracking and analysis belongs to the fields of graphics/image processing and computer vision. The method comprises: collecting facial expression images through a camera; analyzing and processing the captured face images in real time with the proposed face tracking and expression analysis techniques to realize face tracking and expression parameter extraction; and then using the extracted expression parameters to drive a target three-dimensional face model to make the same expression animation. The invention is automatic, robust, and highly interactive, and is suitable for application in fields such as film production, three-dimensional games, and interactive multimedia.

Description

Expression interaction method based on face tracking and analysis
Technical field
The present invention relates to the fields of graphics/image processing and computer vision, and in particular to an expression interaction method based on face tracking.
Background technology
Expression interaction refers to the technique of capturing a real person's facial expressions in real time and using them to drive a virtual character to make the same expressions. It has wide applications in settings such as human-computer interaction, virtual games, virtual news anchors, and 3D film and television production; for example, the 3D film "Avatar" used expression interaction techniques to produce the expression animations of the Na'vi characters. The expression interaction based on facial expression tracking and analysis proposed by the present invention uses a camera to acquire images in real time, uses a face tracking algorithm to track the face in the video and analyze the expression parameters of the face in each frame, and then uses the extracted expression parameters to drive a three-dimensional face model so that it generates the same expression as the performer. This expression interaction method mainly involves two technologies: first, face tracking and expression analysis in computer vision; second, expression-driven animation of three-dimensional face models in computer graphics. Face tracking is the core of expression interaction, and its accuracy strongly affects the subsequent synthesis of the three-dimensional model's expression. The currently popular face tracking technique is face localization based on active appearance models (AAM). The introduction of the inverse compositional algorithm allows the AAM to converge rapidly during the face localization search and obtain a locally optimal solution. Because the AAM solves for a local optimum, the setting of the AAM initial value has a large influence on the localization result. Likewise, the form of the AAM energy function has a great influence on the parameter solution: a good energy function guides the iteration toward the true parameter values, whereas a poor one can trap the iteration in a local minimum and stop the search, yielding a result that deviates from the true values.
The face tracking results serve the expression analysis; only after the expression and its intensity in the current frame have been analyzed can the subsequent expression synthesis of the three-dimensional model be guided. Numerous methods have been proposed in the field of facial expression recognition, such as exploiting the differences in facial feature point motion under different expressions to recognize and classify expressions, or combining shape and facial texture features to recognize expressions. These methods are suitable for recognition on single images, but are less suitable for continuous, dynamic expression recognition and expression intensity analysis. Expression interaction requires extracting the expression type and the corresponding intensity information of the face in every frame. Chai et al. proposed extracting expression intensity and type from the changes in distance between feature points and using them to drive a three-dimensional model to make the corresponding expression; however, this method does not generalize, and the parameters must be re-tuned whenever the performer changes. For expression synthesis on three-dimensional models, the main approaches are mesh deformation controlled by feature points and linear interpolation. The algorithmic complexity and computational cost of feature-point-controlled mesh deformation do not meet the demands of expression interaction, whereas linear interpolation has the advantages of low computational cost and realistic synthesis results. The present invention therefore adopts linear interpolation to synthesize the three-dimensional expressions.
Summary of the invention
The invention provides an expression interaction method based on face tracking and analysis: facial expression images of a person's face are captured with a camera; the proposed face tracking and expression analysis techniques process the captured face images in real time to achieve face tracking and expression parameter extraction; and the extracted expression parameters then drive a target three-dimensional face model to make the same expression animation. A schematic diagram of the invention is shown in Figure 1.
To achieve these goals, the present invention proposes the following technical scheme:
(1) Design a three-dimensional model of a character, and make several typical expression models of this character (this step is completed offline);
(2) Train three active appearance models for different side-face angles (this step is completed offline);
(3) If a face was present in the previous frame, use the previous frame's parameters as the initial values of the active appearance model; if tracking was lost or a face enters the picture for the first time, detect the face with the Adaboost algorithm and initialize the active appearance model with the detected face size and position;
(4) Minimize the energy function to obtain the optimal active appearance model parameters and expression parameters of the current frame, and detect the state of the eyes;
(5) Use the obtained expression parameters and eye state to drive the prepared three-dimensional model so that it generates the same expression as the performer;
(6) Update the camera data and begin the expression analysis and expression driving of the next frame.
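For orientation, the following Python sketch shows how the per-frame loop of steps (3) to (6) fits together. It is an illustrative reading of the scheme, not the patent's implementation: aam_bank, model_3d, eye_classifier, init_from_box, fit, predict, and animate are hypothetical placeholders for the components the patent describes; only the OpenCV calls (VideoCapture, CascadeClassifier, detectMultiScale) are real library APIs.

import cv2

def run_expression_interaction(model_3d, aam_bank, eye_classifier):
    cap = cv2.VideoCapture(0)           # camera capture
    prev_params = None                  # AAM parameters from the previous frame
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    while True:
        ok, frame = cap.read()          # (6) update the camera data
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_params is None:         # (3) tracking lost or first entry
            faces = face_detector.detectMultiScale(gray, 1.1, 5)
            if len(faces) == 0:
                continue
            prev_params = aam_bank.init_from_box(faces[0])    # hypothetical helper
        # (4) minimize the energy function, starting from the previous parameters
        params, expr_params = aam_bank.fit(gray, prev_params) # hypothetical
        eye_state = eye_classifier.predict(gray, params)      # hypothetical
        # (5) drive the 3D model with the expression parameters and eye state
        model_3d.animate(expr_params, eye_state)              # hypothetical
        prev_params = params
    cap.release()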
The advantages of the present invention are:
1. High versatility: the user can swap in any desired three-dimensional character model, and the same performer can drive different face models.
2. Robust, fast face tracking: several typical facial expressions can be captured accurately in real time; on a Pentium 4 2 GHz computer, a processing speed of 25 frames/s is achieved.
3. No manual interaction is required, making the method suitable for the general public.
Description of drawings
Figure 1 is a schematic diagram of the expression interaction of the present invention; in the figure, 1 is the view captured by the camera and 2 is the corresponding expression of the three-dimensional character model.
Figure 2 is a schematic diagram of the model design of the present invention (not all models are shown); from top to bottom and left to right, the expressions correspond to the following numbers in Table 1: 1, 7, 8, 2, 5, 6, 3, 4.
Embodiment
To combine accompanying drawing that the present invention is specified below, and be to be noted that described embodiment only is intended to be convenient to understanding of the present invention, and it is not played any qualification effect.The present invention describes through following embodiment:
The setting of energy function formula and expression analysis, eyes are opened and are closed state-detection, three-dimensional model driving in making three-dimensional model, training active appearance models, face tracking initialization, the face tracking, and the practical implementation process is following:
1. making three-dimensional model
Three-dimensional model making belongs to the offline preprocessing stage. Its purpose is to design a three-dimensional face model and the corresponding models under 14 expression states; part of the models are shown in Figure 2. During expression interaction, this model reproduces the performer's expression in real time. Based on the characteristics of facial motion, the present invention defines 14 basic expressions, and the models are made according to the following table.
Table 1. Model design notes

No. | Expression state | Modeling notes
1 | Initial model | No expression; mouth slightly closed; eyes open
2 | Mouth wide open | The model's mouth opens wide
3 | Pout | The mouth is pushed forward
4 | Grin | The mouth opens in a straight line
5 | Smile | The corners of the mouth turn up
6 | Sad | The corners of the mouth turn down
7 | Left eye closed | The model's left eye closes; nothing else moves
8 | Right eye closed | The model's right eye closes; nothing else moves
9 | Left anger | The left eyebrow takes an angry shape
10 | Right anger | The right eyebrow takes an angry shape
11 | Left stare | The left eye opens wide
12 | Right stare | The right eye opens wide
13 | Left brow raise | The left eyebrow is raised
14 | Right brow raise | The right eyebrow is raised
2. training active appearance models
Because the performer's head pose cannot be controlled during tracking, the present invention proposes a multi-angle face tracking method to strengthen the robustness of face tracking. Three active appearance models are trained for different side-face angles (the specific angle ranges appear only as formula images in the source document). During tracking, if the side-face angle of the face exceeds a certain number of degrees, the active appearance model for the new angle range is loaded, improving the accuracy of face tracking. For each active appearance model, the training process is as follows:
(21) Offline, collect face samples at this angle and annotate the face shape of each sample;
(22) Normalize the shape and texture of the sample faces. The texture consists of three parts: the shape-free grayscale texture map, the x-direction gradient map, and the y-direction gradient map. The gradient maps are introduced to strengthen robustness against lighting interference.
(23) Apply PCA to the normalized shapes and textures to obtain the shape and texture models of the active appearance model, $S = S_0 + \sum_i p_i S_i$ and $A = A_0 + \sum_i \lambda_i A_i$, where $S$ represents the face shape and $p$ is the shape parameter, and $A$ is the three-channel texture image and $\lambda$ is the texture parameter. In addition, compute the Hessian matrix required by the iteration.
3. face tracking initialization
When a person enters the picture for the first time, or when tracking is lost, face tracking must be initialized automatically: the position and size of the face are detected, and this information is used to initialize the parameters of the active appearance model. The present invention uses Adaboost for automatic face detection. Adaboost (adaptive boosting) is a commonly used statistical learning algorithm that has been successfully applied to face detection and face classification. Adaboost obtains the final strong classifier as a cascade of weak classifiers; the first few weak classifiers quickly reject large numbers of non-face image regions, so that the subsequent classifiers can concentrate on distinguishing face-like regions.
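The following sketch illustrates cascade-based face detection of the kind described above, using OpenCV's stock Haar cascade (an Adaboost-trained cascade of weak classifiers). It is a stand-in for the patent's detector, not its actual implementation; "frame.png" is a hypothetical input frame.

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
image = cv2.imread("frame.png")                      # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    # (x, y, w, h) gives the position and size used to initialize the AAM
    print(f"face at ({x}, {y}), size {w}x{h}")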
4. energy function is set
The face tracking process is precisely a process of minimizing the energy function value of the active appearance model, and the form of the energy function has a large influence on the precision of face tracking. To improve tracking accuracy, the present invention proposes a new energy function form composed of three parts:
(41) Global texture difference constraint:
$E_1(p) = \|A_0 - I(W(p))\|^2 = \sum_{x \in S_0} [A_0(x) - I(W(x;p))]^2$
This energy term is consistent with the original AAM algorithm; the difference is that $A$ is a three-channel texture image. Its physical meaning is that, by continually optimizing the parameter $p$, the residual between the shape-free texture image determined by the shape and the mean texture image is minimized.
(42) Local texture difference constraint:
$E_2(p) = \sum_{j \in \Omega_t} \sum_{x \in R_j} [A_{t-1}(x) - I_t(W(x;p))]^2$
Here $\Omega_t$ is the set of detected facial feature points, $R_j$ is a 9 x 9 block centered on feature point $j$, and $A_{t-1}$ is the texture image of the face in the previous frame. The physical meaning of this term is that, by optimizing the parameter $p$, the residual between the texture of the sub-blocks determined by the current feature points and the texture of the corresponding sub-blocks around the previous frame's feature points is minimized. This term ensures consistency between consecutive frames during tracking and avoids parameter jumps.
(43) Skin color region constraint:
$E_3(p) = \sum_{x \in S_0} [I_D(W(x;p))]^2$
During iteration, the face shape determined by the parameter $p$ may drift away from the face region, so this term is introduced. $I_D$ is a grayscale image whose value is 0 inside the face region and 255 outside it. The face region is determined by a skin color model: a skin color model is trained from the face region detected in the first frame and is updated during subsequent tracking. The physical meaning of this term is to ensure that the iteration stays within the valid face region and does not stray too far from the true values.
During parameter optimization, the energy function adopted by the present invention is the combination of the above three terms:
$E(p) = E_1(p) + \omega_2 E_2(p) + \omega_3 E_3(p)$
where $\omega_2$ and $\omega_3$ are weight coefficients that adjust the influence of each term. The optimal shape parameter $p$ is solved with the inverse compositional algorithm, and the face shape is then obtained from the shape model expression.
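For concreteness, a minimal sketch of evaluating this combined energy is given below. The callables warp_sample, prev_patches, cur_patches, and skin_dist are hypothetical placeholders for the warping, patch-sampling, and skin-mask operations; a real tracker would minimize E(p) with the inverse compositional algorithm rather than by naive evaluation.

import numpy as np

def energy(p, A0, warp_sample, prev_patches, cur_patches, skin_dist,
           w2=1.0, w3=0.1):
    # E1: residual between the mean texture A0 and the image warped by W(x; p)
    e1 = np.sum((A0 - warp_sample(p)) ** 2)
    # E2: residual of 9x9 patches around each feature point vs. the previous frame
    e2 = sum(np.sum((a_prev - a_cur) ** 2)
             for a_prev, a_cur in zip(prev_patches(p), cur_patches(p)))
    # E3: penalty for shape pixels that fall outside the skin-color face region
    e3 = np.sum(skin_dist(p) ** 2)
    return e1 + w2 * e2 + w3 * e3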
5. expression is analyzed
To analyze the facial expression in each frame, the present invention analyzes several typical expression actions after the face shape has been obtained. The present invention introduces the CANDIDE three-dimensional face model and modifies it to match the expressions of Table 1. The CANDIDE shape takes the following form:
$g(\sigma, \alpha) = \bar{g} + S\sigma + A\alpha$
where $\bar{g}$ is the three-dimensional mean face shape, $S$ is the shape variation component, and $A$ is the expression motion component; $\bar{g} + S\sigma$ describes the face shape of a specific person, and $A\alpha$ represents that person's expression action. At the first frame of face tracking, it is assumed that there is no expression action, and the person's face shape $\bar{g} + S\sigma$ is thereby determined; in the subsequent tracking, when there are expression actions, the face shape component remains unchanged. Expression parameter extraction then amounts to minimizing the following energy function:
$E = \|S'(p) - P(Q(g'(\sigma, \alpha)))\|^2$
where the prime denotes selected feature points of the shape; $S(p)$ is the face shape obtained by tracking; $Q(\cdot)$ is the rotation of the three-dimensional shape model, i.e. the head pose; and $P(\cdot)$ projects the three-dimensional shape onto the image plane. The physical meaning of this energy function is to optimize the parameters $\sigma$ and $\alpha$ so that the three-dimensional shape model, after rotation and projection, is consistent with the tracked shape. $\sigma$ is determined at the first frame and remains unchanged during tracking; only the action parameter $\alpha$ varies, which yields the expression action parameters of each frame.
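The sketch below shows one way to extract the expression parameters alpha by minimizing this energy with a generic least-squares solver (scipy.optimize.least_squares). The inputs g_bar, A_basis, sigma_shape, R, and tracked_2d are hypothetical stand-ins, sigma is assumed to have been fixed at the first frame, and an orthographic projection replaces whatever P the patent intends.

import numpy as np
from scipy.optimize import least_squares

def residuals(alpha, g_bar, A_basis, sigma_shape, R, tracked_2d):
    pts3d = (g_bar + sigma_shape + A_basis @ alpha).reshape(-1, 3)
    rotated = pts3d @ R.T                  # Q(.): head-pose rotation
    projected = rotated[:, :2]             # P(.): orthographic projection
    return (projected - tracked_2d).ravel()

rng = np.random.default_rng(0)
n_pts, n_actions = 20, 6
g_bar = rng.standard_normal(n_pts * 3)
A_basis = rng.standard_normal((n_pts * 3, n_actions))
sigma_shape = np.zeros(n_pts * 3)          # S*sigma, fixed at the first frame
R = np.eye(3)                              # head pose from tracking
tracked_2d = (g_bar.reshape(-1, 3) @ R.T)[:, :2]   # synthetic "tracked" points
fit = least_squares(residuals, np.zeros(n_actions),
                    args=(g_bar, A_basis, sigma_shape, R, tracked_2d))
alpha = fit.x                              # per-frame expression action parameters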
6. eye state identification
Because the resolution of the images captured by the camera is limited, the global AAM yields a good overall face shape localization but limited localization accuracy around the eyes, so the present invention processes the eyes further. Eye processing comprises fine localization of the eye shape and detection of the open/closed state of the eyes. Fine localization of the eyes proceeds as follows:
(1) Train a local active appearance model of the eye region; the process is as described in Section 2;
(2) Initialize the local AAM (trained offline) from the face localization result of the global AAM;
(3) Iterate the local AAM to convergence to obtain the fine localization of the eyes.
For detecting the open/closed state of the eyes, the present invention adopts LBP histogram features and a linear SVM classifier. The specific implementation is as follows:
(1) Collect a large number of open-eye and closed-eye samples, and compute each sample's LBP histogram as the classification feature (completed offline);
(2) Train a linear SVM classifier for eye open/closed state detection;
(3) On the basis of the fine eye localization, compute the LBP histogram of the eye region image and use the classifier to detect the state of the eyes.
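A minimal sketch of this LBP-histogram plus linear-SVM classifier follows, using scikit-image and scikit-learn as stand-ins for the patent's implementation. The training data here is synthetic; real use requires labeled eye-region crops.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

def lbp_histogram(eye_gray, P=8, R=1):
    """Uniform LBP histogram of a grayscale eye-region image."""
    lbp = local_binary_pattern(eye_gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 24, 24))   # synthetic eye crops
labels = rng.integers(0, 2, size=40)                # 0 = closed, 1 = open
X = np.array([lbp_histogram(p) for p in patches])
clf = LinearSVC().fit(X, labels)                    # offline training step
state = clf.predict(X[:1])                          # per-frame detection step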
7. three-dimensional model drives
In the present invention, the method for linear interpolation is adopted in the driving of three-dimensional model.In conjunction with the model of making and the expression action parameter of extraction, can confirm that the displacement on each summit of model is: D under a certain expression typeii(Vi-V0); V whereiniBe i the apex coordinate under the expression type, V0Be the apex coordinate under amimia, αiBe the intensity of i expression.The model that then final band is expressed one's feelings is: the quantity of
Figure BSA00000368709500051
expression type i is consistent with table one.Utilize attitude parameter (being rotation matrix Q ()) to come rotating model at last, make that the head pose of model is consistent with performing artist's attitude.
The above description is provided for implementing the present invention and its embodiments, and the scope of the present invention should therefore not be limited by it. Those skilled in the art should understand that any modification or partial replacement that does not depart from the scope of the present invention falls within the scope defined by the claims of the present invention.

Claims (5)

1. An expression interaction method based on face tracking and analysis, characterized in that it comprises the following steps:
Step 1: design a three-dimensional model of a character, and make several typical expression models of this character (this step is completed offline);
Step 2: train three active appearance models for different side-face angles (this step is completed offline);
Step 3: if a face was present in the previous frame, use the previous frame's parameters as the initial values of the active appearance model; if tracking was lost or a face enters the picture for the first time, detect the face with the Adaboost algorithm and initialize the active appearance model with the detected face size and position;
Step 4: minimize the energy function to obtain the optimal active appearance model parameters and expression parameters of the current frame, and detect the state of the eyes;
Step 5: use the obtained expression parameters and eye state to drive the prepared three-dimensional model so that it generates the same expression as the performer;
Step 6: update the camera data and begin the expression analysis and expression driving of the next frame.
2. The expression interaction method based on face tracking and analysis according to claim 1, characterized in that the training of the active appearance models in step 2 proceeds as follows:
Step 21: collect facial expression images at three different side-face angles, and mark the face shape of each image;
Step 22: for the sample set at each side-face angle, normalize the face shapes and face texture images of the samples, where each texture image consists of three channels: the grayscale image, the x-direction gradient map, and the y-direction gradient map;
Step 23: train the active appearance models for the three angles by PCA.
3. The expression interaction method based on face tracking and analysis according to claim 1, characterized in that the energy function and expression parameter acquisition in step 4 proceed as follows:
Step 31: set the energy function of the active appearance model, comprising a constraint minimizing the difference between the face texture and the mean texture, a consecutive-frame consistency constraint based on local textures around feature points, and a face-region constraint based on a skin color model (this step is completed offline);
Step 32: make a modified CANDIDE three-dimensional face shape mesh and the corresponding shape meshes under several typical expressions (this step is completed offline);
Step 33: minimize the energy function of the active appearance model with the inverse compositional algorithm to obtain the face shape of the current frame;
Step 34: use the obtained face shape and the modified CANDIDE three-dimensional face mesh to extract the expression parameters and head pose of the current frame.
4. The expression interaction method based on face tracking and analysis according to claim 1, characterized in that the eye state detection in step 4 proceeds as follows:
Step 41: train a local active appearance model of the eye region (this step is completed offline);
Step 42: train an open/closed eye state classifier using LBP histogram features and an SVM (this step is completed offline);
Step 43: on the basis of the global active appearance model localization, use the local active appearance model to precisely locate the eye shape;
Step 44: compute the LBP of the eye region image, and judge the open/closed state of the eyes with the SVM classifier.
5. The expression interaction method based on face tracking and analysis according to claim 1, characterized in that the expression driving in step 5 proceeds as follows:
Step 51: load the three-dimensional character model and its corresponding typical expression models when the program is initialized;
Step 52: use the expression parameters to compute the displacement of each vertex of each typical expression model, then superimpose the displacements on the neutral expressionless model to obtain an expression model of this character consistent with the performer's expression;
Step 53: apply rotations about the three axes to the expression model so that its head pose is consistent with the performer's.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN2010105670942A | 2010-11-22 | 2010-11-22 | Expression interaction method based on face tracking and analysis

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN2010105670942A | 2010-11-22 | 2010-11-22 | Expression interaction method based on face tracking and analysis

Publications (1)

Publication Number | Publication Date
CN102479388A (en) | 2012-05-30

Family

Family ID: 46092018

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN2010105670942A | CN102479388A (en) (Pending) | 2010-11-22 | 2010-11-22

Country Status (1)

Country | Link
CN (1) | CN102479388A (en)

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103198519A (en)* | 2013-03-15 | 2013-07-10 | 苏州跨界软件科技有限公司 | Virtual character photographic system and virtual character photographic method
CN103426194A (en)* | 2013-09-02 | 2013-12-04 | 厦门美图网科技有限公司 | Manufacturing method for full animation expression
CN103530900A (en)* | 2012-07-05 | 2014-01-22 | 北京三星通信技术研究有限公司 | Three-dimensional face model modeling method, face tracking method and equipment
WO2014153689A1 (en)* | 2013-03-29 | 2014-10-02 | Intel Corporation | Avatar animation, social networking and touch screen applications
WO2014205768A1 (en)* | 2013-06-28 | 2014-12-31 | 中国科学院自动化研究所 | Feature and model mutual matching face tracking method based on increment principal component analysis
WO2015090147A1 (en)* | 2013-12-20 | 2015-06-25 | 百度在线网络技术(北京)有限公司 | Virtual video call method and terminal
CN104753766A (en)* | 2015-03-02 | 2015-07-01 | 小米科技有限责任公司 | Expression sending method and device
CN105022982A (en)* | 2014-04-22 | 2015-11-04 | 北京邮电大学 | Hand motion identifying method and apparatus
CN105069745A (en)* | 2015-08-14 | 2015-11-18 | 济南中景电子科技有限公司 | Face-changing system based on common image sensor and enhanced augmented reality technology and method
CN105654537A (en)* | 2015-12-30 | 2016-06-08 | 中国科学院自动化研究所 | Expression cloning method and device capable of realizing real-time interaction with virtual character
CN105809612A (en)* | 2014-12-30 | 2016-07-27 | 广东世纪网通信设备股份有限公司 | A method for converting photos into expressions and an intelligent terminal
CN106127139A (en)* | 2016-06-21 | 2016-11-16 | 东北大学 | A kind of dynamic identifying method of MOOC course middle school student's facial expression
CN106415665A (en)* | 2014-07-25 | 2017-02-15 | 英特尔公司 | Avatar facial expression animations with head rotation
CN106447785A (en)* | 2016-09-30 | 2017-02-22 | 北京奇虎科技有限公司 | Method for driving virtual character and device thereof
CN106652000A (en)* | 2016-12-22 | 2017-05-10 | 新乡学院 | Amination data generation device, system and method
CN106778628A (en)* | 2016-12-21 | 2017-05-31 | 张维忠 | A kind of facial expression method for catching based on TOF depth cameras
CN106778200A (en)* | 2016-11-30 | 2017-05-31 | 广东小天才科技有限公司 | User terminal unlocking method and device and user terminal
CN107103646A (en)* | 2017-04-24 | 2017-08-29 | 厦门幻世网络科技有限公司 | A kind of countenance synthesis method and device
CN107707839A (en)* | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | Image processing method and device
CN107765856A (en)* | 2017-10-26 | 2018-03-06 | 北京光年无限科技有限公司 | Visual human's visual processing method and system based on multi-modal interaction
CN108090470A (en)* | 2018-01-10 | 2018-05-29 | 浙江大华技术股份有限公司 | A kind of face alignment method and device
CN108550170A (en)* | 2018-04-25 | 2018-09-18 | 深圳市商汤科技有限公司 | Virtual role driving method and device
WO2018205801A1 (en)* | 2017-05-12 | 2018-11-15 | 腾讯科技(深圳)有限公司 | Facial animation implementation method, computer device, and storage medium
CN109087379A (en)* | 2018-08-09 | 2018-12-25 | 北京华捷艾米科技有限公司 | The moving method of human face expression and the moving apparatus of human face expression
CN109151340A (en)* | 2018-08-24 | 2019-01-04 | 太平洋未来科技(深圳)有限公司 | Video processing method, device and electronic device
CN109325988A (en)* | 2017-07-31 | 2019-02-12 | 腾讯科技(深圳)有限公司 | A kind of facial expression synthetic method, device and electronic equipment
CN104732203B (en)* | 2015-03-05 | 2019-03-26 | 中国科学院软件研究所 | A kind of Emotion identification and tracking based on video information
CN109711335A (en)* | 2018-12-26 | 2019-05-03 | 北京百度网讯科技有限公司 | Method and device for driving target image through human body features
CN109840019A (en)* | 2019-02-22 | 2019-06-04 | 网易(杭州)网络有限公司 | Control method, device and the storage medium of virtual portrait
CN109903360A (en)* | 2017-12-08 | 2019-06-18 | 浙江舜宇智能光学技术有限公司 | 3 D human face animation control system and its control method
CN109954274A (en)* | 2017-12-23 | 2019-07-02 | 金德奎 | A kind of exchange method and method for gaming based on Face datection tracking
WO2019154013A1 (en)* | 2018-02-09 | 2019-08-15 | 腾讯科技(深圳)有限公司 | Expression animation data processing method, computer device and storage medium
CN110490093A (en)* | 2017-05-16 | 2019-11-22 | 苹果公司 | Emoticon is recorded and is sent
CN111260692A (en)* | 2020-01-20 | 2020-06-09 | 厦门美图之家科技有限公司 | Face tracking method, device, device and storage medium
CN111507304A (en)* | 2020-04-29 | 2020-08-07 | 广州市百果园信息技术有限公司 | Adaptive rigid prior model training method, face tracking method and related device
US10845968B2 (en) | 2017-05-16 | 2020-11-24 | Apple Inc. | Emoji recording and sending
US10846905B2 (en) | 2017-05-16 | 2020-11-24 | Apple Inc. | Emoji recording and sending
US10861248B2 (en) | 2018-05-07 | 2020-12-08 | Apple Inc. | Avatar creation user interface
CN112149599A (en)* | 2020-09-29 | 2020-12-29 | 网易(杭州)网络有限公司 | Expression tracking method and device, storage medium and electronic equipment
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement
CN113989925A (en)* | 2021-10-22 | 2022-01-28 | 支付宝(杭州)信息技术有限公司 | Face brushing interaction method and device
CN114359449A (en)* | 2022-01-13 | 2022-04-15 | 北京大橘大栗文化传媒有限公司 | Face digital asset manufacturing method
US11733769B2 (en) | 2020-06-08 | 2023-08-22 | Apple Inc. | Presenting avatars in three-dimensional environments
US12033296B2 (en) | 2018-05-07 | 2024-07-09 | Apple Inc. | Avatar creation user interface
US12079458B2 (en) | 2016-09-23 | 2024-09-03 | Apple Inc. | Image data for enhanced user interactions
US12218894B2 (en) | 2019-05-06 | 2025-02-04 | Apple Inc. | Avatar integration with a contacts user interface

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101354795A (en)* | 2008-08-28 | 2009-01-28 | 北京中星微电子有限公司 | Method and system for driving three-dimensional human face cartoon based on video
CN101393599A (en)* | 2007-09-19 | 2009-03-25 | 中国科学院自动化研究所 | A Game Character Control Method Based on Facial Expressions
CN101499128A (en)* | 2008-01-30 | 2009-08-05 | 中国科学院自动化研究所 | Three-dimensional human face action detecting and tracing method based on video stream

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101393599A (en)* | 2007-09-19 | 2009-03-25 | 中国科学院自动化研究所 | A Game Character Control Method Based on Facial Expressions
CN101499128A (en)* | 2008-01-30 | 2009-08-05 | 中国科学院自动化研究所 | Three-dimensional human face action detecting and tracing method based on video stream
CN101354795A (en)* | 2008-08-28 | 2009-01-28 | 北京中星微电子有限公司 | Method and system for driving three-dimensional human face cartoon based on video

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAE WON BYUN: "Online Expression Mapping for Performance-Driven Facial Animation", Entertainment Computing - ICEC 2007 *
杜志军 et al.: "利用主动外观模型合成动态人脸表情" (Synthesizing dynamic facial expressions using active appearance models), Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》) *
范小九 et al.: "一种改进的AAM人脸特征点快速定位方法" (An improved fast AAM-based facial feature point localization method), Journal of Electronics & Information Technology (《电子与信息学报》) *

Cited By (77)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103530900A (en)* | 2012-07-05 | 2014-01-22 | 北京三星通信技术研究有限公司 | Three-dimensional face model modeling method, face tracking method and equipment
CN103530900B (en)* | 2012-07-05 | 2019-03-19 | 北京三星通信技术研究有限公司 | Modeling method, face tracking method and the equipment of three-dimensional face model
CN103198519A (en)* | 2013-03-15 | 2013-07-10 | 苏州跨界软件科技有限公司 | Virtual character photographic system and virtual character photographic method
US9460541B2 (en) | 2013-03-29 | 2016-10-04 | Intel Corporation | Avatar animation, social networking and touch screen applications
WO2014153689A1 (en)* | 2013-03-29 | 2014-10-02 | Intel Corporation | Avatar animation, social networking and touch screen applications
WO2014205768A1 (en)* | 2013-06-28 | 2014-12-31 | 中国科学院自动化研究所 | Feature and model mutual matching face tracking method based on increment principal component analysis
CN103426194A (en)* | 2013-09-02 | 2013-12-04 | 厦门美图网科技有限公司 | Manufacturing method for full animation expression
WO2015090147A1 (en)* | 2013-12-20 | 2015-06-25 | 百度在线网络技术(北京)有限公司 | Virtual video call method and terminal
CN105022982B (en)* | 2014-04-22 | 2019-03-29 | 北京邮电大学 | Hand motion recognition method and apparatus
CN105022982A (en)* | 2014-04-22 | 2015-11-04 | 北京邮电大学 | Hand motion identifying method and apparatus
US10248854B2 (en) | 2014-04-22 | 2019-04-02 | Beijing University Of Posts And Telecommunications | Hand motion identification method and apparatus
CN106415665A (en)* | 2014-07-25 | 2017-02-15 | 英特尔公司 | Avatar facial expression animations with head rotation
CN106415665B (en)* | 2014-07-25 | 2020-05-19 | 英特尔公司 | Head portrait facial expression animation with head rotation
CN105809612A (en)* | 2014-12-30 | 2016-07-27 | 广东世纪网通信设备股份有限公司 | A method for converting photos into expressions and an intelligent terminal
CN104753766A (en)* | 2015-03-02 | 2015-07-01 | 小米科技有限责任公司 | Expression sending method and device
CN104732203B (en)* | 2015-03-05 | 2019-03-26 | 中国科学院软件研究所 | A kind of Emotion identification and tracking based on video information
CN105069745A (en)* | 2015-08-14 | 2015-11-18 | 济南中景电子科技有限公司 | Face-changing system based on common image sensor and enhanced augmented reality technology and method
CN105654537A (en)* | 2015-12-30 | 2016-06-08 | 中国科学院自动化研究所 | Expression cloning method and device capable of realizing real-time interaction with virtual character
CN105654537B (en)* | 2015-12-30 | 2018-09-21 | 中国科学院自动化研究所 | It is a kind of to realize and the expression cloning method and device of virtual role real-time interactive
CN106127139A (en)* | 2016-06-21 | 2016-11-16 | 东北大学 | A kind of dynamic identifying method of MOOC course middle school student's facial expression
CN106127139B (en)* | 2016-06-21 | 2019-06-25 | 东北大学 | A kind of dynamic identifying method of MOOC course middle school student's facial expression
US12079458B2 (en) | 2016-09-23 | 2024-09-03 | Apple Inc. | Image data for enhanced user interactions
CN106447785A (en)* | 2016-09-30 | 2017-02-22 | 北京奇虎科技有限公司 | Method for driving virtual character and device thereof
CN106778200B (en)* | 2016-11-30 | 2019-08-13 | 广东小天才科技有限公司 | A method and device for unlocking a user terminal, and a user terminal
CN106778200A (en)* | 2016-11-30 | 2017-05-31 | 广东小天才科技有限公司 | User terminal unlocking method and device and user terminal
CN106778628A (en)* | 2016-12-21 | 2017-05-31 | 张维忠 | A kind of facial expression method for catching based on TOF depth cameras
CN106652000A (en)* | 2016-12-22 | 2017-05-10 | 新乡学院 | Amination data generation device, system and method
CN107103646B (en)* | 2017-04-24 | 2020-10-23 | 厦门黑镜科技有限公司 | Expression synthesis method and device
CN107103646A (en)* | 2017-04-24 | 2017-08-29 | 厦门幻世网络科技有限公司 | A kind of countenance synthesis method and device
US11087519B2 (en) | 2017-05-12 | 2021-08-10 | Tencent Technology (Shenzhen) Company Limited | Facial animation implementation method, computer device, and storage medium
CN108876879A (en)* | 2017-05-12 | 2018-11-23 | 腾讯科技(深圳)有限公司 | Method, apparatus, computer equipment and the storage medium that human face animation is realized
CN108876879B (en)* | 2017-05-12 | 2022-06-14 | 腾讯科技(深圳)有限公司 | Method and device for realizing human face animation, computer equipment and storage medium
WO2018205801A1 (en)* | 2017-05-12 | 2018-11-15 | 腾讯科技(深圳)有限公司 | Facial animation implementation method, computer device, and storage medium
CN110490093B (en)* | 2017-05-16 | 2020-10-16 | 苹果公司 | Emoticon recording and transmission
US10997768B2 (en) | 2017-05-16 | 2021-05-04 | Apple Inc. | Emoji recording and sending
US11532112B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Emoji recording and sending
US12045923B2 (en) | 2017-05-16 | 2024-07-23 | Apple Inc. | Emoji recording and sending
US10846905B2 (en) | 2017-05-16 | 2020-11-24 | Apple Inc. | Emoji recording and sending
US10845968B2 (en) | 2017-05-16 | 2020-11-24 | Apple Inc. | Emoji recording and sending
CN110490093A (en)* | 2017-05-16 | 2019-11-22 | 苹果公司 | Emoticon is recorded and is sent
CN109325988A (en)* | 2017-07-31 | 2019-02-12 | 腾讯科技(深圳)有限公司 | A kind of facial expression synthetic method, device and electronic equipment
CN107707839A (en)* | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | Image processing method and device
CN107765856A (en)* | 2017-10-26 | 2018-03-06 | 北京光年无限科技有限公司 | Visual human's visual processing method and system based on multi-modal interaction
CN109903360A (en)* | 2017-12-08 | 2019-06-18 | 浙江舜宇智能光学技术有限公司 | 3 D human face animation control system and its control method
CN109954274A (en)* | 2017-12-23 | 2019-07-02 | 金德奎 | A kind of exchange method and method for gaming based on Face datection tracking
CN108090470A (en)* | 2018-01-10 | 2018-05-29 | 浙江大华技术股份有限公司 | A kind of face alignment method and device
US11741750B2 (en) | 2018-01-10 | 2023-08-29 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for face alignment
CN108090470B (en)* | 2018-01-10 | 2020-06-23 | 浙江大华技术股份有限公司 | A face alignment method and device
US11301668B2 (en) | 2018-01-10 | 2022-04-12 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for face alignment
US11270488B2 (en) | 2018-02-09 | 2022-03-08 | Tencent Technology (Shenzhen) Company Limited | Expression animation data processing method, computer device, and storage medium
CN110135226A (en)* | 2018-02-09 | 2019-08-16 | 腾讯科技(深圳)有限公司 | Expression animation data processing method, device, computer equipment and storage medium
WO2019154013A1 (en)* | 2018-02-09 | 2019-08-15 | 腾讯科技(深圳)有限公司 | Expression animation data processing method, computer device and storage medium
CN110135226B (en)* | 2018-02-09 | 2023-04-07 | 腾讯科技(深圳)有限公司 | Expression animation data processing method and device, computer equipment and storage medium
CN108550170B (en)* | 2018-04-25 | 2020-08-07 | 深圳市商汤科技有限公司 | Virtual character driving method and device
CN108550170A (en)* | 2018-04-25 | 2018-09-18 | 深圳市商汤科技有限公司 | Virtual role driving method and device
US10861248B2 (en) | 2018-05-07 | 2020-12-08 | Apple Inc. | Avatar creation user interface
US12340481B2 (en) | 2018-05-07 | 2025-06-24 | Apple Inc. | Avatar creation user interface
US12033296B2 (en) | 2018-05-07 | 2024-07-09 | Apple Inc. | Avatar creation user interface
US11682182B2 (en) | 2018-05-07 | 2023-06-20 | Apple Inc. | Avatar creation user interface
US11380077B2 (en) | 2018-05-07 | 2022-07-05 | Apple Inc. | Avatar creation user interface
CN109087379B (en)* | 2018-08-09 | 2020-01-17 | 北京华捷艾米科技有限公司 | Facial expression migration method and facial expression migration device
CN109087379A (en)* | 2018-08-09 | 2018-12-25 | 北京华捷艾米科技有限公司 | The moving method of human face expression and the moving apparatus of human face expression
CN109151340A (en)* | 2018-08-24 | 2019-01-04 | 太平洋未来科技(深圳)有限公司 | Video processing method, device and electronic device
CN109711335A (en)* | 2018-12-26 | 2019-05-03 | 北京百度网讯科技有限公司 | Method and device for driving target image through human body features
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement
CN109840019A (en)* | 2019-02-22 | 2019-06-04 | 网易(杭州)网络有限公司 | Control method, device and the storage medium of virtual portrait
CN109840019B (en)* | 2019-02-22 | 2023-01-10 | 网易(杭州)网络有限公司 | Virtual character control method, device and storage medium
US12218894B2 (en) | 2019-05-06 | 2025-02-04 | Apple Inc. | Avatar integration with a contacts user interface
CN111260692A (en)* | 2020-01-20 | 2020-06-09 | 厦门美图之家科技有限公司 | Face tracking method, device, device and storage medium
US12400339B2 (en) | 2020-04-29 | 2025-08-26 | Bigo Technology Pte. Ltd. | Method for training adaptive rigid prior model, and method for tracking faces, electronic device, and storage medium
CN111507304A (en)* | 2020-04-29 | 2020-08-07 | 广州市百果园信息技术有限公司 | Adaptive rigid prior model training method, face tracking method and related device
US11733769B2 (en) | 2020-06-08 | 2023-08-22 | Apple Inc. | Presenting avatars in three-dimensional environments
US12282594B2 (en) | 2020-06-08 | 2025-04-22 | Apple Inc. | Presenting avatars in three-dimensional environments
CN112149599A (en)* | 2020-09-29 | 2020-12-29 | 网易(杭州)网络有限公司 | Expression tracking method and device, storage medium and electronic equipment
CN112149599B (en)* | 2020-09-29 | 2024-03-08 | 网易(杭州)网络有限公司 | Expression tracking method and device, storage medium and electronic equipment
CN113989925A (en)* | 2021-10-22 | 2022-01-28 | 支付宝(杭州)信息技术有限公司 | Face brushing interaction method and device
CN114359449A (en)* | 2022-01-13 | 2022-04-15 | 北京大橘大栗文化传媒有限公司 | Face digital asset manufacturing method

Similar Documents

Publication | Title
CN102479388A (en) | Expression interaction method based on face tracking and analysis
Li et al. | Robust visual tracking based on convolutional features with illumination and occlusion handing
CN109472198B (en) | Gesture robust video smiling face recognition method
CN101393599B (en) | A Game Character Control Method Based on Facial Expressions
CN101499128B (en) | 3D Face Action Detection and Tracking Method Based on Video Stream
Xia et al. | A survey on human performance capture and animation
Micilotta et al. | Real-time upper body detection and 3D pose estimation in monoscopic images
Schmaltz et al. | Region-based pose tracking with occlusions using 3d models
CN102402691A (en) | Method for tracking human face posture and motion
CN101763636A (en) | Method for tracing position and pose of 3D human face in video sequence
CN102332095A (en) | Face motion tracking method, face motion tracking system and method for enhancing reality
CN107886558A (en) | A kind of human face expression cartoon driving method based on RealSense
Ong et al. | Viewpoint invariant exemplar-based 3D human tracking
CN103310194A (en) | Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
Thalhammer et al. | SyDPose: Object detection and pose estimation in cluttered real-world depth images trained using only synthetic data
CN105893984A (en) | Face projection method for facial makeup based on face features
Fossati et al. | Bridging the gap between detection and tracking for 3D monocular video-based motion capture
Irie et al. | Improvements to facial contour detection by hierarchical fitting and regression
Piater et al. | Video analysis for continuous sign language recognition
Zalewski et al. | Synthesis and recognition of facial expressions in virtual 3d views
CN106940792A (en) | The human face expression sequence truncation method of distinguished point based motion
Gao et al. | Learning and synthesizing MPEG-4 compatible 3-D face animation from video sequence
Fossati et al. | From canonical poses to 3D motion capture using a single camera
Altaf et al. | Presenting an effective algorithm for tracking of moving object based on support vector machine
Lefevre et al. | Structure and appearance features for robust 3d facial actions tracking

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C02 | Deemed withdrawal of patent application after publication (patent law 2001)
WD01 | Invention patent application deemed withdrawn after publication

Application publication date: 2012-05-30

