CN104077804B - Method for constructing a three-dimensional face model from multi-frame video images

Info

Publication number: CN104077804B (granted publication of application CN104077804A)
Application number: CN201410253326.5A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 刘威, 张丛喆, 汤勇, 谢佳亮
Current assignee: GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD
Priority and filing date: 2014-06-09
Publication dates: CN104077804A published 2014-10-01; CN104077804B granted 2017-03-01
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Prior art keywords: dimensional, video, dimensional face, face, model

Abstract

The invention discloses a method for constructing a three-dimensional face model from multi-frame video images, including: performing three-dimensional reconstruction on the two-dimensional monitored picture captured by a camera with a fixed mounting position and angle, thereby obtaining the three-dimensional space model and three-dimensional spatial information of the camera's monitored scene; extracting from the input video a multi-frame continuous video sequence containing the target's motion, shape, texture and colour information; performing facial feature localisation, three-dimensional spatial positioning, and synchronous tracking and recognition of facial features on the multi-frame continuous video sequence, thereby obtaining the three-dimensional facial feature points of the sequence; and superimposing the three-dimensional facial feature points of the sequence according to the three-dimensional space model of the monitored scene, thereby forming a three-dimensional face mesh and generating three-dimensional face model data. The invention is simple and convenient, offers good real-time performance and high accuracy, and can be widely applied in the field of video image processing.

Description

A method for constructing a three-dimensional face model from multi-frame video images
Technical field
The present invention relates to the field of video image processing, and in particular to a method for constructing a three-dimensional face model from multi-frame video images.
Background art
At present, identity authentication technologies based on biological features (such as fingerprints, palm prints and footprints) are widely used in the security field, and various service applications based on image recognition are gradually spreading to different industries and fields. Traditional image recognition methods are based on two-dimensional face recognition, including the Fisherface and Eigenface recognition methods. However, the recognition rate of two-dimensional face recognition is low, it is subject to error, and it cannot meet the urgent needs of service applications. Image recognition based on a three-dimensional face model, compared with two-dimensional face recognition, carries richer information and supports multi-angle comparison through spatial rotation; its recognition accuracy is higher, and it shows a trend of replacing two-dimensional face recognition.
The construction of the three-dimensional face model is the core and key of image recognition based on a three-dimensional face model. At present, there are mainly two methods of constructing a three-dimensional face model: one is to photograph a fixed face with three-dimensional cameras from multiple angles and then stitch the images into a three-dimensional model; the other is to build the three-dimensional model by scanning the surface profile. Although both approaches reconstruct a three-dimensional face model to a certain extent, their operation is complicated and inconvenient.
The construction of a three-dimensional face model includes processes such as feature extraction, standard model transformation, feature point positioning and texture mapping. Current feature extraction, standard model transformation and texture mapping are carried out mainly for static face images; it is difficult to reflect information such as face parameters with motion trajectories and attributes (for example, deformation caused by facial expression is hard to describe), similarity measurement or comparison cannot restore the real face to the fullest extent, real-time performance is low, and the data error is large.
In summary, a three-dimensional face model construction method that is convenient, real-time and accurate is urgently needed in the industry.
Summary of the invention
In order to solve the above technical problems, the object of the present invention is to provide a method for constructing a three-dimensional face model from multi-frame video images that is convenient, real-time, accurate and widely applicable.
The technical solution adopted by the present invention to solve its technical problems is a method for constructing a three-dimensional face model from multi-frame video images, comprising:
A. performing three-dimensional reconstruction on the two-dimensional monitored picture captured by a camera with a fixed mounting position and angle, thereby obtaining the three-dimensional space model and three-dimensional spatial information of the camera's monitored scene;
B. extracting from the input video a multi-frame continuous video sequence containing the target's motion, shape, texture and colour information;
C. performing facial feature localisation, three-dimensional spatial positioning, and synchronous tracking and recognition of facial features on the multi-frame continuous video sequence, thereby obtaining the three-dimensional facial feature points of the sequence;
D. superimposing the three-dimensional facial feature points of the multi-frame continuous video sequence according to the three-dimensional space model of the camera's monitored scene, thereby forming a three-dimensional face mesh and generating three-dimensional face model data (a minimal sketch of this overall pipeline is given below).
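The four steps above can be pictured as a simple processing pipeline. The sketch below is a minimal end-to-end skeleton in Python; every function name and body is a hypothetical placeholder for the operation described in the corresponding step, not an implementation of the patented method.

```python
# Skeleton of steps A-D; all functions are hypothetical placeholders.
import numpy as np

def reconstruct_scene(intrinsics: np.ndarray, monitored_frame: np.ndarray):
    """Step A: build the 3-D space model of the fixed camera's monitored scene."""
    raise NotImplementedError("homography + viewing-origin calibration goes here")

def extract_face_sequence(video_frames):
    """Step B: keep the consecutive frames containing the moving target."""
    raise NotImplementedError("motion/shape/texture/colour based frame selection")

def locate_3d_feature_points(face_frames, scene_model):
    """Step C: per-frame facial feature localisation, 3-D positioning and tracking."""
    raise NotImplementedError("returns one (N, 3) landmark array per frame")

def build_face_model(landmark_sets, scene_model):
    """Step D: superimpose per-frame landmark sets into a face mesh and model data."""
    raise NotImplementedError("key-frame superposition and mesh generation")

def pipeline(intrinsics, video_frames):
    scene_model = reconstruct_scene(intrinsics, video_frames[0])          # step A
    face_frames = extract_face_sequence(video_frames)                     # step B
    landmark_sets = locate_3d_feature_points(face_frames, scene_model)    # step C
    return build_face_model(landmark_sets, scene_model)                   # step D
```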
Further, step A comprises:
A1. establishing a homography solution according to the camera's intrinsic parameter matrix, the homography solution reflecting the homography relation between the actual ground plane and the ground plane in the image captured by the camera;
A2. calculating the camera's viewing origin from the known camera height and two given reference lines of known length perpendicular to the ground plane;
A3. reconstructing a three-dimensional model of the camera's view according to the homography solution, the camera's viewing origin and a given visualisation model, thereby obtaining the three-dimensional space model and three-dimensional spatial information of the camera's monitored scene.
Further, step C comprises:
C1. selecting a single video frame from the multi-frame continuous video sequence as the current video frame;
C2. performing facial feature localisation and facial feature extraction on the current video frame, thereby obtaining the facial feature points of the current video frame;
C3. performing three-dimensional spatial positioning on the facial feature points of the current video frame, and detecting the spatial information of the facial feature points contained in the current video frame, the motion trajectory of the facial features and the temporal information;
C4. performing synchronous tracking and automatic recognition of facial features on the current video frame according to the detection result, thereby determining the spatial coordinates of each facial feature point of the current video frame during motion;
C5. selecting the next single video frame from the multi-frame continuous video sequence as the current video frame and returning to step C2, so that the three-dimensional coordinate matrix of the facial features is generated from the continuous motion states of the multi-frame continuous video sequence at different moments.
Further, step D is specifically:
performing three-dimensional face key-frame superposition on the multiple groups of three-dimensional facial feature points of the multi-frame continuous video sequence and generating a three-dimensional face model data index list, thereby establishing a structured three-dimensional face model data list and storing the three-dimensional face model data index list.
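The "structured three-dimensional face model data list" can be illustrated with a small container type. The field names below are assumptions made for illustration; the patent does not prescribe a storage schema.

```python
# Minimal sketch of a structured index list of key-frame landmark data.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class FaceModelRecord:
    frame_index: int          # which key frame the landmarks came from
    landmarks_3d: np.ndarray  # (N, 3) feature-point coordinates in scene space
    timestamp: float          # acquisition time of the frame

@dataclass
class FaceModelIndex:
    records: list = field(default_factory=list)

    def add(self, record: FaceModelRecord) -> None:
        self.records.append(record)

    def stacked_landmarks(self) -> np.ndarray:
        """Superimpose all key-frame landmark sets into one (F, N, 3) array."""
        return np.stack([r.landmarks_3d for r in self.records])
```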
Further, step D comprises:
D1. mapping the facial feature points of the multi-frame continuous video sequence into the three-dimensional space model of the camera's monitored scene, thereby obtaining the spatial coordinates of the three-dimensional facial feature points of the sequence;
D2. generating a texture image with a three-dimensional image stitching algorithm according to the spatial coordinates of the three-dimensional facial feature points, and mapping the generated texture image, thereby obtaining realistic three-dimensional face model data.
Further, step D2 comprises:
D21. reconstructing a sparse set of face marker points from the multi-frame continuous video sequence according to the spatial coordinates of the three-dimensional facial feature points, and fitting the sparse set point by point with a thin-plate spline (TPS);
D22. applying a nonlinear transformation to the generic face model according to the result of the TPS fitting, thereby obtaining a matched three-dimensional face model;
D23. obtaining the facial texture feature information of the multi-frame continuous video sequence with the three-dimensional image stitching algorithm, and mapping the obtained facial texture feature information onto the matched three-dimensional face model, thereby obtaining realistic three-dimensional face model data.
Further, a step E is provided after step D, step E being specifically:
retaining the particular features of the generic face model using an SFM algorithm, and correcting, by comparison with the generic face model, the error between the generated three-dimensional face model data and the generic face model data; then building the final three-dimensional face model from the depth information of the points using a triangle subdivision method.
Further, the step in step E of building the final three-dimensional face model from the depth information of the points using the triangle subdivision method comprises:
E21. screening out the triangles that need subdivision from the three-dimensional face mesh according to a preset threshold, and marking the screened-out triangles;
E22. combining the marked triangles into n mesh blocks according to their adjacency, separating out these n mesh blocks and denoting them B = {b1, b2, b3, …, bn}, while denoting the unmarked part of the three-dimensional face mesh R;
E23. adjusting the weights of the four vertices of each mesh block Bi to 0, 1/2, 1/2 and 0 respectively, thereby performing mesh interpolation subdivision on Bi;
E24. performing interpolation subdivision on the boundary of the unsubdivided part R, so that the points inserted on the boundary lie at edge midpoints;
E25. combining B and R, and judging whether the side lengths of all triangles of the combined mesh model are smaller than the preset threshold; if so, taking the combined mesh model as the final three-dimensional face model, otherwise returning to step E21.
The beneficial effects of the invention are as follows. A three-dimensional face model is established from image information captured by a single camera with a fixed mounting position and angle, which is simple and convenient to operate. By performing facial feature localisation, three-dimensional spatial positioning, and synchronous tracking and recognition of facial features on the continuous video sequence, key frames containing the face are extracted, the displacement of the face is dynamically tracked, the three-dimensional relationship is established and the spatial relationship between the facial feature points is determined, which solves the problem that the prior art cannot synchronously track and recognise information such as the motion trajectories and attributes of facial feature parameters in dynamic face images; real-time performance is good and accuracy is high. Furthermore, the particular features of the generic face model are retained with the SFM algorithm and the face image is smoothed with the triangle subdivision method, further improving the accuracy and realism of the face model.
Brief description of the drawings
The invention will be further described below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flow chart of the steps of the method for constructing a three-dimensional face model from multi-frame video images according to the present invention;
Fig. 2 is a flow chart of step A of the present invention;
Fig. 3 is a flow chart of step C of the present invention;
Fig. 4 is a flow chart of step D of the present invention;
Fig. 5 is a flow chart of step D2 of the present invention;
Fig. 6 is a flow chart of the triangle subdivision method of step E of the present invention;
Fig. 7 is a schematic diagram of reconstructing the three-dimensional space model from the camera intrinsic parameters in embodiment one;
Fig. 8 is a schematic diagram of the flow of building a three-dimensional portrait from multiple frames in embodiment one.
Detailed description of embodiments
With reference to Fig. 1, a method for constructing a three-dimensional face model from multi-frame video images comprises:
A. performing three-dimensional reconstruction on the two-dimensional monitored picture captured by a camera with a fixed mounting position and angle, thereby obtaining the three-dimensional space model and three-dimensional spatial information of the camera's monitored scene;
B. extracting from the input video a multi-frame continuous video sequence containing the target's motion, shape, texture and colour information;
C. performing facial feature localisation, three-dimensional spatial positioning, and synchronous tracking and recognition of facial features on the multi-frame continuous video sequence, thereby obtaining the three-dimensional facial feature points of the sequence;
D. superimposing the three-dimensional facial feature points of the multi-frame continuous video sequence according to the three-dimensional space model of the camera's monitored scene, thereby forming a three-dimensional face mesh and generating three-dimensional face model data.
With reference to Fig. 2, as a further preferred embodiment, step A comprises:
A1. establishing a homography solution according to the camera's intrinsic parameter matrix, the homography solution reflecting the homography relation between the actual ground plane and the ground plane in the image captured by the camera;
A2. calculating the camera's viewing origin from the known camera height and two given reference lines of known length perpendicular to the ground plane;
A3. reconstructing a three-dimensional model of the camera's view according to the homography solution, the camera's viewing origin and a given visualisation model, thereby obtaining the three-dimensional space model and three-dimensional spatial information of the camera's monitored scene.
With reference to Fig. 3, as a further preferred embodiment, step C comprises:
C1. selecting a single video frame from the multi-frame continuous video sequence as the current video frame;
C2. performing facial feature localisation and facial feature extraction on the current video frame, thereby obtaining the facial feature points of the current video frame;
C3. performing three-dimensional spatial positioning on the facial feature points of the current video frame, and detecting the spatial information of the facial feature points contained in the current video frame, the motion trajectory of the facial features and the temporal information;
C4. performing synchronous tracking and automatic recognition of facial features on the current video frame according to the detection result, thereby determining the spatial coordinates of each facial feature point of the current video frame during motion;
C5. selecting the next single video frame from the multi-frame continuous video sequence as the current video frame and returning to step C2, so that the three-dimensional coordinate matrix of the facial features is generated from the continuous motion states of the multi-frame continuous video sequence at different moments.
As a further preferred embodiment, step D is specifically:
performing three-dimensional face key-frame superposition on the multiple groups of three-dimensional facial feature points of the multi-frame continuous video sequence and generating a three-dimensional face model data index list, thereby establishing a structured three-dimensional face model data list and storing the three-dimensional face model data index list.
With reference to Fig. 4, as a further preferred embodiment, step D comprises:
D1. mapping the facial feature points of the multi-frame continuous video sequence into the three-dimensional space model of the camera's monitored scene, thereby obtaining the spatial coordinates of the three-dimensional facial feature points of the sequence;
D2. generating a texture image with a three-dimensional image stitching algorithm according to the spatial coordinates of the three-dimensional facial feature points, and mapping the generated texture image, thereby obtaining realistic three-dimensional face model data.
With reference to Fig. 5, as a further preferred embodiment, step D2 comprises:
D21. reconstructing a sparse set of face marker points from the multi-frame continuous video sequence according to the spatial coordinates of the three-dimensional facial feature points, and fitting the sparse set point by point with a thin-plate spline (TPS);
D22. applying a nonlinear transformation to the generic face model according to the result of the TPS fitting, thereby obtaining a matched three-dimensional face model;
D23. obtaining the facial texture feature information of the multi-frame continuous video sequence with the three-dimensional image stitching algorithm, and mapping the obtained facial texture feature information onto the matched three-dimensional face model, thereby obtaining realistic three-dimensional face model data.
As a further preferred embodiment, a step E is provided after step D, step E being specifically:
retaining the particular features of the generic face model using an SFM algorithm, and correcting, by comparison with the generic face model, the error between the generated three-dimensional face model data and the generic face model data; then building the final three-dimensional face model from the depth information of the points using a triangle subdivision method.
The generic face model refers to a standard face model known in the industry.
With reference to Fig. 6, as a further preferred embodiment, the step in step E of building the final three-dimensional face model from the depth information of the points using the triangle subdivision method comprises:
E21. screening out the triangles that need subdivision from the three-dimensional face mesh according to a preset threshold, and marking the screened-out triangles;
E22. combining the marked triangles into n mesh blocks according to their adjacency, separating out these n mesh blocks and denoting them B = {b1, b2, b3, …, bn}, while denoting the unmarked part of the three-dimensional face mesh R;
E23. adjusting the weights of the four vertices of each mesh block Bi to 0, 1/2, 1/2 and 0 respectively, thereby performing mesh interpolation subdivision on Bi;
E24. performing interpolation subdivision on the boundary of the unsubdivided part R, so that the points inserted on the boundary lie at edge midpoints;
E25. combining B and R, and judging whether the side lengths of all triangles of the combined mesh model are smaller than the preset threshold; if so, taking the combined mesh model as the final three-dimensional face model, otherwise returning to step E21.
The present invention is described in further detail below with reference to a specific embodiment.
Embodiment one
This embodiment explains the present invention in detail through the process by which a video acquisition high-speed download device builds a face model.
The process by which the video acquisition high-speed download device builds the face model is as follows:
(One) Three-dimensional scene reconstruction
In the video acquisition high-speed download device the intrinsic parameters of the camera are known, and three-dimensional scene reconstruction is performed on the picture captured by the camera. The reconstruction steps are: first, a homography solution (homography) H is established between the actual ground plane and the ground plane in the image; then the camera is calibrated using the actual mounting height h of the camera above the ground plane and preset lines of known length perpendicular to the ground plane. The specific implementation is as follows:
(1) According to the pinhole model of the camera, the matrix M is defined; it follows that the homography relation between the actual ground plane and the ground plane in the camera image can be represented by formula (1):
… (1)
where A is the intrinsic parameter matrix of the camera, r1, r2 and r3 are the three column vectors of the rotation matrix R, and t is the translation parameter. If there are more than four pairs of corresponding points between the actual ground plane and the ground plane in the image, H can be further refined through formula (1).
(2) Define the optical centre of the camera, i.e. the camera's viewing origin, as (xc, yc, h). Letting …, the spatial relationships of the camera then give ….
(3) Given a reference line l* perpendicular to the actual ground plane and its projection l onto the ground plane in the camera image, the spatial relationships of the camera show that the line HTl lies on the actual ground plane and passes through the point (xc, yc, 0).
Therefore, according to steps (1)-(3), given the camera height h and two reference lines of known length perpendicular to the actual ground plane, xc, yc and K can be calculated.
(4) The three-dimensional model of the camera view is then reconstructed according to the preset visualisation model.
As shown in Fig. 7, (xc, yc, h) is set as the centre of the user coordinate system, and the visualisation model is projected onto the actual ground plane. According to the spatial geometric projection relation, the projection onto the actual ground plane of any point (xw, yw, zw) in the user coordinate system can be calculated by formula (2):
… (2)
(5) Finally, using the homography solution H, the projection of the visualisation model on the actual ground plane can be mapped onto the ground plane in the image; the mapping relation thus established completes the reconstruction of the three-dimensional monitored scene. After the reconstruction is finished, depth information calibration can be performed for any point on the established three-dimensional portrait.
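The two geometric ingredients of this reconstruction can be sketched as follows. Formulas (1) and (2) are not reproduced in the source text, so the sketch relies on standard pinhole-camera assumptions: OpenCV's homography estimation from at least four ground/image point correspondences, and the ordinary similar-triangles projection of a scene point onto the ground plane from a camera centre at (xc, yc, h). The coordinate values are made up for illustration.

```python
# Ground-plane homography estimation and ground-plane projection (pinhole assumptions).
import numpy as np
import cv2

# (i) four (or more) correspondences between the actual ground plane (metres)
# and the ground plane as seen in the image (pixels)
ground_pts = np.array([[0, 0], [5, 0], [5, 5], [0, 5]], dtype=np.float32)
image_pts = np.array([[120, 400], [520, 410], [470, 180], [150, 170]], dtype=np.float32)
H, _ = cv2.findHomography(ground_pts, image_pts)

# map an arbitrary ground-plane point into the image with the homography
ground_pt = np.array([[[2.5, 2.5]]], dtype=np.float32)   # shape (1, 1, 2)
image_pt = cv2.perspectiveTransform(ground_pt, H)

# (ii) projection of a scene point (xw, yw, zw) onto the ground plane z = 0,
# as seen from the camera centre (xc, yc, h); requires zw < h
def project_to_ground(point_w, camera_centre):
    xw, yw, zw = point_w
    xc, yc, h = camera_centre
    s = h / (h - zw)                  # similar-triangles scale factor
    return np.array([xc + s * (xw - xc), yc + s * (yw - yc), 0.0])

print(image_pt.ravel(), project_to_ground((1.0, 2.0, 1.7), (0.0, 0.0, 3.0)))
```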
(Two) Obtaining the spatial data of the three-dimensional facial feature points
After the three-dimensional scene reconstruction is completed, the pictures from this camera are processed. Facial feature extraction is performed on one frame f1 using the facial feature localisation method, and the collected feature point sequence is substituted into the reconstructed three-dimensional space model to obtain the three-dimensional spatial data [xf1, yf1, zf1] of each feature point. The next frame f2 of the video is then read and the three-dimensional spatial data [xf2, yf2, zf2] of the facial features of the second frame is obtained by the same method, and so on until the three-dimensional facial feature spatial data [xfn, yfn, zfn] of frame fn is obtained.
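A per-frame loop of this kind might look like the following sketch. The Haar face detector and the box-corner "feature points" are stand-ins (the patent does not name a specific detector), and lift_to_scene is a hypothetical callback that maps 2-D image points into the reconstructed scene using the model from step (One).

```python
# Sketch of the per-frame loop: read frames f1..fn, extract 2-D face points,
# and lift each set into the reconstructed scene to get [x_fi, y_fi, z_fi].
import cv2
import numpy as np

def landmark_stream(video_path, lift_to_scene):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    per_frame_points = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            # use the box corners as crude 2-D "feature points" for illustration
            pts2d = np.array([[x, y], [x + w, y], [x + w, y + h], [x, y + h]], float)
            per_frame_points.append(lift_to_scene(pts2d))   # -> (N, 3) scene coordinates
    cap.release()
    return per_frame_points
```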
(Three) Three-dimensional stitching and mapping
After the three-dimensional facial feature spatial data has been obtained, the camera images from multiple angles still need to be modelled; this can be converted into a three-dimensional image stitching problem. The specific practice is: a sparse set of face marker points is reconstructed from the video; these sparse sets are fitted one by one with a thin-plate spline TPS (Thin Plate Spline); on the basis of the TPS fitting, a nonlinear transformation is applied to the generic face model to obtain a matched three-dimensional face model; finally, the facial texture information from the video is mapped onto this matched three-dimensional face model, thereby obtaining a realistic three-dimensional face model.
For example, a pair of stitchable three-dimensional images I1 and I2 lies within a given group of N images. First I1 and I2 are stitched to obtain a new three-dimensional image I11; then I11 is stitched with I3 to obtain image I12; then I12 is stitched with I4 to obtain image I13. The process is repeated until no further stitching is possible, thereby obtaining a complete three-dimensional face model.
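The TPS-based nonlinear transformation can be sketched with SciPy's radial basis function interpolator, whose thin-plate-spline kernel serves here as a stand-in for the TPS fit described above; the landmark and vertex arrays are placeholders supplied by the caller.

```python
# Fit a thin-plate-spline mapping from the generic model's marker points to the
# sparse marker set reconstructed from the video, then warp every generic vertex.
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_generic_model(generic_vertices, generic_landmarks, video_landmarks):
    """generic_vertices: (V, 3); generic_landmarks, video_landmarks: (P, 3)."""
    tps = RBFInterpolator(generic_landmarks, video_landmarks,
                          kernel="thin_plate_spline")
    return tps(generic_vertices)   # (V, 3) vertices of the matched model
```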
(Four) SFM algorithm
To ensure the precision of the model, the present invention also uses the SFM (Structure From Motion) algorithm to retain the particular features of the generic face model, and corrects the error between the two faces by comparison with the generic face model. The specific steps are:
First, the tracking data [xf1, yf1, zf1] obtained from f1 is taken as the baseline. The motion and structural changes of the facial feature points are then estimated. Next, the motion estimates are refined, and finally the estimates are compared with the face coordinate values of the next frame f2 to judge whether they fall within the interval computed by the estimation; if not, the frame is discarded and extraction continues with the next frame's data. The face result produced by this loop thus preserves its basic form.
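A minimal form of this estimate-and-verify loop is sketched below. The constant-velocity prediction and the fixed tolerance are assumptions used for illustration; the patent does not specify the motion model or the width of the acceptance interval.

```python
# Predict where each feature point should be in the next frame and keep the
# frame only if the measured coordinates fall inside the predicted interval.
import numpy as np

def accept_next_frame(prev_pts, velocity, next_pts, tolerance=0.05):
    """prev_pts, next_pts: (N, 3) landmark coordinates; velocity: (N, 3) motion per frame."""
    predicted = prev_pts + velocity                    # refined motion estimate
    deviation = np.linalg.norm(next_pts - predicted, axis=1)
    return bool(np.all(deviation < tolerance))         # outside the interval -> discard
```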
(Five) Triangle subdivision method
The three-dimensional portrait is built from the depth information of the points using the triangle subdivision method. The specific flow is as follows:
Step 1, screening the triangles that need subdivision: if the maximum side length of the i-th triangle is k, the triangle is marked when k > m, where m is a preset threshold, taken as 0.15 in the present invention. All triangles in the mesh model are traversed and the triangles that need division are marked.
Step 2, combining the marked triangles: the marked triangles are combined into n blocks according to their adjacency; these n blocks are separated out and denoted B = {b1, b2, b3, …, bn}, while the part of the three-dimensional face mesh that is not marked is denoted R.
Step 3, subdividing the separated mesh blocks: mesh subdivision is performed on each Bi, with the weights of its four vertices adjusted to 0, 1/2, 1/2 and 0 respectively, so that the points inserted on the boundary are always edge midpoints and the boundary shape remains consistent.
Step 4, adjusting the boundary of R: new points are inserted at the midpoints of the boundary of Bi, and the same adjustment is made on the boundary of the unsubdivided R, so that the synthesised meshes coincide along the stitching boundary.
Step 5, synthesising R and B.
After steps 1-5, the subdivision of the R mesh is complete, and R and B share the same division points on the boundary; combining the two finally achieves one subdivision of the whole original mesh. The above steps are repeated until the side lengths of all triangles are smaller than the threshold, finally reaching the required model accuracy.
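A simplified version of this adaptive subdivision loop is sketched below: triangles whose longest edge exceeds the threshold m are marked and split into four by edge midpoints, and the loop repeats until every edge is short enough. Midpoints are cached per edge so adjacent marked triangles share the inserted vertices; the boundary adjustment of the unmarked part R (step 4 above) is omitted, so this is an illustration rather than the patent's exact procedure.

```python
# Adaptive triangle subdivision driven by a maximum-edge-length threshold.
import numpy as np

def adaptive_subdivide(vertices, faces, m=0.15, max_rounds=10):
    """vertices: list of (3,) points; faces: list of (i, j, k) vertex indices."""
    vertices = [np.asarray(v, dtype=float) for v in vertices]
    faces = [tuple(f) for f in faces]

    def longest_edge(face):
        a, b, c = (vertices[i] for i in face)
        return max(np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a))

    for _ in range(max_rounds):
        marked = {f for f in faces if longest_edge(f) > m}
        if not marked:
            break
        midpoint_cache = {}

        def midpoint(i, j):
            key = (min(i, j), max(i, j))
            if key not in midpoint_cache:
                vertices.append((vertices[i] + vertices[j]) / 2.0)
                midpoint_cache[key] = len(vertices) - 1
            return midpoint_cache[key]

        new_faces = []
        for f in faces:
            if f not in marked:
                new_faces.append(f)
                continue
            a, b, c = f
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        faces = new_faces
    return np.array(vertices), np.array(faces)
```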
Fig. 8 is a schematic diagram of an embodiment in which a three-dimensional portrait model is built by the above method.
Compared with the prior art, the present invention establishes a three-dimensional face model from image information captured by a single camera with a fixed mounting position and angle, which is simple and convenient to operate. By performing facial feature localisation, three-dimensional spatial positioning, and synchronous tracking and recognition of facial features on the continuous video sequence, key frames containing the face are extracted, the displacement of the face is dynamically tracked, the three-dimensional relationship is established and the spatial relationship between the facial feature points is determined, which solves the problem that the prior art cannot synchronously track and recognise information such as the motion trajectories and attributes of facial feature parameters in dynamic face images; real-time performance is good and accuracy is high. The particular features of the generic face model are retained with the SFM algorithm and the face image is smoothed with the triangle subdivision method, further improving the accuracy and realism of the face model.
The above is an explanation of the preferred implementation of the present invention, but the invention is not limited to the described embodiments. Those of ordinary skill in the art can also make various equivalent variations or replacements without departing from the spirit of the present invention, and these equivalent variations or replacements are all included within the scope defined by the claims of the present application.

Claims (7)


Publications (2)

CN104077804A: published 2014-10-01
CN104077804B: granted 2017-03-01





Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 2017-03-01)

