Method and system for three-dimensional animation and live streaming based on real-time multi-person motion capture

Technical field

The present invention relates to the field of animation. In particular, the present invention relates to a method and system for three-dimensional animation and live streaming based on real-time multi-person motion capture.
Background art
The year 2017 was a breakout year for virtual idols. When virtual idols are mentioned, the first name most people think of is Hatsune Miku. Indeed, Hatsune Miku, created a decade earlier, remains one of the most successful virtual idols worldwide. In recent years, with the rise of China's "2D (ACG) economy", more and more Chinese companies have set their sights on the virtual idol market and have launched virtual idols of their own.

In addition, with the development of the electronic entertainment industry and network transmission technology, live video streaming has emerged as an entirely new form of online entertainment that is increasingly favored by young audiences. Because live streaming depends on real-time interaction with viewers, most current live streams rely on real performers communicating with the audience directly and face to face. To overcome the space and time constraints imposed on real performers during a live stream, a small number of well-resourced online content providers present virtual characters as three-dimensional animation that interacts with viewers in real time, but such streams remain comparatively rare. Traditional virtual idol streams are almost always pre-recorded; only a very few adopt real-time motion-capture streaming, and none can achieve three-dimensional animated live streaming with multiple performers.

However, the technical threshold for live streaming with virtual characters presented as three-dimensional animation is very high, because producing three-dimensional animation is still quite labor-intensive. For a typical three-dimensional animation, in addition to creating the relevant scenes and designing the character models, nodes must be set up for the movements of each character model. To achieve real-time interaction between a virtual presenter and the audience, the movements performed by a real performer must be captured and mapped onto the corresponding nodes so that the character model performs the corresponding movements in real time.

For live streaming, however, existing approaches that capture a performer's movements in real time, generate the corresponding movements of a character model and fuse them into the three-dimensional animation can only capture the movements of a single performer. This greatly limits the content and format of three-dimensional animated live streams.
Summary of the invention
The present invention provides a method and system for three-dimensional animation and live streaming based on real-time multi-person motion capture. It improves the way performers' movements are acquired and achieves the technical effect of capturing the movements of several performers in real time and generating the corresponding three-dimensional animation.

A first aspect of the technical solution of the present invention is a method for three-dimensional animation and live streaming based on real-time multi-person motion capture. The method includes: A. creating a three-dimensional character model for each role in the three-dimensional animation, adding texture files, and setting limb-motion parameters; B. building and binding a skeleton model for each three-dimensional model, preparing animations for the preset limb motions, and importing them into a graphics engine; C. extracting the skeleton data of each real motion-capture performer and then configuring the corresponding three-dimensional character model; D. acquiring the motion information of the motion-capture performers in real time through motion-capture equipment in a motion-capture studio, transmitting the data from the motion-capture equipment to the graphics engine, then rendering and driving the corresponding character models in real time, generating a real-time video, and streaming it live.
Further, step A of the above method includes: producing the three-dimensional character models based on the three-dimensional animation, configuring and producing the normal smoothing groups of the three-dimensional characters, and adjusting the UV coordinates of the three-dimensional character models; producing textures according to the models' UV coordinates, producing the corresponding materials according to the textures, and finally checking the character models and archiving the files in a database associated with the graphics engine.

Further, step B of the above method includes: producing animation skeletons based on the three-dimensional character models in the three-dimensional animation and binding the skeletons to the respective models; producing the corresponding animations, adjusting the weights and controllers of the models, and then importing them into the database associated with the graphics engine and archiving them.

Further, step C of the above method includes: several motion-capture performers enter the motion-capture studio, put on marker-based motion-capture suits and wear helmet-mounted facial-expression capture systems, and each performer is assigned to a corresponding three-dimensional character model; the motion-capture software distinguishes the performers by their skeleton size ratios, each performer is bound to the corresponding three-dimensional animation model, and debugging is then carried out to complete the preparation.

Further, step D of the above method includes: during the multi-person motion capture, infrared motion-capture cameras record the performers' movements in real time and transmit them to the motion-capture software; the processed motion data is transmitted to the graphics engine workstation, which completes the real-time rendering; the rendered picture is captured and transmitted to a live streaming server, which pushes it to a live streaming platform for real-time broadcasting.

Further, the above method may further include: capturing the limb motions and/or facial expressions of the performers, converting them into limb-motion data, facial-motion data and character audio-mixing data associated with the character settings of the roles, then associating the data with the corresponding three-dimensional character models in the graphics engine, and configuring the limb motions and/or facial expressions of the performers to be synchronized in real time with the limb motions and/or facial expressions of the animated three-dimensional character models.

Further, the above method may further include: capturing and converting the facial-motion data of a performer in real time from the performer's facial skeleton, generating facial-expression control instructions from the captured facial-motion data, and generating the facial expression shapes of the corresponding character model through the above graphics engine; facial-expression animation transitions are generated by interpolating between the facial expression shapes of the corresponding facial positions of the character model.
A second aspect of the technical solution of the present invention is a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When executing the program, the processor performs the following steps: A. creating a three-dimensional character model for each role in the three-dimensional animation, adding texture files, and setting limb-motion parameters; B. building and binding a skeleton model for each three-dimensional model, preparing animations for the preset limb motions, and importing them into a graphics engine; C. extracting the skeleton data of each real motion-capture performer and then configuring the corresponding three-dimensional character model; D. acquiring the motion information of the motion-capture performers in real time through motion-capture equipment in a motion-capture studio, transmitting the data from the motion-capture equipment to the graphics engine, then rendering and driving the corresponding character models in real time, generating a real-time video, and streaming it live.

Further, when the processor executes step A, the step includes: producing the three-dimensional character models based on the three-dimensional animation, configuring and producing the normal smoothing groups of the three-dimensional characters, and adjusting the UV coordinates of the three-dimensional character models; producing textures according to the models' UV coordinates, producing the corresponding materials according to the textures, and finally checking the character models and archiving the files in the database associated with the graphics engine.

Further, when the processor executes step B, the step includes: producing animation skeletons based on the three-dimensional character models in the three-dimensional animation and binding the skeletons to the respective models; producing the corresponding animations, adjusting the weights and controllers of the models, and then importing them into the database associated with the graphics engine and archiving them.

Further, when the processor executes step C, the step includes: several motion-capture performers enter the motion-capture studio, put on marker-based motion-capture suits and wear helmet-mounted facial-expression capture systems, and each performer is assigned to a corresponding three-dimensional character model; the motion-capture software distinguishes the performers by their skeleton size ratios, each performer is bound to the corresponding three-dimensional animation model, and debugging is then carried out to complete the preparation.

Further, when the processor executes step D, the step includes: during the multi-person motion capture, infrared motion-capture cameras record the performers' movements in real time and transmit them to the motion-capture software; the processed motion data is transmitted to the graphics engine workstation, which completes the real-time rendering; the rendered picture is captured and transmitted to the live streaming server, which pushes it to the live streaming platform for real-time broadcasting.

Further, the processor may also perform the following steps: capturing the limb motions and/or facial expressions of the performers, converting them into limb-motion data, facial-motion data and character audio-mixing data associated with the character settings of the roles, then associating the data with the corresponding three-dimensional character models in the graphics engine, and configuring the limb motions and/or facial expressions of the performers to be synchronized in real time with the limb motions and/or facial expressions of the animated three-dimensional character models.

Further, the processor may also perform the following steps: capturing and converting the facial-motion data of a performer in real time from the performer's facial skeleton, generating facial-expression control instructions from the captured facial-motion data, and generating the facial expression shapes of the corresponding character model through the above graphics engine; facial-expression animation transitions are generated by interpolating between the facial expression shapes of the corresponding facial positions of the character model.
A third aspect of the technical solution of the present invention is a system for three-dimensional animation and live streaming based on real-time multi-person motion capture, including any one of the aforementioned computer devices; a graphics engine connected to the computer device; motion-capture suits worn by the performers; facial-expression capture devices worn on the performers' heads; and camera equipment and a lighting system for filming the performers.
A fourth aspect of the technical solution of the present invention is a computer-readable storage medium storing a computer program. When executed by a processor, the computer program performs the following steps: A. creating a three-dimensional character model for each role in the three-dimensional animation, adding texture files, and setting limb-motion parameters; B. building and binding a skeleton model for each three-dimensional model, preparing animations for the preset limb motions, and importing them into a graphics engine; C. extracting the skeleton data of each real motion-capture performer and then configuring the corresponding three-dimensional character model; D. acquiring the motion information of the motion-capture performers in real time through motion-capture equipment in a motion-capture studio, transmitting the data from the motion-capture equipment to the graphics engine, then rendering and driving the corresponding character models in real time, generating a real-time video, and streaming it live.

Further, when the computer program is executed by the processor to perform step A, the step includes: producing the three-dimensional character models based on the three-dimensional animation, configuring and producing the normal smoothing groups of the three-dimensional characters, and adjusting the UV coordinates of the three-dimensional character models; producing textures according to the models' UV coordinates, producing the corresponding materials according to the textures, and finally checking the character models and archiving the files in the database associated with the graphics engine.

Further, when the computer program is executed by the processor to perform step B, the step includes: producing animation skeletons based on the three-dimensional character models in the three-dimensional animation and binding the skeletons to the respective models; producing the corresponding animations, adjusting the weights and controllers of the models, and then importing them into the database associated with the graphics engine and archiving them.

Further, when the computer program is executed by the processor to perform step C, the step includes: several motion-capture performers enter the motion-capture studio, put on marker-based motion-capture suits and wear helmet-mounted facial-expression capture systems, and each performer is assigned to a corresponding three-dimensional character model; the motion-capture software distinguishes the performers by their skeleton size ratios, each performer is bound to the corresponding three-dimensional animation model, and debugging is then carried out to complete the preparation.

Further, when the computer program is executed by the processor to perform step D, the step includes: during the multi-person motion capture, infrared motion-capture cameras record the performers' movements in real time and transmit them to the motion-capture software; the processed motion data is transmitted to the graphics engine workstation, which completes the real-time rendering; the rendered picture is captured and transmitted to the live streaming server, which pushes it to the live streaming platform for real-time broadcasting.

Further, the computer program may also be executed by the processor to perform the following steps: capturing the limb motions and/or facial expressions of the performers, converting them into limb-motion data, facial-motion data and character audio-mixing data associated with the character settings of the roles, then associating the data with the corresponding three-dimensional character models in the graphics engine, and configuring the limb motions and/or facial expressions of the performers to be synchronized in real time with the limb motions and/or facial expressions of the animated three-dimensional character models.

Further, the computer program may also be executed by the processor to perform the following steps: capturing and converting the facial-motion data of a performer in real time from the performer's facial skeleton, generating facial-expression control instructions from the captured facial-motion data, and generating the facial expression shapes of the corresponding character model through the above graphics engine; facial-expression animation transitions are generated by interpolating between the facial expression shapes of the corresponding facial positions of the character model.
The beneficial effects of the present invention are as follows: by creating multiple three-dimensional character models in advance and preprocessing each of them, the motion information of several motion-capture performers can be captured simultaneously in real time, thereby reducing the cost of generating and live streaming the corresponding three-dimensional animation.
Brief description of the drawings
Fig. 1 shows an overall flowchart of the method according to the present invention;

Fig. 2 shows a flowchart of the sub-steps of a first embodiment of the present invention;

Fig. 3 shows a flowchart of the sub-steps of a second embodiment of the present invention;

Fig. 4 shows a flowchart of the sub-steps of a third embodiment of the present invention;

Fig. 5 shows a flowchart of the sub-steps of a fourth embodiment of the present invention;

Fig. 6 shows a schematic diagram of the data interaction between the motion information of the motion-capture performers and the three-dimensional character models;

Fig. 7 shows a schematic diagram of a usage scenario of the system according to the present invention.
Specific embodiments
The concept, specific structure and technical effects of the present invention are described clearly and completely below with reference to the embodiments and the accompanying drawings, so that the purposes, solutions and effects of the present invention can be fully understood.

It should be noted that, unless otherwise specified, when a feature is said to be "fixed" or "connected" to another feature, it may be directly fixed or connected to the other feature, or indirectly fixed or connected to the other feature. In addition, descriptions such as upper, lower, left and right used in this disclosure refer only to the relative positional relationships of the components of the disclosure in the accompanying drawings. The singular forms "a", "the" and "said" used in this disclosure are also intended to include the plural forms, unless the context clearly indicates otherwise. Furthermore, unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the art. The terms used in the description are intended only to describe specific embodiments and are not intended to limit the present invention. The term "and/or" used herein includes any combination of one or more of the associated listed items.

It will be understood that although the terms first, second, third, and so on may be used in this disclosure to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish elements of the same type from one another. For example, without departing from the scope of the disclosure, a first element could be termed a second element, and similarly a second element could be termed a first element. The use of any and all examples or exemplary language ("e.g.", "such as") provided herein is intended merely to better illustrate the embodiments of the present invention and, unless otherwise claimed, does not limit the scope of the present invention.
With reference to Fig. 1, a method for three-dimensional animation and live streaming based on real-time multi-person motion capture according to the present invention includes the following steps:

A. creating a three-dimensional character model for each role in the three-dimensional animation, adding texture files, and setting limb-motion parameters;

B. building and binding a skeleton model for each three-dimensional model, preparing animations for the preset limb motions, and importing them into a graphics engine;

C. extracting the skeleton data of each real motion-capture performer and then configuring the corresponding three-dimensional character model;

D. acquiring the motion information of the motion-capture performers in real time through motion-capture equipment in a motion-capture studio, transmitting the data from the motion-capture equipment to the graphics engine, then rendering and driving the corresponding character models in real time, generating a real-time video, and streaming it live.

The graphics engine may be a 3D game engine.
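For orientation only, the following is a minimal sketch of how steps A to D might be organized in code. It is not the claimed implementation; every name used here (CharacterModel, prepare_characters, and the mocap/engine/stream interfaces) is a hypothetical placeholder.

```python
# Illustrative sketch of the A-D pipeline; all classes, methods and file names
# below are assumptions for illustration, not an actual engine or mocap SDK API.
from dataclasses import dataclass, field

@dataclass
class CharacterModel:
    name: str
    textures: list = field(default_factory=list)     # step A: texture files
    limb_params: dict = field(default_factory=dict)  # step A: limb-motion parameters
    skeleton: dict = field(default_factory=dict)     # step B: bound skeleton model

def prepare_characters(roles):
    """Steps A and B: create, texture and rig one model per role in the animation."""
    models = []
    for role in roles:
        m = CharacterModel(name=role)
        m.textures.append(f"{role}_diffuse.png")                     # placeholder texture
        m.limb_params = {"elbow_max_deg": 150, "knee_max_deg": 160}  # placeholder limits
        m.skeleton = {"root": "hips", "bones": ["spine", "head", "arm_l", "arm_r"]}
        models.append(m)
    return models

def run_live_session(models, mocap, engine, stream):
    """Steps C and D: bind each performer to a model, then capture, render and push."""
    bindings = {pid: models[i] for i, pid in enumerate(mocap.performers())}  # step C
    while stream.is_live():                        # step D main loop
        for pid, pose in mocap.poll().items():     # skeleton data for every performer
            engine.drive(bindings[pid], pose)      # drive the bound character model
        stream.push(engine.render_frame())         # real-time render and live push
```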
As shown in Fig. 2, step A includes the following sub-steps:

S11. producing the three-dimensional character models based on the three-dimensional animation;

S12. configuring and producing the normal smoothing groups of the three-dimensional characters;

S13. adjusting the UV coordinates of the three-dimensional character models;

S14. producing textures according to the models' UV coordinates, producing the corresponding materials according to the textures, and finally checking the character models and archiving the files in the database associated with the graphics engine.
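Purely as an illustration of what sub-steps S11 to S14 produce, the archived result can be pictured as one asset record per character; every field name below is an assumption, not a specific engine's database schema.

```python
# Hypothetical asset record archived at the end of step A (S14); field names
# are illustrative assumptions rather than a real engine's schema.
character_asset = {
    "model_file": "idol_a.fbx",                     # S11: the character mesh
    "normal_smoothing": "per_material_groups",      # S12: normal smoothing groups
    "uv_sets": ["UV0"],                             # S13: adjusted UV coordinates
    "textures": {"diffuse": "idol_a_diffuse.png"},  # S14: textures made from the UVs
    "materials": ["idol_a_skin", "idol_a_cloth"],   # S14: materials made from textures
    "checked": True,                                # S14: verified before archiving
}
```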
As shown in Fig. 3, step B includes the following sub-steps:

S21. producing animation skeletons for the 3D character models in the three-dimensional animation;

S22. binding the skeletons to the respective models;

S23. adjusting the weights and controllers of the models;

S24. producing the corresponding skeleton animations, and then importing them into the 3D game engine and archiving them.
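As an illustrative sketch of what binding a skeleton (S22) and adjusting weights (S23) amounts to, assuming conventional linear-blend skinning, the bone names and the normalization rule below are assumptions rather than a particular tool's API.

```python
# Hypothetical sketch of skeleton binding (S22) and weight adjustment (S23).
# Bone names and the weight-normalization rule follow generic skinning conventions.

def normalize_weights(vertex_weights):
    """Make each vertex's bone weights sum to 1.0, as linear-blend skinning expects."""
    normalized = {}
    for vertex_id, weights in vertex_weights.items():
        total = sum(weights.values())
        normalized[vertex_id] = {bone: w / total for bone, w in weights.items()}
    return normalized

# Example: vertex 0 is influenced mainly by the left upper-arm bone.
raw = {0: {"upperarm_l": 0.8, "spine_03": 0.3}}
print(normalize_weights(raw))  # {0: {'upperarm_l': 0.727..., 'spine_03': 0.272...}}
```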
Step C can serve as a preparation step. When step C is executed, several motion-capture performers enter the motion-capture studio, put on marker-based motion-capture suits and wear helmet-mounted facial-expression capture systems (step S31 shown in Fig. 4). Each performer is then assigned to a corresponding 3D model: the motion-capture software distinguishes the performers by their skeleton size ratios and binds each performer to the corresponding 3D animation model (step S32 shown in Fig. 4). Debugging is then carried out to complete the preparation.
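One way a ratio-based identification (S32) could work is sketched below; the choice of bones, the ratio definition and the tolerance are assumptions for illustration, not the algorithm of any particular motion-capture package.

```python
# Illustrative sketch: distinguishing performers by skeleton size ratios (S32).
# The specific bones, the spine-relative ratios and the tolerance are assumptions.

def skeleton_ratios(bone_lengths):
    """Characterize a skeleton by each bone's length relative to the spine length."""
    spine = bone_lengths["spine"]
    return {bone: length / spine for bone, length in bone_lengths.items()}

def match_performer(live_skeleton, registered_skeletons, tolerance=0.05):
    """Return the registered performer whose proportions best match the live skeleton."""
    live = skeleton_ratios(live_skeleton)
    best_id, best_err = None, float("inf")
    for perf_id, registered in registered_skeletons.items():
        ref = skeleton_ratios(registered)
        err = sum(abs(live[bone] - ref[bone]) for bone in ref) / len(ref)
        if err < best_err:
            best_id, best_err = perf_id, err
    return best_id if best_err <= tolerance else None

registered = {
    "performer_1": {"spine": 0.60, "femur": 0.45, "humerus": 0.32},  # lengths in meters
    "performer_2": {"spine": 0.55, "femur": 0.48, "humerus": 0.30},
}
# A live skeleton close to performer_1's proportions is matched to performer_1.
print(match_performer({"spine": 0.61, "femur": 0.46, "humerus": 0.32}, registered))
```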
As shown in Fig. 4, step D may include the following steps:

S33. carrying out the multi-person motion capture: while the several motion-capture performers give their live performance according to their respective scripts and storyboards, the infrared motion-capture cameras record the performers' movements in real time and transmit them to the motion-capture software;

S34. transmitting the data to the three-dimensional game engine workstation, which completes the real-time rendering;

S35. capturing the rendered picture, transmitting it to the live streaming server, and pushing it to the live streaming platform for real-time broadcasting.
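To illustrate S35 only, rendered frames can be piped into an encoder that publishes an RTMP stream. The sketch below assumes ffmpeg is available on the workstation; the resolution, frame rate, encoder settings and ingest URL are placeholders, and the engine's render_frame()/is_live() interface is hypothetical.

```python
# Illustrative sketch of S35: pushing rendered frames to a live-streaming ingest point.
# Assumes ffmpeg is installed; all stream parameters and URLs are placeholders.
import subprocess

WIDTH, HEIGHT, FPS = 1280, 720, 30
RTMP_URL = "rtmp://live.example.com/app/stream_key"  # placeholder ingest URL

encoder = subprocess.Popen(
    ["ffmpeg",
     "-f", "rawvideo", "-pix_fmt", "rgb24",           # raw RGB frames from the engine
     "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS), "-i", "-",
     "-c:v", "libx264", "-preset", "veryfast",        # low-latency software encode
     "-f", "flv", RTMP_URL],
    stdin=subprocess.PIPE,
)

def push_frames(engine):
    """Feed each rendered frame (WIDTH*HEIGHT*3 bytes of RGB) to the encoder while live."""
    while engine.is_live():                 # hypothetical engine interface
        encoder.stdin.write(engine.render_frame())
```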
The system for three-dimensional animation and live streaming based on real-time multi-person motion capture according to the present invention may include: a computer device implementing the above method; a graphics engine connected to the computer device; motion-capture suits worn by the performers; facial-expression capture devices worn on the performers' heads; and camera equipment and a lighting system for filming the performers. Preferably, an uninterruptible power supply (UPS) may be used to guarantee a stable and continuous supply of power, and high-bandwidth gigabit-class network cards may be used to transmit the large volume of information signals and data.

The technical solution and embodiments of the present invention are described more intuitively below with reference to Fig. 5 to Fig. 7.
As shown in Fig. 5, in the preparation stage, the parameters of the skeleton model 20 of each real performer 10 are matched and mapped to the skeleton model parameters of the corresponding three-dimensional character 40, and the mapping is recorded in the graphics engine 30. The skeleton model parameters include the distances between the limb joints of the human body, the rotation-angle limits of each joint, and so on. Furthermore, the facial-motion data of a performer is captured and converted in real time from the performer's facial skeleton, facial-expression control instructions are generated from the captured facial-motion data, and the facial expression shapes of the corresponding character model are generated by the graphics engine 30. Facial-expression animation transitions are generated by interpolating between the facial expression shapes of the corresponding facial positions of the character model.
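The following sketch illustrates these two ideas, mapping a captured joint rotation onto the character within its recorded limits and transitioning between facial expression shapes; the joint names, limits and the linear interpolation used here are assumptions, not the engine's actual operations.

```python
# Illustrative sketch: retargeting a joint rotation within recorded limits, and
# blending facial expression shapes. Names, limits and the interpolation rule
# are assumptions for illustration only.

JOINT_LIMITS_DEG = {"elbow_l": (0.0, 150.0)}   # recorded per-joint rotation-angle limits

def retarget_joint(joint, performer_angle_deg, scale=1.0):
    """Map a performer's joint angle onto the character, clamped to the model's limit."""
    lo, hi = JOINT_LIMITS_DEG[joint]
    return max(lo, min(hi, performer_angle_deg * scale))

def blend_expressions(shape_from, shape_to, t):
    """Interpolate between two facial expression shapes (dicts of blend-shape weights)."""
    keys = set(shape_from) | set(shape_to)
    return {k: (1 - t) * shape_from.get(k, 0.0) + t * shape_to.get(k, 0.0) for k in keys}

neutral = {"smile": 0.0, "jaw_open": 0.0}
smile = {"smile": 1.0, "jaw_open": 0.2}
print(retarget_joint("elbow_l", 160.0))        # clamped to the 150-degree limit
print(blend_expressions(neutral, smile, 0.5))  # halfway through the expression transition
```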
With continued reference to Fig. 5, a real performer puts on the facial-expression capture device, and the performer's limb movements are then captured through the worn motion-capture suit. The data acquired by the motion-capture suit and the facial-expression capture device is transmitted to the graphics engine 30 (for example, a 3D runtime engine) so as to be associated with and control the three-dimensional character model 40.

Referring to Fig. 6, multiple performers enter the performance area of the motion-capture studio, and each performer completes the preparation according to the embodiment shown in Fig. 5. Multiple motion-capture cameras and the lighting system are arranged around the walls of the motion-capture studio to capture the large limb movements of the multiple performers and to complement the movements identified by the motion-capture suits, thereby improving the accuracy of motion recognition. The performers in the performance area can freely perform preset movements, or perform and interact with the audience in real time according to the audience feedback displayed in the studio. For example, the first performer 11 raises the right hand, the second performer 12 spreads both hands, and the third performer 13 raises the left hand; the motion-capture equipment transmits their motion data to the graphics engine 30 for processing, so that the first performer 11, the second performer 12 and the third performer 13 are rendered in real time by the graphics engine 30 as the corresponding stylized three-dimensional characters 31, 32 and 33, which display the corresponding limb motions and facial expressions, as shown in Fig. 7. The picture shown in Fig. 7 can then be compressed into video and streamed over the network.
It should be understood that the embodiments of the present invention can be implemented or carried out by computer hardware, by a combination of hardware and software, or by computer instructions stored in a non-transitory computer-readable memory. The methods can be implemented in computer programs using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the particular embodiments. Each program can be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, if desired, the program can be implemented in assembly or machine language. In any case, the language can be a compiled or interpreted language. In addition, the program can run on an application-specific integrated circuit programmed for this purpose.

Furthermore, the operations of the processes described herein can be performed in any suitable order, unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) can be executed under the control of one or more computer systems configured with executable instructions, and can be implemented as code (for example, executable instructions, one or more computer programs, or one or more applications) executed jointly on one or more processors, by hardware, or by a combination thereof. The computer program includes a plurality of instructions executable by the one or more processors.

Further, the methods can be implemented on any type of suitable computing platform operably connected thereto, including but not limited to a personal computer, minicomputer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or a platform communicating with a charged-particle tool or other imaging device, and so on. Aspects of the present invention can be implemented as machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into the computing platform, such as a hard disk, an optically readable and/or writable storage medium, RAM or ROM, so that it can be read by a programmable computer; when the storage medium or device is read by the computer, it can be used to configure and operate the computer to perform the processes described herein. Furthermore, the machine-readable code, or portions thereof, can be transmitted over a wired or wireless network. When such media contain instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above, the invention described herein includes these and other different types of non-transitory computer-readable storage media. The present invention also includes the computer itself when it is programmed according to the methods and techniques described herein.

A computer program can be applied to the input data to perform the functions described herein and thereby convert the input data to generate output data that is stored in non-volatile memory. The output information can also be applied to one or more output devices, such as a display. In a preferred embodiment of the present invention, the converted data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on the display.
The above are only preferred embodiments of the present invention, and the present invention is not limited to the above embodiments. Any modification, equivalent replacement, improvement and the like that achieves the technical effects of the present invention by the same means shall, as long as it falls within the spirit and principles of the present invention, be included within the protection scope of the present invention. The technical solutions and/or embodiments within the scope of the present invention may have various modifications and variations.