CN108986189A - Method and system based on real time multi-human motion capture in three-dimensional animation and live streaming - Google Patents

Method and system based on real time multi-human motion capture in three-dimensional animation and live streaming
Download PDF

Info

Publication number
CN108986189A
CN108986189A
Authority
CN
China
Prior art keywords
dynamic
model
performer
animation
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810640630.3A
Other languages
Chinese (zh)
Other versions
CN108986189B (en)
Inventor
强项
芦振华
胡文彬
蒋晓光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Jinshan Shiyou Technology Co ltd
Original Assignee
Zhuhai Xishan Residence Interactive Entertainment Technology Co Ltd
Zhuhai Kingsoft Online Game Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Xishan Residence Interactive Entertainment Technology Co Ltd, Zhuhai Kingsoft Online Game Technology Co Ltd
Priority to CN201810640630.3A (granted as CN108986189B)
Publication of CN108986189A
Application granted
Publication of CN108986189B
Legal status: Active (current)
Anticipated expiration

Links

Classifications

Landscapes

Abstract

The present invention relates to a method for real-time multi-person motion capture and live streaming in three-dimensional animation. The method includes: creating a three-dimensional character model for each role in the animation, adding texture files, and setting limb-action parameters; modeling and binding a skeleton model for each character, preparing animations for the preset limb actions, and importing them into a graphics engine; extracting the skeleton data of each real motion-capture performer and then configuring the corresponding three-dimensional character model; and, inside the capture room, collecting the performers' action information in real time with the capture devices, transferring the captured data to the graphics engine, rendering in real time to drive the corresponding character models, generating real-time video, and live-streaming it. The present invention also correspondingly proposes a computer device, a system, and a computer program for real-time multi-person motion capture and live streaming in three-dimensional animation.

Description

Method and system based on real time multi-human motion capture in three-dimensional animation and live streaming
Technical field
The present invention relates to the field of animation. In particular, it relates to a method and system for real-time multi-person motion capture and live streaming in three-dimensional animation.
Background technique
2017 was a breakout year for virtual idols. When virtual idols are mentioned, the first that comes to mind for most people is Hatsune Miku. Indeed, Hatsune Miku, created more than ten years ago, remains one of the most successful virtual idols worldwide. In recent years, with the rise of China's "2D" (ACG) economy, more and more Chinese companies have set their sights on the new "virtual idol" market and launched virtual idols of their own.
In addition, with the development of the electronic entertainment industry and network transmission technology, live video streaming has become a completely new form of online entertainment, increasingly favored by young audiences. Because of the need for real-time interaction with viewers, most current live streams use real performers who communicate with the audience directly and face to face. To overcome the space and time constraints that real performers face during live streaming, a small number of well-resourced network content providers have begun to stream virtual characters, presented as three-dimensional animation, interacting with viewers in real time, but such streams remain comparatively rare. Traditional virtual-idol broadcasts are almost all pre-recorded; a very few individual productions use motion capture for real-time streaming, but none can achieve three-dimensional animated live streaming with multiple performers.
However, the technical threshold for live streaming with virtual characters in the form of three-dimensional animation is very high, because producing three-dimensional animation is still quite labor-intensive. For general three-dimensional animation, besides creating the relevant scenes and designing the character models, nodes must also be set for the characters' movements. To achieve real-time interaction between a virtual broadcaster and the audience, the movements performed by a real performer must be captured and mapped to the corresponding nodes, so that the character model performs the corresponding movements in real time.
For the needs of live streaming, however, current systems that capture a performer's movements in real time, generate the corresponding character-model actions, and fuse them into the three-dimensional animation can only capture a single performer. This greatly limits the content and format of three-dimensional animated live streams.
Summary of the invention
The present invention provides a method and system for real-time multi-person motion capture and live streaming in three-dimensional animation. It improves the way performers' movements are acquired, achieving the technical effect of capturing the movements of several performers in real time and generating the corresponding three-dimensional animation.
The first aspect of the technical solution of the present invention is a method for real-time multi-person motion capture and live streaming in three-dimensional animation. The method includes: A. creating a three-dimensional character model for each role in the animation, adding texture files, and setting limb-action parameters; B. modeling and binding a skeleton model for each character, preparing animations for the preset limb actions, and importing them into a graphics engine; C. extracting the skeleton data of each real motion-capture performer, then configuring the corresponding three-dimensional character model; D. inside the capture room, collecting the performers' action information in real time with the capture devices, transferring the captured data to the graphics engine, rendering in real time to drive the corresponding character models, generating real-time video, and live-streaming it.
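The four steps A-D describe a capture-to-broadcast data flow. As a rough, non-authoritative illustration only (the patent names no concrete APIs; every class and function name below is a hypothetical stand-in), the flow might be sketched as:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the patent's four-step pipeline (A-D).
@dataclass
class CharacterModel:          # Step A: one 3D model per role
    name: str
    textures: list = field(default_factory=list)
    skeleton: dict = field(default_factory=dict)   # Step B: bound skeleton

@dataclass
class Performer:               # Step C: a real mocap performer
    name: str
    model: CharacterModel      # bound to a respective character model

def capture_frame(performers):
    # Step D (capture): stand-in for the optical capture devices;
    # returns one pose per performer per tick.
    return {p.name: {"root": (0.0, 0.0, 0.0)} for p in performers}

def render_and_stream(performers, frames=3):
    # Step D (render + broadcast): drive each bound model and emit one
    # "video frame" per tick; a real system would push these to a server.
    broadcast = []
    for _ in range(frames):
        poses = capture_frame(performers)
        frame = {p.model.name: poses[p.name] for p in performers}
        broadcast.append(frame)
    return broadcast

performers = [Performer("actor1", CharacterModel("idolA")),
              Performer("actor2", CharacterModel("idolB"))]
video = render_and_stream(performers)
print(len(video), sorted(video[0]))   # 3 ['idolA', 'idolB']
```

The point of the sketch is only the routing: multiple performers are captured in the same tick, and each one drives its own pre-bound character model.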
Further, step A of the method includes: producing the three-dimensional character models for the animation; configuring and producing each character's smoothing groups and normals, and adjusting the UV coordinates of the model; making textures according to the model's UV coordinates and corresponding materials according to the textures; and finally checking each character model and archiving the files in the database associated with the graphics engine.
Further, step B of the method includes: producing animation skeletons based on the three-dimensional character models and binding each skeleton to its model; producing the corresponding animations and adjusting the models' weights and controllers; and then importing everything into the database associated with the graphics engine and archiving it.
Further, step C of the method includes: several capture performers entering the capture room, putting on marker-based capture suits and helmet-mounted facial-expression capture systems, with each performer corresponding to a respective three-dimensional character model; the capture software distinguishing the performers by their skeleton-size ratios and binding each performer to the respective three-dimensional animation model; then debugging, completing the preparation.
Further, step D of the method includes: during multi-person capture, infrared capture cameras recording the performers' movements in real time and transferring them to the capture software; transferring the processed action data to the graphics-engine workstation to complete real-time rendering; and collecting the rendered picture and transferring it to a live-streaming server, which pushes it to the streaming platform for real-time broadcast.
Further, the method may also include: capturing the performers' limb actions and/or facial expressions; converting them into limb-action data, facial-action data, and character audio-mix data associated with each role's character settings; then associating these with the corresponding character models in the graphics engine; and configuring the performers' limb actions and/or facial expressions so that they synchronize in real time with the limb actions and/or facial expressions of the animated character models.
Further, the method may also include: capturing and converting a performer's facial-action data in real time according to the facial skeleton; generating facial-expression control instructions from the captured data and producing the corresponding character model's facial-expression shapes through the graphics engine; and computing between the expression shapes of the corresponding facial positions to generate facial-expression animation transitions.
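The patent says transitions are generated by computing between a character's facial-expression shapes, without naming a concrete technique. One standard realization — offered here purely as an assumed illustration — is linear interpolation of per-vertex offsets between two expression shapes, as done with blend shapes:

```python
# Hypothetical sketch: transitions between facial-expression shapes via
# linear interpolation of per-vertex offsets (blend-shape style).
# The shapes and offsets below are made-up illustrative values.

def blend(neutral, target, weight):
    # weight 0.0 -> neutral face, 1.0 -> full target expression
    return [n + weight * (t - n) for n, t in zip(neutral, target)]

neutral_mouth = [0.0, 0.0, 0.0]   # vertex offsets, neutral mouth
smile_mouth   = [0.2, 0.4, 0.2]   # vertex offsets, full smile

# A short transition driven by a captured expression intensity:
transition = [blend(neutral_mouth, smile_mouth, w) for w in (0.0, 0.5, 1.0)]
print(transition[1])   # halfway pose: [0.1, 0.2, 0.1]
```

In a live system the weight would come from the helmet camera's per-frame expression estimate, so the on-screen face tracks the performer continuously rather than snapping between poses.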
The second aspect of the technical solution of the present invention is a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When executing the program, the processor performs the following steps: A. creating a three-dimensional character model for each role in the animation, adding texture files, and setting limb-action parameters; B. modeling and binding a skeleton model for each character, preparing animations for the preset limb actions, and importing them into a graphics engine; C. extracting the skeleton data of each real motion-capture performer, then configuring the corresponding three-dimensional character model; D. inside the capture room, collecting the performers' action information in real time with the capture devices, transferring the captured data to the graphics engine, rendering in real time to drive the corresponding character models, generating real-time video, and live-streaming it.
Further, when the processor executes step A, it: produces the three-dimensional character models for the animation; configures and produces each character's smoothing groups and normals, and adjusts the model's UV coordinates; makes textures according to the model's UVs and corresponding materials according to the textures; and finally checks each character model and archives the files in the database associated with the graphics engine.
Further, when the processor executes step B, it: produces animation skeletons based on the three-dimensional character models and binds each skeleton to its model; produces the corresponding animations and adjusts the models' weights and controllers; and then imports everything into the database associated with the graphics engine and archives it.
Further, when the processor executes step C: several capture performers enter the capture room, put on marker-based capture suits, and wear helmet-mounted facial-expression capture systems, with each performer corresponding to a respective three-dimensional character model; the capture software distinguishes the performers by their skeleton-size ratios and binds each performer to the respective three-dimensional animation model; then debugging follows, completing the preparation.
Further, when the processor executes step D: during multi-person capture, infrared capture cameras record the performers' movements in real time and transfer them to the capture software; the processed action data are transferred to the graphics-engine workstation, which completes the real-time rendering; and the rendered picture is collected and transferred to the live-streaming server, which pushes it to the streaming platform for real-time broadcast.
Further, the processor may also perform the following steps: capturing the performers' limb actions and/or facial expressions; converting them into limb-action data, facial-action data, and character audio-mix data associated with each role's character settings; then associating these with the corresponding character models in the graphics engine; and configuring the performers' limb actions and/or facial expressions so that they synchronize in real time with the limb actions and/or facial expressions of the animated character models.
Further, the processor may also perform the following steps: capturing and converting a performer's facial-action data in real time according to the facial skeleton; generating facial-expression control instructions from the captured data and producing the corresponding character model's facial-expression shapes through the graphics engine; and computing between the expression shapes of the corresponding facial positions to generate facial-expression animation transitions.
The third aspect of the technical solution of the present invention is a system for real-time multi-person motion capture and live streaming in three-dimensional animation, including: any one of the foregoing computer devices; a graphics engine connected to the computer device; motion-capture suits worn by the performers; facial-expression capture devices worn on the performers' heads; and camera equipment and a lighting system for shooting the performers.
The fourth aspect of the technical solution of the present invention is a computer-readable storage medium storing a computer program which, when executed by a processor, performs the following steps: A. creating a three-dimensional character model for each role in the animation, adding texture files, and setting limb-action parameters; B. modeling and binding a skeleton model for each character, preparing animations for the preset limb actions, and importing them into a graphics engine; C. extracting the skeleton data of each real motion-capture performer, then configuring the corresponding three-dimensional character model; D. inside the capture room, collecting the performers' action information in real time with the capture devices, transferring the captured data to the graphics engine, rendering in real time to drive the corresponding character models, generating real-time video, and live-streaming it.
Further, when the computer program is executed by the processor to perform step A, it: produces the three-dimensional character models for the animation; configures and produces each character's smoothing groups and normals, and adjusts the model's UV coordinates; makes textures according to the model's UVs and corresponding materials according to the textures; and finally checks each character model and archives the files in the database associated with the graphics engine.
Further, when the computer program is executed by the processor to perform step B, it: produces animation skeletons based on the three-dimensional character models and binds each skeleton to its model; produces the corresponding animations and adjusts the models' weights and controllers; and then imports everything into the database associated with the graphics engine and archives it.
Further, when the computer program is executed by the processor to perform step C: several capture performers enter the capture room, put on marker-based capture suits, and wear helmet-mounted facial-expression capture systems, with each performer corresponding to a respective three-dimensional character model; the capture software distinguishes the performers by their skeleton-size ratios and binds each performer to the respective three-dimensional animation model; then debugging follows, completing the preparation.
Further, when the computer program is executed by the processor to perform step D: during multi-person capture, infrared capture cameras record the performers' movements in real time and transfer them to the capture software; the processed action data are transferred to the graphics-engine workstation, which completes the real-time rendering; and the rendered picture is collected and transferred to the live-streaming server, which pushes it to the streaming platform for real-time broadcast.
Further, the computer program, when executed by the processor, may also perform the following steps: capturing the performers' limb actions and/or facial expressions; converting them into limb-action data, facial-action data, and character audio-mix data associated with each role's character settings; then associating these with the corresponding character models in the graphics engine; and configuring the performers' limb actions and/or facial expressions so that they synchronize in real time with the limb actions and/or facial expressions of the animated character models.
Further, the computer program, when executed by the processor, may also perform the following steps: capturing and converting a performer's facial-action data in real time according to the facial skeleton; generating facial-expression control instructions from the captured data and producing the corresponding character model's facial-expression shapes through the graphics engine; and computing between the expression shapes of the corresponding facial positions to generate facial-expression animation transitions.
The beneficial effect of the invention is that, by pre-creating multiple three-dimensional character models and preprocessing each of them, the action information of several capture performers can be captured simultaneously in real time, reducing the cost of generating and live-streaming the corresponding three-dimensional animation.
Detailed description of the invention
Fig. 1 shows an overview flowchart of the method according to the present invention;
Fig. 2 shows a flowchart of the sub-steps of the first embodiment of the present invention;
Fig. 3 shows a flowchart of the sub-steps of the second embodiment of the present invention;
Fig. 4 shows a flowchart of the sub-steps of the third embodiment of the present invention;
Fig. 5 shows a flowchart of the sub-steps of the fourth embodiment of the present invention;
Fig. 6 shows a schematic diagram of the data interaction between the capture performers' action information and the three-dimensional character models;
Fig. 7 shows a schematic diagram of a usage scenario of the system according to the present invention.
Specific embodiment
The concept, specific structure, and resulting technical effects of the present invention are described clearly and completely below with reference to the embodiments and the accompanying drawings, so that the purpose, solutions, and effects of the invention can be fully understood.
It should be noted that, unless otherwise specified, when a feature is said to be "fixed" or "connected" to another feature, it may be directly fixed or connected to that feature, or indirectly fixed or connected through an intermediary. In addition, descriptions such as up, down, left, and right used in this disclosure refer only to the relative positions of the components of the disclosure in the drawings. The singular forms "a," "the," and "said" used in this disclosure are intended to include the plural forms as well, unless the context clearly indicates otherwise. Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the art. The terms used in this description are intended only to describe specific embodiments, not to limit the invention. The term "and/or" as used herein includes any combination of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, and so on may be used in this disclosure to describe various elements, these elements should not be limited by those terms, which serve only to distinguish elements of the same type from one another. For example, without departing from the scope of the disclosure, a first element could be called a second element, and similarly a second element could be called a first element. Any examples or exemplary language provided herein ("such as," "for example") are intended only to better illustrate the embodiments of the invention and, unless otherwise required by context, do not limit the scope of the invention.
With reference to Fig. 1, a method according to the present invention for real-time multi-person motion capture and live streaming in three-dimensional animation includes the following steps:
A. creating a three-dimensional character model for each role in the animation, adding texture files, and setting limb-action parameters;
B. modeling and binding a skeleton model for each character, preparing animations for the preset limb actions, and importing them into a graphics engine;
C. extracting the skeleton data of each real motion-capture performer, then configuring the corresponding three-dimensional character model;
D. inside the capture room, collecting the performers' action information in real time with the capture devices, transferring the captured data to the graphics engine, rendering in real time to drive the corresponding character models, generating real-time video, and live-streaming it.
Here, the graphics engine may be a 3D game engine.
As shown in Fig. 2, step A further includes:
S11. producing the three-dimensional character models for the animation;
S12. configuring and producing each character's smoothing groups and normals;
S13. adjusting the UV coordinates of each character model;
S14. making textures according to the model's UVs and corresponding materials according to the textures, then checking each character model and archiving the files in the database associated with the graphics engine.
As shown in Fig. 3, step B includes the steps of:
S21. producing animation skeletons for the 3D character models in the animation;
S22. binding each skeleton to its respective model;
S23. adjusting each model's weights and controllers;
S24. producing the corresponding skeleton animations, then importing them into the 3D game engine and archiving them.
Step C can serve as the preparation step. When step C is executed, several capture performers enter the capture room, put on marker-based capture suits, and wear helmet-mounted facial-expression capture systems (step S31 in Fig. 4). Each performer then corresponds to a respective 3D model: the capture software distinguishes the performers by their skeleton-size ratios and binds each performer to the respective 3D animation model (step S32 in Fig. 4). Debugging follows, completing the preparation.
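The text says only that the capture software tells performers apart "by skeleton-size ratio," without giving an algorithm. One hedged way to picture step S32 — the ratio features and nearest-match rule below are illustrative assumptions, not the patent's method — is to compare an incoming skeleton's bone-length ratios against each registered performer's profile:

```python
# Hypothetical sketch: identify which registered performer an incoming
# skeleton belongs to by comparing bone-length ratios. The bone names,
# lengths, and nearest-match rule are illustrative assumptions.

def ratios(bones):
    # normalize bone lengths by the spine so overall scale cancels out
    spine = bones["spine"]
    return {k: v / spine for k, v in bones.items()}

def identify(measured, profiles):
    def dist(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a)
    m = ratios(measured)
    return min(profiles, key=lambda name: dist(m, ratios(profiles[name])))

profiles = {
    "actor1": {"spine": 60.0, "arm": 55.0, "leg": 80.0},
    "actor2": {"spine": 50.0, "arm": 52.0, "leg": 78.0},
}
# A noisy measurement of actor2 at a slightly different absolute scale:
measured = {"spine": 51.0, "arm": 53.5, "leg": 80.0}
print(identify(measured, profiles))   # actor2
```

Using ratios rather than raw lengths is what makes the match robust to the absolute-scale differences a camera sees when performers stand at different distances.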
As shown in Fig. 4, step D may include the following steps:
S33. performing multi-person capture: while the several capture performers give a live performance according to their respective scripts and storyboards, infrared capture cameras record the performers' movements in real time and transfer them to the capture software;
S34. transferring the data to the 3D game-engine workstation to complete real-time rendering;
S35. collecting the rendered picture, transferring it to the live-streaming server, and pushing it to the streaming platform for real-time broadcast.
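Steps S33-S35 form a three-stage chain: capture software, engine workstation, streaming server. As a minimal sketch under stated assumptions (the stage interfaces and frame format below are invented for illustration; the patent specifies none), the chain might look like:

```python
# Hypothetical sketch of S33-S35 as three pipeline stages
# (capture -> render -> push). Stage signatures are assumptions.

def capture_stage(raw_markers):
    # S33: infrared cameras -> capture software (structure the marker data)
    return [{"performer": m["id"], "pose": m["pose"]} for m in raw_markers]

def render_stage(action_data):
    # S34: engine workstation renders one composite picture per tick
    return {"picture": [d["performer"] for d in action_data]}

def push_stage(picture, server_log):
    # S35: the collected picture goes to the live server, which pushes
    # it on to the streaming platform; here we just log it.
    server_log.append(picture)
    return len(server_log)

raw = [{"id": "actor1", "pose": "raise-right-hand"},
       {"id": "actor2", "pose": "spread-both-hands"}]
log = []
frames_pushed = push_stage(render_stage(capture_stage(raw)), log)
print(frames_pushed, log[0]["picture"])   # 1 ['actor1', 'actor2']
```

Keeping the stages separate mirrors the patent's hardware split: the capture software, the engine workstation, and the streaming server are distinct machines, so each stage only hands a serializable frame to the next.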
A system according to the present invention for real-time multi-person motion capture and live streaming in three-dimensional animation may include: a computer device implementing the above method; a graphics engine connected to the computer device; motion-capture suits worn by the performers; facial-expression capture devices worn on the performers' heads; and camera equipment and a lighting system for shooting the performers. Preferably, a UPS safety power supply can be used to guarantee a stable, uninterrupted current, and high-bandwidth gigabit-class network cards can be used to transmit the large volumes of information signals and data.
The technical solution and embodiments of the present invention are described more intuitively below with reference to Figs. 5 to 7.
As shown in Fig. 5, in the preparation stage the parameters of the skeleton model 20 of each real performer 10 are matched and mapped to the skeleton-model parameters of the corresponding three-dimensional character 40, and recorded in the graphics engine 30. The skeleton-model parameters include the distances between the body's limb-action joints, the rotation-angle limits of each joint, and so on. Further, the performer's facial-action data are captured and converted in real time according to the facial skeleton; the captured facial-action data are turned into facial-expression control instructions, and the graphics engine 30 generates the corresponding character model's facial-expression shapes. Computing between the expression shapes of the corresponding facial positions generates the facial-expression animation transitions.
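The mapping just described includes each joint's rotation-angle limits on the character side. A minimal sketch of that part, assuming made-up joint names and limit values (the patent gives no numbers), is to clamp each retargeted rotation to the character rig's limits:

```python
# Hypothetical sketch of one piece of the Fig. 5 preparation mapping:
# a performer's joint rotation is applied to the character's skeleton
# but clamped to the character's recorded rotation-angle limits.
# Joint names and limit values are illustrative assumptions.

character_limits = {           # per-joint rotation limits, in degrees
    "elbow": (0.0, 150.0),
    "knee":  (0.0, 135.0),
}

def retarget(joint, performer_angle):
    lo, hi = character_limits[joint]
    # clamp so the character model never bends beyond its rig's limits,
    # even if the tracked human (or a tracking glitch) exceeds them
    return max(lo, min(hi, performer_angle))

print(retarget("elbow", 160.0))   # clamped to 150.0
print(retarget("knee", 90.0))     # within limits: 90.0
```

Recording such limits per character during preparation is what lets the engine drive stylized models safely from human-proportioned motion at runtime.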
With continued reference to Fig. 5, a real performer puts on the facial-expression capture device and then has the performer's limb movements captured by the worn motion-capture suit. The data collected by the motion-capture suit and the facial-expression capture device are transferred to the graphics engine 30 (for example a 3D runtime engine), which associates them with and controls the three-dimensional character model 40.
Referring to Fig. 6, multiple performers enter the performance area of the capture room, each having completed the preparation shown in the Fig. 5 embodiment. Multiple capture cameras and a lighting system are arranged around the walls of the capture room to capture the performers' large limb actions and to work together with the recognition performed by the capture suits, improving action-recognition precision. The performers in the performance area can freely perform preset movements, or perform in real time in interaction with the audience according to a display in the capture room. For example, the first performer 11 raises the right hand, the second performer 12 spreads both hands, and the third performer 13 raises the left hand; the capture devices transfer their action data to the graphics engine 30 for processing, so that the first performer 11, the second performer 12, and the third performer 13 are imaged in real time by the graphics engine 30 as the corresponding cute ("chibi"-style) three-dimensional characters 31, 32, and 33, which display the corresponding limb actions and facial expressions, as shown in Fig. 7. The picture of Fig. 7 can then be compressed into video and broadcast over the network.
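The Fig. 6 scenario reduces to a per-frame routing table: each captured performer's action drives exactly the character bound to that performer. A tiny sketch of that routing, using the figure's numbering and the actions named in the text (the dictionary representation is an assumption):

```python
# Hypothetical sketch of the Fig. 6 routing: each performer's captured
# action drives the character model bound to that performer. The
# performer/character numbers follow the figure; the actions are the
# examples given in the text.

binding = {"performer11": "character31",
           "performer12": "character32",
           "performer13": "character33"}

captured = {"performer11": "raise right hand",
            "performer12": "spread both hands",
            "performer13": "raise left hand"}

# One composited frame: every bound character gets its performer's action.
frame = {binding[p]: action for p, action in captured.items()}
print(frame["character32"])   # spread both hands
```

Because the binding was fixed during the step C preparation, the runtime loop never has to re-identify performers per frame, which is what makes simultaneous multi-person driving feasible in real time.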
It should be understood that embodiments of the present invention can be implemented or carried out by computer hardware, by a combination of hardware and software, or by computer instructions stored in non-transitory computer-readable memory. The methods can be implemented using standard programming techniques, including in a computer program on a non-transitory computer-readable storage medium configured with the program, where the storage medium so configured causes a computer to operate in a specific, predefined manner according to the methods and drawings described in the particular embodiments. Each program can be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system, or, if desired, in assembly or machine language; in any case, the language can be compiled or interpreted. Furthermore, the program can be run on an application-specific integrated circuit programmed for this purpose.
In addition, the operations of the processes described herein can be performed in any suitable order unless otherwise indicated herein or clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) can be performed under the control of one or more computer systems configured with executable instructions, and can be implemented as code (for example, executable instructions, one or more computer programs, or one or more applications) executing jointly on one or more processors, by hardware, or by combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the methods can be implemented on any suitable type of operatively coupled computing platform, including but not limited to a personal computer, minicomputer, mainframe, workstation, networked or distributed computing environment, or a separate or integrated computer platform, or one communicating with a charged-particle tool or other imaging device. Aspects of the present invention can be implemented as machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into the computing platform, such as a hard disk, an optically read and/or written storage medium, RAM, or ROM, so that it can be read by a programmable computer; when read by the computer, the storage medium or device can be used to configure and operate the computer to perform the processes described herein. In addition, the machine-readable code, or portions thereof, can be transmitted over a wired or wireless network. When such media contain instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above, the invention described herein includes these and other different types of non-transitory computer-readable storage media. When programmed according to the methods and techniques of the present invention, the invention also includes the computer itself.
A computer program can be applied to input data to perform the functions described herein, thereby converting the input data to generate output data that is stored to non-volatile memory. The output information can also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the converted data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on the display.
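As a minimal illustrative sketch only (the function and file names below are hypothetical and not taken from the patent), the general pattern just described, a program that converts input data, persists the generated output to non-volatile storage, and hands the result onward for display, might look like this for motion-capture-style input:

```python
# Hypothetical sketch: input data (mock motion-capture frames with joint
# positions in centimetres) is converted, the generated output data is
# stored to non-volatile storage, and the converted result would then be
# supplied to an output device such as a display.
import json
import tempfile
from pathlib import Path

def convert_frame(frame):
    """Convert raw joint positions from centimetres to metres."""
    return {joint: [v / 100.0 for v in xyz] for joint, xyz in frame.items()}

def process(frames, out_path):
    converted = [convert_frame(f) for f in frames]
    Path(out_path).write_text(json.dumps(converted))  # persist output data
    return converted

frames = [{"hip": [0.0, 90.0, 0.0]}, {"hip": [5.0, 90.0, 2.0]}]
out = process(frames, Path(tempfile.gettempdir()) / "mocap_out.json")
print(out[1]["hip"])  # prints [0.05, 0.9, 0.02]
```

The conversion step is a stand-in for whatever function the claimed method performs; the point is only the input-to-output-to-storage flow the paragraph describes.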
The above are only preferred embodiments of the present invention, and the invention is not limited to the above embodiments. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention, as long as it achieves the technical effect of the invention by the same means, shall fall within the protection scope of the present invention. The technical solutions and/or embodiments within the scope of the invention may be subject to a variety of different modifications and variations.

Claims (10)

Application: CN201810640630.3A
Priority date: 2018-06-21 | Filing date: 2018-06-21
Title: Method and system for capturing and live broadcasting of real-time multi-person actions based on three-dimensional animation
Status: Active | Granted publication: CN108986189B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810640630.3A | 2018-06-21 | 2018-06-21 | Method and system for capturing and live broadcasting of real-time multi-person actions based on three-dimensional animation


Publications (2)

Publication Number | Publication Date
CN108986189A | 2018-12-11
CN108986189B | 2023-12-19

Family

ID=64541571

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810640630.3A (Active; granted as CN108986189B) | Method and system for capturing and live broadcasting of real-time multi-person actions based on three-dimensional animation | 2018-06-21 | 2018-06-21

Country Status (1)

Country | Link
CN (1) | CN108986189B (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109753151A (en)* | 2018-12-19 | 2019-05-14 | 武汉西山艺创文化有限公司 | Motion capture method and system based on KINCET and facial camera
CN109785415A (en)* | 2018-12-18 | 2019-05-21 | 武汉西山艺创文化有限公司 | A kind of movement acquisition system and its method based on ectoskeleton technology
CN109816773A (en)* | 2018-12-29 | 2019-05-28 | 深圳市瑞立视多媒体科技有限公司 | A driving method, plug-in and terminal device for a skeleton model of a virtual character
CN110070065A (en)* | 2019-04-30 | 2019-07-30 | 李冠津 | Sign language system and communication method based on vision and speech intelligence
CN110225400A (en)* | 2019-07-08 | 2019-09-10 | 北京字节跳动网络技术有限公司 | A kind of motion capture method, device, mobile terminal and storage medium
CN110503707A (en)* | 2019-07-31 | 2019-11-26 | 北京毛毛虫森林文化科技有限公司 | Real-person motion capture real-time animation system and method
CN110636315A (en)* | 2019-08-19 | 2019-12-31 | 北京达佳互联信息技术有限公司 | Multi-user virtual live broadcast method and device, electronic equipment and storage medium
CN111179389A (en)* | 2019-12-26 | 2020-05-19 | 武汉西山艺创文化有限公司 | Three-dimensional real-time calculation animation production device and method
CN111179392A (en)* | 2019-12-19 | 2020-05-19 | 武汉西山艺创文化有限公司 | Virtual idol comprehensive live broadcast method and system based on 5G communication
CN111292427A (en)* | 2020-03-06 | 2020-06-16 | 腾讯科技(深圳)有限公司 | Bone displacement information acquisition method, device, equipment and storage medium
CN111325818A (en)* | 2020-02-10 | 2020-06-23 | 腾讯科技(深圳)有限公司 | Three-dimensional animation generation method and device, storage medium and computer equipment
CN111530088A (en)* | 2020-04-17 | 2020-08-14 | 完美世界(重庆)互动科技有限公司 | A method and device for generating real-time expression pictures of game characters
CN111696183A (en)* | 2020-05-09 | 2020-09-22 | 北京农业信息技术研究中心 | Projection interaction method and system and electronic equipment
CN112003998A (en)* | 2020-08-05 | 2020-11-27 | 上海视觉艺术学院 | VAS virtual director system
CN112819932A (en)* | 2021-02-24 | 2021-05-18 | 上海莉莉丝网络科技有限公司 | Method and system for manufacturing three-dimensional digital content and computer readable storage medium
CN113473159A (en)* | 2020-03-11 | 2021-10-01 | 广州虎牙科技有限公司 | Digital human live broadcast method and device, live broadcast management equipment and readable storage medium
CN114170357A (en)* | 2021-12-17 | 2022-03-11 | 上海米哈游海渊城科技有限公司 | An image processing method, device, medium and electronic device based on data acquisition
CN114187395A (en)* | 2021-12-17 | 2022-03-15 | 上海米哈游海渊城科技有限公司 | An image processing method, device, medium and device based on real-time computing
CN117876549A (en)* | 2024-02-02 | 2024-04-12 | 广州一千零一动漫有限公司 | Animation generation method and system based on 3D character model and motion capture
CN118172451A (en)* | 2024-02-05 | 2024-06-11 | 深圳萌想文化传播有限公司 | Interactive three-dimensional animation generation method and system
CN118212391A (en)* | 2024-05-20 | 2024-06-18 | 云图数字视觉科技(杭州)有限公司 | Dynamic capturing equipment for computer 3D modeling
CN119941934A (en)* | 2025-04-08 | 2025-05-06 | 长春大学 | Intelligent animation modeling method and system based on three-dimensional technology
CN120147484A (en)* | 2025-05-15 | 2025-06-13 | 西安宏源视讯设备有限责任公司 | A method and device for real-time generation of cartoon actions

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR20000050029A (en)* | 2000-05-12 | 2000-08-05 | 박기봉 | Moving picture producing system capable of automatically directing by virtual director and live broadcast method using the apparatus
CN103150016A (en)* | 2013-02-20 | 2013-06-12 | 兰州交通大学 | Multi-person motion capture system fusing ultra wide band positioning technology with inertia sensing technology
CN103324905A (en)* | 2012-03-21 | 2013-09-25 | 天津生态城动漫园投资开发有限公司 | Next-generation virtual photostudio facial capture system
US8854376B1 (en)* | 2009-07-30 | 2014-10-07 | Lucasfilm Entertainment Company Ltd. | Generating animation from actor performance
US20170238859A1 (en)* | 2010-06-07 | 2017-08-24 | Affectiva, Inc. | Mental state data tagging and mood analysis for data collected from multiple sources
CN107277599A (en)* | 2017-05-31 | 2017-10-20 | 珠海金山网络游戏科技有限公司 | Virtual reality live broadcasting method, device and system
CN107274466A (en)* | 2017-05-31 | 2017-10-20 | 珠海金山网络游戏科技有限公司 | Method, device and system for real-time dual capture
CN107798726A (en)* | 2017-11-14 | 2018-03-13 | 杭州玉鸟科技有限公司 | Preparation method and device of three-dimensional animation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CIRLEPIG: "解谜阿凡达CG技术" (Decoding the CG technology of Avatar), 《HTTPS://BLOG.CSDN.NET/CIRCLEPIG/ARTICLE/DETAILS/8278554》, 10 December 2012 (2012-12-10), pages 2-5 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109785415A (en)* | 2018-12-18 | 2019-05-21 | 武汉西山艺创文化有限公司 | A kind of movement acquisition system and its method based on ectoskeleton technology
CN109753151A (en)* | 2018-12-19 | 2019-05-14 | 武汉西山艺创文化有限公司 | Motion capture method and system based on KINCET and facial camera
CN109753151B (en)* | 2018-12-19 | 2022-05-24 | 武汉西山艺创文化有限公司 | Motion capture method and system based on KINCET and facial camera
CN109816773A (en)* | 2018-12-29 | 2019-05-28 | 深圳市瑞立视多媒体科技有限公司 | A driving method, plug-in and terminal device for a skeleton model of a virtual character
CN110070065A (en)* | 2019-04-30 | 2019-07-30 | 李冠津 | Sign language system and communication method based on vision and speech intelligence
CN110225400B (en)* | 2019-07-08 | 2022-03-04 | 北京字节跳动网络技术有限公司 | Motion capture method and device, mobile terminal and storage medium
CN110225400A (en)* | 2019-07-08 | 2019-09-10 | 北京字节跳动网络技术有限公司 | A kind of motion capture method, device, mobile terminal and storage medium
CN110503707A (en)* | 2019-07-31 | 2019-11-26 | 北京毛毛虫森林文化科技有限公司 | Real-person motion capture real-time animation system and method
CN110636315A (en)* | 2019-08-19 | 2019-12-31 | 北京达佳互联信息技术有限公司 | Multi-user virtual live broadcast method and device, electronic equipment and storage medium
CN111179392A (en)* | 2019-12-19 | 2020-05-19 | 武汉西山艺创文化有限公司 | Virtual idol comprehensive live broadcast method and system based on 5G communication
CN111179389A (en)* | 2019-12-26 | 2020-05-19 | 武汉西山艺创文化有限公司 | Three-dimensional real-time calculation animation production device and method
CN111179389B (en)* | 2019-12-26 | 2024-04-19 | 武汉西山艺创文化有限公司 | Three-dimensional instant calculation animation production device and method
CN111325818A (en)* | 2020-02-10 | 2020-06-23 | 腾讯科技(深圳)有限公司 | Three-dimensional animation generation method and device, storage medium and computer equipment
CN111292427B (en)* | 2020-03-06 | 2021-01-01 | 腾讯科技(深圳)有限公司 | Bone displacement information acquisition method, device, equipment and storage medium
CN111292427A (en)* | 2020-03-06 | 2020-06-16 | 腾讯科技(深圳)有限公司 | Bone displacement information acquisition method, device, equipment and storage medium
CN113473159B (en)* | 2020-03-11 | 2023-08-18 | 广州虎牙科技有限公司 | Digital person live broadcast method and device, live broadcast management equipment and readable storage medium
CN113473159A (en)* | 2020-03-11 | 2021-10-01 | 广州虎牙科技有限公司 | Digital human live broadcast method and device, live broadcast management equipment and readable storage medium
CN111530088A (en)* | 2020-04-17 | 2020-08-14 | 完美世界(重庆)互动科技有限公司 | A method and device for generating real-time expression pictures of game characters
CN111530088B (en)* | 2020-04-17 | 2022-04-22 | 完美世界(重庆)互动科技有限公司 | Method and device for generating real-time expression picture of game role
CN111696183A (en)* | 2020-05-09 | 2020-09-22 | 北京农业信息技术研究中心 | Projection interaction method and system and electronic equipment
CN111696183B (en)* | 2020-05-09 | 2023-12-05 | 北京农业信息技术研究中心 | Projection interaction method and system and electronic equipment
CN112003998A (en)* | 2020-08-05 | 2020-11-27 | 上海视觉艺术学院 | VAS virtual director system
CN112819932A (en)* | 2021-02-24 | 2021-05-18 | 上海莉莉丝网络科技有限公司 | Method and system for manufacturing three-dimensional digital content and computer readable storage medium
CN114187395A (en)* | 2021-12-17 | 2022-03-15 | 上海米哈游海渊城科技有限公司 | An image processing method, device, medium and device based on real-time computing
CN114170357A (en)* | 2021-12-17 | 2022-03-11 | 上海米哈游海渊城科技有限公司 | An image processing method, device, medium and electronic device based on data acquisition
CN117876549A (en)* | 2024-02-02 | 2024-04-12 | 广州一千零一动漫有限公司 | Animation generation method and system based on 3D character model and motion capture
CN118172451A (en)* | 2024-02-05 | 2024-06-11 | 深圳萌想文化传播有限公司 | Interactive three-dimensional animation generation method and system
CN118212391A (en)* | 2024-05-20 | 2024-06-18 | 云图数字视觉科技(杭州)有限公司 | Dynamic capturing equipment for computer 3D modeling
CN119941934A (en)* | 2025-04-08 | 2025-05-06 | 长春大学 | Intelligent animation modeling method and system based on three-dimensional technology
CN120147484A (en)* | 2025-05-15 | 2025-06-13 | 西安宏源视讯设备有限责任公司 | A method and device for real-time generation of cartoon actions

Also Published As

Publication number | Publication date
CN108986189B (en) | 2023-12-19

Similar Documents

Publication | Title
CN108986189A (en) | Method and system based on real time multi-human motion capture in three-dimensional animation and live streaming
CN108986190A (en) | Method and system of a virtual newscaster based on a humanoid non-real-person character in three-dimensional animation
CN109145788B (en) | Video-based attitude data capturing method and system
KR101713772B1 (en) | Apparatus and method for pre-visualization image
CN107274464A (en) | Method, device and system for real-time interactive 3D animation
CN107274466A (en) | Method, device and system for real-time dual capture
CN113822970B (en) | Live broadcast control method and device, storage medium and electronic equipment
CN108961367A (en) | Method, system and device of character image deformation in three-dimensional idol live streaming
CN109829976A (en) | Real-time performance method and system based on holographic technology
CN105931283B (en) | A kind of 3-dimensional digital content intelligence production cloud platform based on motion capture big data
CN107231531A (en) | Film and television production system combining network VR technology and live-action shooting
CN104103081A (en) | Virtual multi-camera target tracking video material generation method
CN109035415B (en) | Virtual model processing method, device, equipment and computer readable storage medium
CN113382275B (en) | Live broadcast data generation method and device, storage medium and electronic equipment
CN108833810A (en) | Method and device for generating subtitles in real time in three-dimensional idol live streaming
CN108961376A (en) | Method and system for real-time rendering of three-dimensional scenes in virtual idol live streaming
CN111179392A (en) | Virtual idol comprehensive live broadcast method and system based on 5G communication
CN107995481B (en) | Display method and device of mixed reality
CN108668050A (en) | Video shooting method and device based on virtual reality
CN108961368A (en) | Method and system for real-time live broadcasting of variety shows in a three-dimensional animation environment
Ganoni et al. | A framework for visually realistic multi-robot simulation in natural environment
US11443450B2 (en) | Analyzing screen coverage of a target object
KR20160136160A (en) | Virtual Reality Performance System and Performance Method
CN106530408A (en) | Museum temporary exhibition planning and design system
CN108833740B (en) | Real-time prompter method and device based on three-dimensional animation live broadcast

Legal Events

Code | Title / Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
TA01 | Transfer of patent application right

Effective date of registration: 2021-12-15
Address after: 430000 Room 408, floor 4, building B24, phase 2.7, financial background service center base construction project, No. 77, Guanggu Avenue, Donghu New Technology Development Zone, Wuhan, Hubei Province
Applicant after: Wuhan Jinshan Shiyou Technology Co.,Ltd.
Address before: 519000 building 3, Jinshan Software Park, 325 Qiandao Ring Road, Xiangzhou District, Zhuhai City, Guangdong Province
Applicant before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd.
Applicant before: ZHUHAI XISHANJU INTERACTIVE ENTERTAINMENT TECHNOLOGY Co.,Ltd.

GR01 | Patent grant
