CN109753151A - Motion capture method and system based on KINCET and facial camera - Google Patents

Motion capture method and system based on KINCET and facial camera

Info

Publication number
CN109753151A
Authority
CN
China
Prior art keywords
kincet
motion capture
catching
dynamic
performer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811553858.5A
Other languages
Chinese (zh)
Other versions
CN109753151B (en)
Inventor
贺子彬
芦振华
杜庆焜
蒋晓光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Xishan Yichuang Culture Co Ltd
Original Assignee
Wuhan Xishan Yichuang Culture Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Xishan Yichuang Culture Co Ltd
Priority to CN201811553858.5A (granted as CN109753151B)
Publication of CN109753151A
Application granted
Publication of CN109753151B
Legal status: Active
Anticipated expiration

Abstract

A motion capture method based on KINCET and a facial camera, comprising: setting the position of the facial camera based on the motion capture scene and calibrating the relative positions among multiple KINCET units; demarcating the motion capture range within the motion capture scene according to the position-calibrated facial camera and the multiple KINCET units; capturing the motion data and facial data of the motion capture actor through the KINCET units and the facial camera respectively; and matching the motion data and facial data to a three-dimensional character model. A corresponding motion capture system based on KINCET and a facial camera is also disclosed herein. The beneficial effect of the application is that, by using hardware devices such as KINCET and a facial camera in combination, it provides motion data and facial data of the accuracy required for general animation production, thereby reducing the cost and operating threshold of motion capture equipment.

Description

Motion capture method and system based on KINCET and facial camera
Technical field
The present invention relates to the field of computer graphics, and more particularly to a motion capture method and system based on KINCET and a facial camera.
Background art
The boom in three-dimensional animation and games in recent years has gradually brought a new term, "motion capture", before the public. Simply put, "motion capture" means recording the motion information of a motion capture actor with dedicated capture equipment (such as motion capture suits with marker points, scene depth cameras, sensors, and so on) as the raw data for producing animation. These raw data are then used to drive a virtual three-dimensional animated character model. Compared with directly drawing the motions of a three-dimensional animated character, this approach reduces the dependence on artists' work experience and lowers the labor cost of animation production on the one hand, and makes the produced animation smoother and more lifelike on the other.
However, since such motion capture requires a large amount of hardware, a complete motion capture system is very expensive, generally costing from several hundred thousand up to several million. Meanwhile, in order to capture the motions of the motion capture actor accurately, every hardware device in the system needs precise calibration, which raises the operating threshold of the whole system. Yet for general three-dimensional animation production the accuracy requirements are relatively low, so the huge up-front investment, high operating threshold, and high-precision motion data of existing motion capture systems are clearly unsuitable for small animation studios or individuals.
Summary of the invention
The purpose of the application is to address the deficiencies of the prior art by providing a motion capture method and system based on KINCET and a facial camera, which can reduce the price and difficulty of use of motion capture equipment while meeting the motion data accuracy requirements of general animation.
To achieve the above goals, the present invention adopts the following technical solution.
Firstly, the application proposes a motion capture method based on KINCET and a facial camera. The method is suitable for single-person or two-person capture and comprises the following steps:
S100) based on the motion capture scene, setting the position of the facial camera and calibrating the relative positions among the multiple KINCET units;
S200) demarcating the motion capture range within the motion capture scene according to the position-calibrated facial camera and the multiple KINCET units;
S300) capturing the motion data and facial data of the motion capture actor through the KINCET units and the facial camera respectively;
S400) matching the motion data and facial data to the three-dimensional character model.
Further, in the above method of the application, step S100 may also include the following sub-steps:
S101) based on the motion capture scene, setting up the facial camera and the multiple KINCET units;
S102) moving a luminous marker within the motion capture scene to calibrate the relative positions among the multiple KINCET units;
S103) determining the background position of the motion capture scene according to the calibrated KINCET units.
Further, in the above method of the application, step S200 may also include the following sub-steps:
S201) determining an initial motion capture range according to the calibrated relative positions among the multiple KINCET units;
S202) performing preset motion capture check actions on the boundary and vertices of the initial motion capture range;
S203) acquiring the motion data of the check actions and calculating the error of the acquired motion data;
S204) scaling the initial motion capture range according to the error of the acquired motion data to form the final motion capture range.
Further, in the above method of the application, the initial motion capture range is iteratively scaled according to the error of the acquired motion data to determine the motion capture range.
Still further, in the above method of the application, step S300 may also include the following sub-steps:
S301) locating the position of the motion capture actor within the motion capture scene;
S302) comparing the position of the motion capture actor with the demarcated motion capture range, and guiding the motion capture actor to the correct position according to the comparison result;
S303) after determining that the motion capture actor is in the correct position, starting to acquire the motion data and facial data of the motion capture actor.
Further, in the above method of the application, the current position of the motion capture actor is periodically detected, and a prompt is issued when the actor approaches the boundary of the motion capture range.
Still further, in the above method of the application, step S400 may also include the following sub-steps:
S401) binding a skeleton model and a facial model to the corresponding three-dimensional character model, and importing them into a graphics engine;
S402) associating the acquired motion data and facial data respectively with the corresponding three-dimensional character model in the graphics engine, so as to match the skeleton model and the facial model;
S403) producing the three-dimensional character animation based on the motion data and facial data.
Secondly, the application also proposes a motion capture system based on KINCET and a facial camera. The system is suitable for single-person or two-person capture and comprises the following modules: a setup module, for setting the position of the facial camera based on the motion capture scene and calibrating the relative positions among the multiple KINCET units; a demarcating module, for demarcating the motion capture range within the motion capture scene according to the position-calibrated facial camera and the multiple KINCET units; a capture module, for capturing the motion data and facial data of the motion capture actor through the KINCET units and the facial camera respectively; and a matching module, for matching the motion data and facial data to the three-dimensional character model.
Further, in the above system of the application, the setup module further includes the following submodules: an execution module, for setting up the facial camera and the multiple KINCET units based on the motion capture scene; a calibration module, for moving a luminous marker within the capture scene to calibrate the relative positions among the multiple KINCET units; and an initialization module, for determining the background position of the motion capture scene according to the calibrated KINCET units.
Further, in the above system of the application, the demarcating module may also include the following submodules: a determining module, for determining the initial motion capture range according to the calibrated relative positions among the multiple KINCET units; a presetting module, for performing preset motion capture check actions on the boundary and vertices of the initial motion capture range; a computing module, for acquiring the motion data of the check actions and calculating the error of the acquired motion data; and a zoom module, for scaling the initial motion capture range according to the error of the acquired motion data to form the final motion capture range.
Further, in the above system of the application, the initial motion capture range is iteratively scaled according to the error of the acquired motion data to determine the motion capture range.
Still further, in the above system of the application, the capture module may also include the following submodules: a locating module, for locating the position of the motion capture actor within the motion capture scene; a comparison module, for comparing the position of the motion capture actor with the demarcated motion capture range and guiding the actor to the correct position according to the comparison result; and a start module, for starting to acquire the motion data and facial data of the motion capture actor after determining that the actor is in the correct position.
Further, in the above system of the application, the current position of the motion capture actor is periodically detected, and a prompt is issued when the actor approaches the boundary of the motion capture range.
Still further, in the above system of the application, the matching module may also include the following submodules: a binding module, for binding a skeleton model and a facial model to the corresponding three-dimensional character model and importing them into a graphics engine; a relating module, for associating the acquired motion data and facial data respectively with the corresponding three-dimensional character model in the graphics engine, so as to match the skeleton model and the facial model; and a making module, for producing the three-dimensional character animation based on the motion data and facial data.
Finally, the application also discloses a computer-readable storage medium on which a computer program is stored, suitable for single-person or two-person capture. When executed by a processor, the computer program performs the following steps:
S100) based on the motion capture scene, setting the position of the facial camera and calibrating the relative positions among the multiple KINCET units;
S200) demarcating the motion capture range within the motion capture scene according to the position-calibrated facial camera and the multiple KINCET units;
S300) capturing the motion data and facial data of the motion capture actor through the KINCET units and the facial camera respectively;
S400) matching the motion data and facial data to the three-dimensional character model.
Further, when the processor executes the computer program, step S100 may also include the following sub-steps:
S101) based on the motion capture scene, setting up the facial camera and the multiple KINCET units;
S102) moving a luminous marker within the motion capture scene to calibrate the relative positions among the multiple KINCET units;
S103) determining the background position of the motion capture scene according to the calibrated KINCET units.
Further, when the processor executes the computer program, step S200 may also include the following sub-steps:
S201) determining an initial motion capture range according to the calibrated relative positions among the multiple KINCET units;
S202) performing preset motion capture check actions on the boundary and vertices of the initial motion capture range;
S203) acquiring the motion data of the check actions and calculating the error of the acquired motion data;
S204) scaling the initial motion capture range according to the error of the acquired motion data to form the final motion capture range.
Further, when the processor executes the computer program, the initial motion capture range is iteratively scaled according to the error of the acquired motion data to determine the motion capture range.
Still further, when the processor executes the computer program, step S300 may also include the following sub-steps:
S301) locating the position of the motion capture actor within the motion capture scene;
S302) comparing the position of the motion capture actor with the demarcated motion capture range, and guiding the motion capture actor to the correct position according to the comparison result;
S303) after determining that the motion capture actor is in the correct position, starting to acquire the motion data and facial data of the motion capture actor.
Further, when the processor executes the computer program, the current position of the motion capture actor is periodically detected, and a prompt is issued when the actor approaches the boundary of the motion capture range.
Still further, when the processor executes the computer program, step S400 may also include the following sub-steps:
S401) binding a skeleton model and a facial model to the corresponding three-dimensional character model, and importing them into a graphics engine;
S402) associating the acquired motion data and facial data respectively with the corresponding three-dimensional character model in the graphics engine, so as to match the skeleton model and the facial model;
S403) producing the three-dimensional character animation based on the motion data and facial data.
The beneficial effect of the application is that, by using hardware devices such as KINCET and a facial camera in combination, it provides motion data and facial data of the accuracy required for general animation production, thereby reducing the cost and operating threshold of motion capture equipment.
Brief description of the drawings
Fig. 1 shows the flowchart of the motion capture method based on KINCET and a facial camera disclosed in the present application;
Fig. 2 shows the sub-method flowchart of setting the position of the facial camera and calibrating the relative positions among the multiple KINCET units, in one embodiment of the application;
Fig. 3 shows the sub-method flowchart of demarcating the motion capture range within the motion capture scene, in one embodiment of the application;
Fig. 4 shows the sub-method flowchart of capturing the motion data and facial data of the motion capture actor, in one embodiment of the application;
Fig. 5 shows the sub-method flowchart of matching the motion data and facial data to the three-dimensional character model, in one embodiment of the application;
Fig. 6 shows the module structure diagram of the motion capture system based on KINCET and a facial camera disclosed in the present application.
Specific embodiments
The design, specific structure, and technical effects of the application are described clearly and completely below with reference to the embodiments and the accompanying drawings, so that its purpose, solutions, and effects can be fully understood. It should be noted that, in the absence of conflict, the embodiments of the application and the features in those embodiments may be combined with one another.
It should be noted that, unless otherwise specified, when a feature is said to be "fixed" or "connected" to another feature, it may be directly fixed or connected to that feature, or indirectly fixed or connected to it. In addition, descriptions such as up, down, left, and right used in this application refer only to the mutual positional relationships of the components of the application in the accompanying drawings. The singular forms "a", "an", "the", and "said" used in this application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
In addition, unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the art. The terms used in the description are intended only to describe specific embodiments, not to limit the application. The term "and/or" as used herein includes any combination of one or more of the relevant listed items.
It should be understood that although the terms first, second, third, and so on may be used in this application to describe various elements, these elements should not be limited by those terms, which serve only to distinguish elements of the same type from one another. For example, without departing from the scope of the application, a first element could be called a second element, and similarly a second element could be called a first element. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon".
Referring to the method flowchart shown in Fig. 1, in one or more embodiments of the application, the motion capture method based on KINCET and a facial camera comprises the following steps:
S100) based on the motion capture scene, setting the position of the facial camera and calibrating the relative positions among the multiple KINCET units;
S200) demarcating the motion capture range within the motion capture scene according to the position-calibrated facial camera and the multiple KINCET units;
S300) capturing the motion data and facial data of the motion capture actor through the KINCET units and the facial camera respectively;
S400) matching the motion data and facial data to the three-dimensional character model.
Specifically, those skilled in the art may set up the facial camera and the multiple KINCET units according to the extent of the motion capture scene. Since multiple KINCET units can simultaneously collect the motions of the same motion capture actor from different angles, the acquired motion data will be more accurate and can satisfy the requirements of general three-dimensional animation production. For example, considering the motion data accuracy required for general three-dimensional animation while expanding the activity range of the motion capture actor as much as possible, in one or more embodiments of the application, three KINCET units may be set up in the motion capture scene as motion capture equipment, with the units 4 to 6 meters apart from one another and each mounted 1.0 to 1.2 meters above the ground.
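As a concrete illustration of the example figures above, the following sketch computes one possible placement of three KINCET units on a circle around the scene center so that the pairwise spacing and mounting height fall within the stated ranges. The equilateral layout and all names are assumptions made for illustration; the description only gives the distance ranges.

```python
# Hypothetical placement helper: three units on the circumcircle of an
# equilateral triangle, so every pair is exactly `spacing` meters apart.
import math

def place_three_kinects(spacing=5.0, height=1.1):
    """Return three (x, y, z) positions with `spacing` in [4, 6] m
    between units and `height` in [1.0, 1.2] m above the ground."""
    radius = spacing / math.sqrt(3)  # circumradius of an equilateral triangle
    return [(radius * math.cos(a), radius * math.sin(a), height)
            for a in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]

for position in place_three_kinects():
    print(position)
```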
Specifically, referring to the sub-method flowchart shown in Fig. 2, in one or more embodiments of the application, in order to calibrate the relative positions among the KINCET units more accurately, step S100 further includes the following sub-steps:
S101) based on the motion capture scene, setting up the facial camera and the multiple KINCET units;
S102) moving a luminous marker within the motion capture scene to calibrate the relative positions among the multiple KINCET units;
S103) determining the background position of the motion capture scene according to the calibrated KINCET units.
The luminous marker may be a glow stick or a flashlight; however, in order to enable the KINCET units to track and capture its position more accurately, it should be a small, uniformly bright, and unobstructed point light source. When the luminous marker is moved during the above calibration process, it should be moved at a constant rate. Further, its movement path should cover, within the motion capture range, the upper and lower spaces reachable by the limbs of the motion capture actor. Based on the videos shot simultaneously from different angles, the mutual positions among the KINCET units can be determined with conventional techniques in the art, which this application does not limit. In fact, KINCET itself provides a function for mutual calibration among multiple units, and in one or more embodiments of the application this built-in calibration function may be used directly.
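The description leaves the pairwise calibration to conventional techniques; purely as an illustration, the sketch below recovers the rigid transform between two KINCET units from synchronized observations of the moving luminous marker using the Kabsch algorithm. This is a minimal sketch under assumed ideal data: all names are illustrative, and real depth streams would additionally need time synchronization, noise filtering, and outlier rejection.

```python
import numpy as np

def calibrate_pair(pts_a, pts_b):
    """pts_a, pts_b: (N, 3) marker positions seen simultaneously by
    unit A and unit B. Returns (R, t) with pts_a ~ R @ pts_b + t."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_b - cb).T @ (pts_a - ca)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ca - R @ cb
    return R, t

# Usage: a synthetic helical marker path moved at a constant rate,
# observed by a second unit displaced 5 m away and rotated 90 degrees.
s = np.linspace(0.0, 4.0 * np.pi, 200)
path_in_a = np.stack([np.cos(s), np.sin(s), 0.3 * s], axis=1)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([5.0, 0.0, 1.1])
path_in_b = (path_in_a - t_true) @ R_true    # same path in B's frame
R, t = calibrate_pair(path_in_a, path_in_b)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```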
Referring to the sub-method flowchart shown in Fig. 3, in one or more embodiments of the application, step S200 may determine the motion capture range as follows:
S201) determining an initial motion capture range according to the calibrated relative positions among the multiple KINCET units;
S202) performing preset motion capture check actions on the boundary and vertices of the initial motion capture range;
S203) acquiring the motion data of the check actions and calculating the error of the acquired motion data;
S204) scaling the initial motion capture range according to the error of the acquired motion data to form the final motion capture range. The preset check actions may be simple actions such as standing, waving, or raising an arm, and the motion capture actor may stand in turn on the boundary and vertices of the initial motion capture range to perform them. The error of the acquired motion data serves as the basis for scaling the initial motion capture range. For example, when the motion data obtained on a boundary has a smaller or larger error, that boundary of the initial motion capture range can be correspondingly stretched or contracted along its normal direction, and the stretching or contraction distance may be in a simple inverse relation to the magnitude of the error. Further, in order to obtain the optimal motion capture range, in one or more embodiments of the application, the above steps S202 to S204 may be performed multiple times iteratively. In this case, the final motion capture range is sufficiently large, and the motions made by the motion capture actor within it can be received by every KINCET unit within the specified error range.
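The boundary adjustment rule just described can be written down directly. The following is a minimal sketch assuming an axis-aligned rectangular capture range and a measure_error callback that stands in for having the actor perform a check action on a given boundary; the tolerance, gain, and all names are illustrative assumptions rather than values from the patent.

```python
from dataclasses import dataclass

@dataclass
class CaptureRange:
    x_min: float
    x_max: float
    y_min: float
    y_max: float

def refine_range(rng, measure_error, tol=0.01, gain=0.001, iterations=5):
    """Iterate S202-S204: per boundary, stretch outward when the check
    action's error is under tolerance, contract inward otherwise, with
    the step size in a simple inverse relation to the error."""
    for _ in range(iterations):
        for side in ("x_min", "x_max", "y_min", "y_max"):
            err = measure_error(rng, side)   # error of the check action
            step = gain / max(err, 1e-6)     # inverse relation to error
            outward = 1.0 if err < tol else -1.0
            sign = -1.0 if side.endswith("min") else 1.0
            setattr(rng, side, getattr(rng, side) + sign * outward * step)
    return rng

# Usage with a toy error model in which error grows with the boundary's
# distance from the scene center, so the range expands until the
# boundary errors reach the tolerance.
def toy_error(rng, side):
    half = (rng.x_max - rng.x_min) / 2 if side[0] == "x" else (rng.y_max - rng.y_min) / 2
    return 0.002 * half * half

print(refine_range(CaptureRange(-2.0, 2.0, -2.0, 2.0), toy_error))
```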
Referring to the sub-method flowchart shown in Fig. 4, in one or more embodiments of the application, step S300 may capture the motion data and facial data of the motion capture actor as follows:
S301) locating the position of the motion capture actor within the motion capture scene;
S302) comparing the position of the motion capture actor with the demarcated motion capture range, and guiding the motion capture actor to the correct position according to the comparison result;
S303) after determining that the motion capture actor is in the correct position, starting to acquire the motion data and facial data of the motion capture actor. Before the capture of motion data and facial data starts, the motion capture actor can be guided into the optimal motion capture range determined in the manner described above, so that the motion data and facial data are acquired with the smallest error. Meanwhile, in one or more embodiments of the application, for the convenience of the motion capture actor, the motion capture range determined in step S200 may be marked directly in the capture scene with removable markers. Further, in the above method of the application, the current position of the motion capture actor may be detected periodically, and a prompt may be issued when the actor approaches the boundary of the motion capture range, so that the errors of the motion data and facial data collected throughout the performance are kept within the preset range.
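One possible shape of this periodic check is sketched below; the polling interval, warning margin, and the get_position callback are illustrative assumptions (in practice the floor position would come from the skeleton stream of the KINCET units).

```python
import time

def monitor_actor(get_position, bounds, margin=0.3, period_s=0.5, checks=20):
    """Periodically poll the actor's (x, y) floor position and prompt
    when it comes within `margin` meters of the capture-range boundary.
    bounds = (x_min, x_max, y_min, y_max)."""
    x_min, x_max, y_min, y_max = bounds
    for _ in range(checks):
        x, y = get_position()
        if (x - x_min < margin or x_max - x < margin or
                y - y_min < margin or y_max - y < margin):
            print("Prompt: actor is approaching the capture-range boundary")
        time.sleep(period_s)

# Usage with a stand-in position source that drifts toward one edge.
drift = iter([(0.0, 0.0), (0.8, 0.0), (1.6, 0.0), (1.9, 0.0)] * 5)
monitor_actor(lambda: next(drift), bounds=(-2.0, 2.0, -2.0, 2.0), period_s=0.0)
```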
Referring to the sub-method flowchart shown in Fig. 5, in one or more embodiments of the application, step S400 may use the collected motion data and facial data to drive the animation of the three-dimensional character model through the following sub-steps:
S401) binding a skeleton model and a facial model to the corresponding three-dimensional character model, and importing them into a graphics engine;
S402) associating the acquired motion data and facial data respectively with the corresponding three-dimensional character model in the graphics engine, so as to match the skeleton model and the facial model;
S403) producing the three-dimensional character animation based on the motion data and facial data.
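To make S401 through S403 concrete, the sketch below matches per-frame captured data to a bound character, with body data as per-bone rotations and facial data as blendshape weights. The data layout and every name here are illustrative assumptions; the description prescribes neither a particular graphics engine nor a data format, so a plain dictionary stands in for the engine-side rig.

```python
def apply_performance(character, body_frames, face_frames):
    """Match captured streams to the bound rig, frame by frame (S402),
    keeping only bones and blendshapes that were bound in S401."""
    animation = []
    for body, face in zip(body_frames, face_frames):
        frame = {}
        for bone, rotation in body.items():
            if bone in character["skeleton"]:
                frame[bone] = rotation                     # quaternion (w, x, y, z)
        for shape, weight in face.items():
            if shape in character["blendshapes"]:
                frame[shape] = max(0.0, min(1.0, weight))  # clamp weight to [0, 1]
        animation.append(frame)
    return animation

# Usage with a tiny two-bone, one-blendshape character.
character = {"skeleton": {"spine", "head"}, "blendshapes": {"jaw_open"}}
body_frames = [{"spine": (1.0, 0.0, 0.0, 0.0), "head": (0.9, 0.1, 0.0, 0.0)}]
face_frames = [{"jaw_open": 1.4}]  # out-of-range weight gets clamped
print(apply_performance(character, body_frames, face_frames))
```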
Referring to the module structure diagram shown in Fig. 6, in one or more embodiments of the application, the motion capture system based on KINCET and a facial camera may include the following modules: a setup module, for setting the position of the facial camera based on the motion capture scene and calibrating the relative positions among the multiple KINCET units; a demarcating module, for demarcating the motion capture range within the motion capture scene according to the position-calibrated facial camera and the multiple KINCET units; a capture module, for capturing the motion data and facial data of the motion capture actor through the KINCET units and the facial camera respectively; and a matching module, for matching the motion data and facial data to the three-dimensional character model. Specifically, those skilled in the art may set up the facial camera and the multiple KINCET units according to the extent of the motion capture scene. Since multiple KINCET units can simultaneously collect the motions of the same motion capture actor from different angles, the acquired motion data will be more accurate and can satisfy the requirements of general three-dimensional animation production. For example, considering the motion data accuracy required for general three-dimensional animation while expanding the activity range of the motion capture actor as much as possible, in one or more embodiments of the application, three KINCET units may be set up in the motion capture scene as motion capture equipment, with the units 4 to 6 meters apart from one another and each mounted 1.0 to 1.2 meters above the ground.
Specifically, in one or more embodiments of the application, in order to calibrate the relative positions among the KINCET units more accurately, the setup module further includes the following submodules: an execution module, for setting up the facial camera and the multiple KINCET units based on the motion capture scene; a calibration module, for moving a luminous marker within the capture scene to calibrate the relative positions among the multiple KINCET units; and an initialization module, for determining the background position of the motion capture scene according to the calibrated KINCET units. The luminous marker may be a glow stick or a flashlight; however, in order to enable the KINCET units to track and capture its position more accurately, it should be a small, uniformly bright, and unobstructed point light source. When the luminous marker is moved during the above calibration process, it should be moved at a constant rate. Further, its movement path should cover, within the motion capture range, the upper and lower spaces reachable by the limbs of the motion capture actor. Based on the videos shot simultaneously from different angles, the mutual positions among the KINCET units can be determined with conventional techniques in the art, which this application does not limit. In fact, KINCET itself provides a function for mutual calibration among multiple units, and in one or more embodiments of the application this built-in calibration function may be used directly.
In one or more embodiments of the application, the demarcating module may also include the following submodules to determine the motion capture range: a determining module, for determining the initial motion capture range according to the calibrated relative positions among the multiple KINCET units; a presetting module, for performing preset motion capture check actions on the boundary and vertices of the initial motion capture range; a computing module, for acquiring the motion data of the check actions and calculating the error of the acquired motion data; and a zoom module, for scaling the initial motion capture range according to the error of the acquired motion data to form the final motion capture range. The preset check actions may be simple actions such as standing, waving, or raising an arm, and the motion capture actor may stand in turn on the boundary and vertices of the initial motion capture range to perform them. The error of the acquired motion data serves as the basis for scaling the initial motion capture range. For example, when the motion data obtained on a boundary has a smaller or larger error, that boundary of the initial motion capture range can be correspondingly stretched or contracted along its normal direction, and the stretching or contraction distance may be in a simple inverse relation to the magnitude of the error. Further, in order to obtain the optimal motion capture range, in one or more embodiments of the application, the above steps S202 to S204 may be performed multiple times iteratively. In this case, the final motion capture range is sufficiently large, and the motions made by the motion capture actor within it can be received by every KINCET unit within the specified error range.
In one or more embodiments of the application, the capture module may also include the following submodules to capture the motion data and facial data of the motion capture actor: a locating module, for locating the position of the motion capture actor within the motion capture scene; a comparison module, for comparing the position of the motion capture actor with the demarcated motion capture range and guiding the actor to the correct position according to the comparison result; and a start module, for starting to acquire the motion data and facial data after determining that the actor is in the correct position. Before the capture of motion data and facial data starts, the motion capture actor can be guided into the optimal motion capture range determined in the manner described above, so that the motion data and facial data are acquired with the smallest error. Meanwhile, in one or more embodiments of the application, for the convenience of the motion capture actor, the motion capture range determined in step S200 may be marked directly in the capture scene with removable markers. Further, in the above system of the application, the current position of the motion capture actor may be detected periodically, and a prompt may be issued when the actor approaches the boundary of the motion capture range, so that the errors of the motion data and facial data collected throughout the performance are kept within the preset range.
In one or more embodiments of the application, the matching module may also include the following submodules to drive the animation of the three-dimensional character model with the collected motion data and facial data: a binding module, for binding a skeleton model and a facial model to the corresponding three-dimensional character model and importing them into a graphics engine; a relating module, for associating the acquired motion data and facial data respectively with the corresponding three-dimensional character model in the graphics engine, so as to match the skeleton model and the facial model; and a making module, for producing the three-dimensional character animation based on the motion data and facial data.
It should be appreciated that the embodiments of the application may be effected or carried out by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer-readable memory. The method may be implemented using standard programming techniques, including in a computer program configured with a non-transitory computer-readable storage medium, where the storage medium so configured causes the computer to operate in a specific and predefined manner, according to the methods and drawings described in the particular embodiments. Each program may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system; however, if desired, the program may be implemented in assembly or machine language. In any case, the language may be compiled or interpreted. In addition, the program may run on an application-specific integrated circuit programmed for this purpose.
Further, the method may be implemented on any type of suitable computing platform to which it is operably coupled, including but not limited to a personal computer, minicomputer, mainframe, workstation, networked or distributed computing environment, or a standalone or integrated computer platform, or in communication with a charged-particle tool or other imaging device, and so on. Aspects of the application may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into the computing platform, such as a hard disk, an optically readable and/or writable storage medium, RAM, or ROM, such that it can be read by a programmable computer and, when read by the computer, can be used to configure and operate the computer to perform the processes described herein. In addition, the machine-readable code, or parts of it, may be transmitted over a wired or wireless network. When such media contain instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps above, the application described herein includes these and other different types of non-transitory computer-readable storage media. The application also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to the input data to perform the functions described herein, thereby transforming the input data to generate output data stored in non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the application, the transformed data represent physical and tangible objects, including particular visual depictions of physical and tangible objects produced on the display.
Other modifications are within the spirit of this application. Thus, although the disclosed technology is susceptible to various modifications and alternative constructions, certain illustrated embodiments are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the application to the specific form or forms disclosed; on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within its spirit and scope as defined in the appended claims.

Claims (9)

CN201811553858.5A (filed 2018-12-19, priority 2018-12-19): Motion capture method and system based on KINCET and facial camera. Status: Active. Granted as CN109753151B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811553858.5A | 2018-12-19 | 2018-12-19 | Motion capture method and system based on KINCET and facial camera

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201811553858.5A | 2018-12-19 | 2018-12-19 | Motion capture method and system based on KINCET and facial camera

Publications (2)

Publication Number | Publication Date
CN109753151A | 2019-05-14
CN109753151B (en) | 2022-05-24

Family

Family ID: 66402864

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201811553858.5A | Motion capture method and system based on KINCET and facial camera (Active; granted as CN109753151B) | 2018-12-19 | 2018-12-19

Country Status (1)

Country | Link
CN (1) | CN109753151B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106023316A (en)* | 2016-05-19 | 2016-10-12 | 华南理工大学 | Kinect-based dynamic sequence capture method
KR20180095407A (en)* | 2017-02-17 | 2018-08-27 | 동서대학교 산학협력단 | 3D image acquisition and delivery method of user viewpoint correspondence remote point
CN107248195A (en)* | 2017-05-31 | 2017-10-13 | 珠海金山网络游戏科技有限公司 | Augmented reality live-streaming method, device, and system
CN107274466A (en)* | 2017-05-31 | 2017-10-20 | 珠海金山网络游戏科技有限公司 | Method, device, and system for real-time two-person motion capture
CN108495057A (en)* | 2018-02-13 | 2018-09-04 | 深圳市瑞立视多媒体科技有限公司 | Camera configuration method and apparatus
CN108564643A (en)* | 2018-03-16 | 2018-09-21 | 中国科学院自动化研究所 | Performance capture system based on the UE engine
CN108572731A (en)* | 2018-03-16 | 2018-09-25 | 中国科学院自动化研究所 | Motion capture data presentation method and device based on multiple Kinects and UE4
CN108986189A (en)* | 2018-06-21 | 2018-12-11 | 珠海金山网络游戏科技有限公司 | Method and system for real-time multi-person motion capture in three-dimensional animation and live streaming

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110442153A (en)* | 2019-07-10 | 2019-11-12 | 佛山科学技术学院 | Camera correction control method and system for a passive optical motion capture system
CN110442153B (en)* | 2019-07-10 | 2022-03-25 | 佛山科学技术学院 | Camera correction control method and system for passive optical dynamic capturing system

Also Published As

Publication number | Publication date
CN109753151B (en) | 2022-05-24

Similar Documents

Publication | Title
US12056885B2 (en) | Method for automatically generating hand marking data and calculating bone length
CN103267491B (en) | Method and system for automatically acquiring complete three-dimensional data of an object surface
CN108986164A (en) | Image-based position detection method, device, equipment, and storage medium
CN102647449B (en) | Intelligent photographing method and device based on cloud service, and mobile terminal
CN104427252B (en) | Method for synthesizing images and electronic device thereof
CN110826549A (en) | Inspection robot instrument image identification method and system based on computer vision
US11386578B2 (en) | Image labeling system of a hand in an image
CN104173054A (en) | Method and device for measuring human body height based on binocular vision technology
CN114202613B (en) | House type determination method, device and system, electronic device, and storage medium
CN211928594U (en) | Self-service temperature measurement system and device thereof
CN111476827A (en) | Target tracking method, system, electronic device, and storage medium
CN102679961B (en) | Portable four-camera three-dimensional photographic measurement system and method
CN103765870A (en) | Image processing apparatus, projector and projector system including image processing apparatus, and image processing method
CN111951326A (en) | Target object skeleton key point positioning method and device based on multiple cameras
CN207216293U (en) | Watch daily-rate test system based on image recognition technology
CN110503144A (en) | Pointer instrument recognition method for inspection robots
CN104279960A (en) | Method for measuring object size with a mobile device
CN112070021A (en) | Distance measurement method, system, equipment, and storage medium based on face detection
CN106595523A (en) | Portable three-dimensional topography measurement system based on a smartphone
CN108334697B (en) | Simulation experiment method for evaluating three-dimensional reconstruction software
CN109448018A (en) | Target tracking and positioning method, device, equipment, and storage medium
CN109099889A (en) | Close-range photogrammetry system and method
CN114004891A (en) | Distribution network line inspection method based on target tracking and related device
CN109753151A (en) | Motion capture method and system based on KINCET and facial camera
JP2004086929A (en) | Image collation device

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
