CN105635669A - Movement contrast system based on three-dimensional motion capture data and actually photographed videos and method thereof - Google Patents

Movement contrast system based on three-dimensional motion capture data and actually photographed videos and method thereof

Info

Publication number
CN105635669A
Authority
CN
China
Prior art keywords
action
chapters
sections
scene
demonstration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510994098.1A
Other languages
Chinese (zh)
Other versions
CN105635669B (en)
Inventor
蔡震宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dison Digital Entertainment Technology Co Ltd
Original Assignee
Beijing Dison Digital Entertainment Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dison Digital Entertainment Technology Co Ltd
Priority to CN201510994098.1A
Publication of CN105635669A
Application granted
Publication of CN105635669B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention provides a movement comparison system based on three-dimensional motion capture data and actually photographed (live-shot) videos, and a method thereof. The comparison system comprises a basic section, a comparison section and a comparison unit. The basic section is a virtual reference file formed by a reference performer of the standard movements completing the whole set of movements in a specific scene. The comparison section is a demonstration file formed by a performer of the standard movements completing the whole set of movements at a site corresponding to that specific scene. The comparison unit displays the basic section and the comparison section overlapped on one another so that the accuracy of the movements can be evaluated. By introducing the basic section as the reference standard and attaching the matched virtual section onto the live frames, the system requires no on-site guidance from a third party: the real performance and the standard movements can be compared visually and accurately at the monitor side, and the process of improving the movements is optimized.

Description

Action comparison system and method based on three-dimensional motion capture data and live-shot video
Technical field
The present invention relates to an action comparison system and method based on three-dimensional motion capture data and live-shot video.
Background art
In action-standard training or courses with strict requirements on how standard the movements must be, such as dance, wushu and gymnastics, traditional guidance has the performer follow a reference video and do the movements on site, to be evaluated and suitably corrected by an instructor, or the performer practises alone in front of a mirror. Under such modes, where the acceptability of a movement is judged by the performer or by other people, a set of movements has to be repeated over and over before it reaches the required standard; because there are no objective, visually comparable data, the results vary from person to person, the yardstick for what counts as non-standard (or which of several competing standards applies) cannot be held accurately, and the performer's adjustment and the evaluation system need further optimization.
Summary of the invention
In view of this, the present invention provides an action comparison system and method based on three-dimensional motion capture data and live-shot video, intended to introduce a benchmark against which the learner's live movements, once captured, can be accurately evaluated for their degree of standardization.
The technical solution adopted by the present invention is as follows:
An action comparison system based on three-dimensional motion capture data and live-shot video comprises a benchmark section, a comparison section and a comparison unit, wherein:
The benchmark section is a virtual reference file formed by a reference performer of the standard movements completing the full set of movements in a given scene;
The comparison section is a demonstration file formed by a performer of the standard movements completing the full set of movements at a site corresponding to the given scene;
The comparison unit displays the benchmark section overlapped with the comparison section, so that the accuracy of the movements can be evaluated.
In the above action comparison system based on three-dimensional motion capture data and live-shot video, the benchmark section comprises a forming unit for a basic parameter file and a restoration-display unit for the basic parameter file, wherein:
The forming unit of the basic parameter file works as follows:
Key footwork positions are marked at the site where the reference performer forms the benchmark file; while the reference performer completes the full set of movements, an optical motion capture system records the reference performer's movements and the key footwork positions according to the configured virtual camera positions, and the captured data are stored as the basic parameter file;
The restoration-display unit of the basic parameter file works as follows:
The movement and key-footwork data in the basic parameter file are restored in a three-dimensional virtual environment as the movements of a character model, so that the movement details seen from each corresponding virtual camera position can be recalled and inspected.
In the above action comparison system based on three-dimensional motion capture data and live-shot video, the comparison section comprises a demonstration-site scene generation unit and a motion-data matching unit that matches the basic parameter file with the on-site demonstration, wherein:
The demonstration-site scene generation unit erects, at the demonstration site, shooting devices corresponding to the virtual camera positions and marks the corresponding key footwork positions at the site, so as to reproduce the scene of the basic parameter file;
The motion-data matching unit recalls the benchmark section, starts the demonstration movements at the site synchronously with the virtual movements of the benchmark section, and transmits the data synchronously to the monitoring terminals corresponding to the shooting devices.
In the above action comparison system based on three-dimensional motion capture data and live-shot video, the comparison section attaches the motion data matched at each virtual camera position onto the performer's picture shot by the corresponding shooting device, so that the movements are compared at the monitoring terminal as an overlapped display within the same frame.
In the above action comparison system based on three-dimensional motion capture data and live-shot video, if additional shooting devices need to be added at the demonstration site, corresponding virtual camera positions must be introduced into the three-dimensional virtual environment of the benchmark section, so that multi-directional comparison can be achieved by adding monitoring terminals corresponding to the added shooting devices.
An action comparison method based on three-dimensional motion capture data and live-shot video comprises a benchmark-section forming step, a comparison-section forming step and a comparison-monitoring step, wherein:
In the benchmark-section forming step, a reference performer of the standard movements completes the full set of movements in a given scene, so as to form a virtual reference file;
In the comparison-section forming step, a performer of the standard movements completes the full set of movements at a site corresponding to the given scene, so as to form a demonstration file;
In the comparison-monitoring step, the benchmark section is displayed overlapped with the comparison section, so that the accuracy of the movements can be evaluated.
In the above action comparison method based on three-dimensional motion capture data and live-shot video, the benchmark-section forming step comprises a basic-parameter-file forming step and a basic-parameter-file restoration-display step, wherein:
The basic-parameter-file forming step is as follows:
Key footwork positions are marked at the site where the reference performer forms the benchmark file; while the reference performer completes the full set of movements, an optical motion capture system records the reference performer's movements and the key footwork positions according to the configured virtual camera positions, and the captured data are stored as the basic parameter file;
The basic-parameter-file restoration-display step is as follows:
The movement and key-footwork data in the basic parameter file are restored in a three-dimensional virtual environment as the movements of a character model, so that the movement details seen from each corresponding virtual camera position can be recalled and inspected.
In the above action comparison method based on three-dimensional motion capture data and live-shot video, the comparison-section forming step comprises a demonstration-site scene generation step and a motion-data matching step that matches the basic parameter file with the on-site demonstration, wherein:
In the demonstration-site scene generation step, shooting devices corresponding to the virtual camera positions are erected at the demonstration site and the corresponding key footwork positions are marked at the site, so as to reproduce the scene of the basic parameter file;
In the motion-data matching step, the benchmark section is recalled, the demonstration movements at the site and the virtual movements of the benchmark section are started synchronously, and the data are transmitted synchronously to the monitoring terminals corresponding to the shooting devices.
In the above action comparison method based on three-dimensional motion capture data and live-shot video, the comparison section attaches the motion data matched at each virtual camera position onto the performer's picture shot by the corresponding shooting device, so that the movements are compared at the monitoring terminal as an overlapped display within the same frame.
In the above action comparison method based on three-dimensional motion capture data and live-shot video, if additional shooting devices need to be added at the demonstration site, corresponding virtual camera positions must be introduced into the three-dimensional virtual environment of the benchmark section, so that multi-directional comparison can be achieved by adding monitoring terminals corresponding to the added shooting devices.
The beneficial effects produced by the present invention are as follows:
The action comparison system of the present invention, based on three-dimensional motion capture data and live-shot video, is built around a virtual benchmark section. By combining three-dimensional motion capture data with live shooting from multiple camera positions and attaching the matched virtual section onto the live view, the degree to which the performer's movements conform to the standard can be observed intuitively; the standard is unified and no longer depends on how well the performer or a third person happens to judge, which is without doubt an efficient solution for both learners and instructors of movement.
In addition, the comparison system of the present invention can reproduce the virtual scene at any location simply by erecting cameras, and ordinary cameras and monitors are sufficient to implement its functions, so the initial investment cost is low and operation is simple.
Brief description of the drawings
The present invention can be understood more completely when considered in conjunction with the accompanying drawings. The accompanying drawings described herein are provided to give a further understanding of the present invention; the embodiments and their explanations are used to explain the present invention and do not constitute an improper limitation of it.
Fig. 1 is a schematic flow chart of the implementation of an action comparison system based on three-dimensional motion capture data and live-shot video according to the present invention;
Fig. 2 is a schematic diagram of the demonstration site of an action comparison system based on three-dimensional motion capture data and live-shot video according to the present invention;
Fig. 3 is a logic diagram of an action comparison system based on three-dimensional motion capture data and live-shot video according to the present invention.
Embodiment
The technical scheme of the present invention is described in further detail below with reference to the drawings and embodiments.
As shown in Fig. 1, an action comparison system based on three-dimensional motion capture data and live-shot video mainly comprises a benchmark section, a comparison section and a comparison unit. The comparison unit overlaps the benchmark section, which serves as the virtual reference, with the comparison section formed by the on-site demonstration within the same frame so that the two can be contrasted intuitively. This is particularly suitable when no author of the standard movements is available to give guidance on site: through the comparison, the performer can accurately and intuitively evaluate and adjust the accuracy of his or her own movements. As further shown in Fig. 3, the comparison method implemented by the above action comparison system mainly comprises the following steps:
S10: forming step of the benchmark section serving as the virtual reference:
First, the basic parameter file is formed, specifically:
A reference performer capable of executing the movements to a high standard is selected, and the key footwork positions are marked at the site where the benchmark file is formed. While the reference performer completes the full set of movements, an optical motion capture system (such as Vicon or OptiTrack) accurately captures the movements, the key footwork positions and the configured virtual camera positions, and these data are stored as the basic parameter file of the comparison system; wherein:
The captured motion data are standard FBX data. FBX, produced by Autodesk, stores all information of the model in a scene-graph structure and is a cross-platform three-dimensional interchange format, which makes the three-dimensional restoration in the next step possible;
The key footwork positions correspond to the configured virtual camera data. The reason is that the comparison system needs to obtain the virtual camera parameters, comprising position, orientation and FOV (field of view), and to calculate, on the basis of the key footwork positions, the bone position information corresponding to each virtual camera position, in order to ensure that the movements are reproduced accurately when displayed in the three-dimensional virtual environment;
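To make the relationship between the virtual camera parameters (position, orientation, FOV) and the per-camera bone positions concrete, the following is a minimal sketch assuming a simple pinhole camera model and hypothetical joint coordinates; it is an illustration only, not the calculation actually used by the system described here.

```python
import numpy as np

def look_at(position, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a world-to-camera (view) matrix from a camera position and a look-at target."""
    forward = target - position
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    rot = np.stack([right, true_up, -forward])      # rows: camera axes expressed in world space
    view = np.eye(4)
    view[:3, :3] = rot
    view[:3, 3] = -rot @ position
    return view

def project_joints(joints_world, cam_pos, cam_target, fov_deg, width, height):
    """Project Nx3 world-space joint positions into pixel coordinates of one virtual camera."""
    view = look_at(np.asarray(cam_pos, float), np.asarray(cam_target, float))
    focal = 0.5 * height / np.tan(np.radians(fov_deg) / 2.0)          # vertical FOV -> focal length in pixels
    homo = np.c_[joints_world, np.ones(len(joints_world))] @ view.T   # transform to camera space
    x, y, z = homo[:, 0], homo[:, 1], homo[:, 2]
    u = width / 2.0 + focal * x / (-z)    # camera looks down -z in camera space
    v = height / 2.0 - focal * y / (-z)
    return np.stack([u, v], axis=1)

# Hypothetical usage: project two joints for "virtual camera position 1"
joints = np.array([[0.0, 1.6, 0.0], [0.2, 1.1, 0.05]])
print(project_joints(joints, cam_pos=[0, 1.5, 4], cam_target=[0, 1.0, 0],
                     fov_deg=45, width=1920, height=1080))
```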
Next, the basic parameter file is restored and displayed, specifically:
The movement and key-footwork data in the basic parameter file are restored in the three-dimensional virtual environment as the movements of a character model. On the one hand, the movement details at each corresponding virtual camera position can thereby be recalled; on the other hand, the movement details can also be inspected from any angle during playback, and new virtual camera positions can be introduced at any angle as required so that corresponding real cameras can be added at the site.
The restoration can be performed in any existing three-dimensional software that supports standard FBX data, such as 3ds Max or Maya, which can display all of the above virtual camera positions together with the model's movement information; a purpose-built three-dimensional application that supports standard FBX data may also be used. When a real shooting camera position needs to be added at the site in S20, a corresponding virtual camera position is added in the three-dimensional virtual environment as well; from the existing standard FBX data, the bone position information corresponding to the added virtual camera position is obtained by calculation, the current virtual camera information is updated, and these data are saved into the benchmark section as part of the basic parameter file, so that when a real camera position is added, the comparison between virtual camera data and real camera data can still be carried out at the monitor end.
The benchmark section, comprising the basic parameter file and its restoration display, can then be called by the demonstration site.
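For illustration only, one plausible in-memory layout for such a basic parameter file is sketched below; the field names and structure are assumptions for the sake of the example, not the format actually used by the benchmark section.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class VirtualCamera:
    """One virtual camera position: location, look-at target and field of view."""
    name: str
    position: Vec3
    look_at: Vec3
    fov_deg: float

@dataclass
class BenchmarkSection:
    """Basic parameter file: captured skeleton animation, footwork marks and virtual cameras."""
    fps: float
    frames: List[Dict[str, Vec3]] = field(default_factory=list)   # frames[i][joint] -> world position
    footwork_marks: List[Vec3] = field(default_factory=list)      # key footwork positions on the floor
    cameras: List[VirtualCamera] = field(default_factory=list)

    def add_camera(self, cam: VirtualCamera) -> None:
        """Adding a real camera on site (S20) implies adding its virtual counterpart here."""
        self.cameras.append(cam)

# Hypothetical usage: one virtual camera and two captured frames
section = BenchmarkSection(fps=60.0)
section.add_camera(VirtualCamera("cam1", (0.0, 1.5, 4.0), (0.0, 1.0, 0.0), 45.0))
section.frames.append({"hips": (0.0, 1.0, 0.0), "head": (0.0, 1.7, 0.0)})
section.frames.append({"hips": (0.0, 1.0, 0.1), "head": (0.0, 1.7, 0.1)})
```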
S20: forming step of the comparison section at the demonstration site:
First, the scene of the shooting site is generated, specifically:
Real cameras corresponding one-to-one to the virtual camera data are erected at the demonstration site, and the corresponding key footwork positions are marked on the site, so as to reproduce the scene of the basic parameter file;
Next, the benchmark motion data are attached to the on-site demonstration, specifically:
The benchmark section is recalled, and the demonstration movements at the site and the virtual movements of the benchmark section are started synchronously. The system attaches the motion data at each virtual camera position onto the performer's picture shot by the corresponding real camera, either in skeleton form or in character-model form, and displays the result on the same monitor; wherein:
1) The skeleton mode works as follows:
Each camera position has its corresponding key footwork positions together with the bone position information corresponding to those positions, so the recorded bone positions can be connected by lines; the skeleton may be drawn as simple lines or in a specially designed style, and the drawn pattern is played back frame by frame. The system takes the image obtained by the real camera position as the base layer and draws, on top of it, the skeleton lines of the virtual camera position corresponding to that real camera position; after this image composition, the two combined sets of image data can be shown on the monitor corresponding to the real camera position for intuitive comparison;
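A minimal sketch of this kind of skeleton overlay, assuming an OpenCV pipeline and hypothetical joint names with 2D positions already projected for the matching virtual camera (for example by the projection sketch above), might look as follows; it is not the drawing routine of the system itself.

```python
import cv2
import numpy as np

# Hypothetical skeleton connectivity: pairs of joint names to join with lines
BONES = [("hips", "spine"), ("spine", "head"), ("hips", "l_knee"), ("l_knee", "l_foot")]

def draw_skeleton_overlay(frame_bgr, joints_2d, color=(0, 255, 0), thickness=3):
    """Draw benchmark skeleton lines on top of a live camera frame (the base layer)."""
    out = frame_bgr.copy()
    for a, b in BONES:
        if a in joints_2d and b in joints_2d:
            pa = tuple(int(round(c)) for c in joints_2d[a])
            pb = tuple(int(round(c)) for c in joints_2d[b])
            cv2.line(out, pa, pb, color, thickness, lineType=cv2.LINE_AA)
    return out

# Hypothetical usage: joints_2d would come from projecting the benchmark pose
# into the virtual camera matching this real camera position.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)            # placeholder live frame
joints_2d = {"hips": (960, 620), "spine": (960, 500), "head": (960, 380),
             "l_knee": (920, 800), "l_foot": (915, 980)}
composited = draw_skeleton_overlay(frame, joints_2d)
```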
2) The character-model mode works as follows:
The character model seen from the virtual camera position is matted out in real time by keying equipment. The system takes the picture shot by the real camera position as the base layer and displays the matted character-model picture of the virtual camera position on top of it as a translucent overlay; to make matting the character model easier, the three-dimensional scene can be given appropriate special treatment (for example, rendering it against a uniform background);
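The translucent overlay itself amounts to ordinary alpha blending; the following sketch assumes the matted character-model render comes with an alpha mask and uses placeholder images, so it is an illustration rather than the keying equipment's actual processing.

```python
import numpy as np

def blend_translucent(base_bgr, overlay_bgr, matte, opacity=0.5):
    """Composite the matted character-model render over the live frame.

    base_bgr, overlay_bgr: HxWx3 uint8 images; matte: HxW float in [0, 1],
    1 where the character model is present and 0 elsewhere."""
    alpha = (matte * opacity)[..., None]                        # per-pixel blending weight
    out = base_bgr.astype(np.float32) * (1.0 - alpha) + overlay_bgr.astype(np.float32) * alpha
    return out.astype(np.uint8)

# Hypothetical usage with placeholder images
base = np.zeros((1080, 1920, 3), dtype=np.uint8)               # live camera frame
render = np.full((1080, 1920, 3), 200, dtype=np.uint8)         # character-model render
matte = np.zeros((1080, 1920), dtype=np.float32)
matte[300:900, 800:1100] = 1.0                                 # region covered by the model
composited = blend_translucent(base, render, matte, opacity=0.5)
```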
The comparison section, comprising the live-shot movements with the benchmark section attached to them, can then be called by the comparison unit.
S30: step of comparing and evaluating the movements:
The picture shot by each real camera is displayed on the corresponding comparison monitor, so that at the comparison-monitor end it can be seen whether the real movements at that camera position are consistent with the standard movements of the benchmark section. Since the shooting angle of each real camera is different, it can be judged clearly in this way which link or links of the full set of movements are not handled properly, and these can be recorded for later comparison. Through this multi-angle, all-round comparison of movements, the performer can accurately and intuitively evaluate and adjust his or her own movements without resorting to a mirror or to evaluation by a third person.
The demonstration site is shown in Fig. 2: the cameras erected on site (Nos. 1, 2 and 3) obtain the live pictures, and the benchmark section is attached and shown on the comparison monitors (Nos. 1, 2 and 3), so that the accuracy of the performer's movements can be seen intuitively in a same-frame, overlapped comparison. Depending on the actual situation, the number of comparison monitors and real camera positions (cameras) can be increased as required; in that case, in order to ensure the comparison at the comparison-monitor end, corresponding virtual camera positions must be added in the three-dimensional virtual environment of the benchmark section.
Moreover, the benchmark section can be viewed from any angle to inspect the three-dimensional movements and reproduce the movement details.
The live pictures obtained by the cameras at the real camera positions are transferred to the comparison server through video capture cards and fused with the benchmark section, whose three-dimensional movements are transmitted at the same time; through a multi-output graphics card, the live picture of each real camera position is then displayed together with the virtual camera data of the corresponding benchmark section on the corresponding comparison monitor, so that the differences between the performer's live movements and the reference performer's standard movements can be seen intuitively.
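Putting the pieces together, a per-camera monitoring loop on the comparison server could be sketched roughly as below, using OpenCV capture and display; the device indices, window names and the get_benchmark_overlay helper are hypothetical, and the real system works with dedicated capture cards and a multi-output graphics card rather than this simplified loop.

```python
import cv2

# Hypothetical mapping: capture-device index -> monitor window for each real camera position
CAMERA_TO_MONITOR = {0: "comparison monitor 1", 1: "comparison monitor 2", 2: "comparison monitor 3"}

def run_comparison(get_benchmark_overlay):
    """Read each capture device, fuse the matching benchmark overlay and show it on its monitor.

    get_benchmark_overlay(cam_index, frame) is assumed to return the composited frame,
    e.g. via draw_skeleton_overlay or blend_translucent from the sketches above."""
    captures = {i: cv2.VideoCapture(i) for i in CAMERA_TO_MONITOR}
    try:
        while True:
            for cam_index, cap in captures.items():
                ok, frame = cap.read()
                if not ok:
                    continue
                fused = get_benchmark_overlay(cam_index, frame)
                cv2.imshow(CAMERA_TO_MONITOR[cam_index], fused)
            if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to stop monitoring
                break
    finally:
        for cap in captures.values():
            cap.release()
        cv2.destroyAllWindows()
```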
Of course, on the basis of the comparison system of the present invention, further references such as an expert system can also be introduced, so that the accuracy of the movements is evaluated according to a predefined scheme and feedback is given, and so on.
The embodiments of the present invention have been explained above with reference to the accompanying drawings, which are provided to give a further understanding of the present invention. Obviously, the foregoing is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited to it; any change or replacement that a person skilled in the art can readily conceive without substantially departing from the present invention is also included within the scope of protection of the present invention.

Claims (10)


Priority Applications (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN201510994098.1A | CN105635669B (en) | 2015-12-25 | 2015-12-25 | The movement comparison system and method for data and real scene shooting video are captured based on three-dimensional motion


Publications (2)

Publication Number | Publication Date
CN105635669A (en) | 2016-06-01
CN105635669B (en) | 2019-03-01

Family

ID=56050105

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN201510994098.1A | Active | CN105635669B (en) | 2015-12-25 | 2015-12-25 | The movement comparison system and method for data and real scene shooting video are captured based on three-dimensional motion

Country Status (1)

Country | Link
CN (1) | CN105635669B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2006112308A1 (en)* | 2005-04-15 | 2006-10-26 | The University Of Tokyo | Motion capture system and method for three-dimensional reconfiguring of characteristic point in motion capture system
CN102243687A (en)* | 2011-04-22 | 2011-11-16 | 安徽寰智信息科技股份有限公司 | Physical education teaching auxiliary system based on motion identification technology and implementation method of physical education teaching auxiliary system
WO2013022214A2 (en)* | 2011-08-05 | 2013-02-14 | (주)앱스원 | Apparatus and method for analyzing exercise motion
CN102500094A (en)* | 2011-10-28 | 2012-06-20 | 北京航空航天大学 | Kinect-based action training method
CN104866108A (en)* | 2015-06-05 | 2015-08-26 | 中国科学院自动化研究所 | Multifunctional dance experience system

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106228143A (en)* | 2016-08-02 | 2016-12-14 | 王国兴 | A kind of method that instructional video is marked with camera video motion contrast
CN107122752A (en)* | 2017-05-05 | 2017-09-01 | 北京工业大学 | A kind of human action comparison method and device
CN107243141A (en)* | 2017-05-05 | 2017-10-13 | 北京工业大学 | A kind of action auxiliary training system based on motion identification
CN109005380A (en)* | 2017-06-06 | 2018-12-14 | 松下电器(美国)知识产权公司 | Dynamic image list generation method, program and server unit
CN107349594A (en)* | 2017-08-31 | 2017-11-17 | 华中师范大学 | A kind of action evaluation method of virtual Dance System
CN107349594B (en)* | 2017-08-31 | 2019-03-19 | 华中师范大学 | A kind of action evaluation method of virtual Dance System
CN109325466B (en)* | 2018-10-17 | 2022-05-03 | 兰州交通大学 | Intelligent motion guidance system and method based on motion recognition technology
CN109325466A (en)* | 2018-10-17 | 2019-02-12 | 兰州交通大学 | An intelligent motion guidance system and method based on motion recognition technology
CN110302524A (en)* | 2019-05-22 | 2019-10-08 | 北京百度网讯科技有限公司 | Body training method, device, equipment and storage medium
CN110336957A (en)* | 2019-06-10 | 2019-10-15 | 北京字节跳动网络技术有限公司 | A kind of video creating method, device, medium and electronic equipment
CN110336957B (en)* | 2019-06-10 | 2022-05-03 | 北京字节跳动网络技术有限公司 | Video production method, device, medium and electronic equipment
CN110719455A (en)* | 2019-09-29 | 2020-01-21 | 深圳市火乐科技发展有限公司 | Video projection method and related device
CN111083524A (en)* | 2019-12-17 | 2020-04-28 | 北京理工大学 | A crowd performance evaluation system
CN111899577A (en)* | 2020-07-13 | 2020-11-06 | 杭州赛鲁班网络科技有限公司 | Exercise training system and method based on bimacular teaching
CN112560605A (en)* | 2020-12-02 | 2021-03-26 | 北京字节跳动网络技术有限公司 | Interaction method, device, terminal, server and storage medium
CN112560605B (en)* | 2020-12-02 | 2023-04-18 | 北京字节跳动网络技术有限公司 | Interaction method, device, terminal, server and storage medium
WO2022193425A1 * | 2021-03-19 | 2022-09-22 | 深圳市韶音科技有限公司 | Exercise data display method and system

Also Published As

Publication number | Publication date
CN105635669B (en) | 2019-03-01

Similar Documents

Publication | Title
CN105635669A (en) | Movement contrast system based on three-dimensional motion capture data and actually photographed videos and method thereof
CN107976811B (en) | Virtual reality mixing-based method simulation laboratory simulation method of simulation method
CN107341832B (en) | Multi-view switching shooting system and method based on infrared positioning system
KR101295471B1 (en) | A system and method for 3D space-dimension based image processing
CN104331929B (en) | Scene of a crime restoring method based on video map and augmented reality
CN110969905A (en) | Remote teaching interaction and teaching aid interaction system for mixed reality and interaction method thereof
CN107079184A (en) | Interactive binocular video display
CN109817031B (en) | Limbs movement teaching method based on VR technology
US20160269685A1 (en) | Video interaction between physical locations
JP2017507557A (en) | Process for improving the quality of experience for users who view high-definition video streams on their devices
JPWO2018088037A1 (en) | Control device for movable imaging device, control method for movable imaging device, and program
CN106293087B (en) | A kind of information interacting method and electronic equipment
CN107392853A (en) | Double-camera video frequency merges distortion correction and viewpoint readjustment method and system
CN206340066U (en) | Visual human's On-the-spot Interaction performance system
CN106373142A (en) | Virtual character on-site interaction performance system and method
CN212231547U (en) | Mixed reality virtual preview shooting system
CN103543827A (en) | Immersive outdoor activity interactive platform implement method based on single camera
CN108509173A (en) | Image shows system and method, storage medium, processor
CN112509401A (en) | Remote real-practice teaching method and system based on augmented reality projection interaction
CN105183161A (en) | Synchronized moving method for user in real environment and virtual environment
CN105933637A (en) | Video communication method and system
CN110312121A (en) | A kind of 3D intellectual education monitoring method, system and storage medium
CN107809563A (en) | A kind of writing on the blackboard detecting system, method and device
KR20160136160A (en) | Virtual Reality Performance System and Performance Method
CN105892638A (en) | Virtual reality interaction method, device and system

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
GR01 | Patent grant
