
Intelligent media interaction method based on MR technology

Info

Publication number
CN114185431A
CN114185431A
Authority
CN
China
Prior art keywords
image, real-time, intelligent terminal, intelligent
Prior art date: 2021-11-24
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111398973.1A
Other languages
Chinese (zh)
Other versions
CN114185431B (en)
Inventor
顾卫永
陈晓帆
刘俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Xinhua Media Co ltd
Original Assignee
Anhui Xinhua Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2021-11-24
Publication date: 2022-03-15
Application filed by Anhui Xinhua Media Co Ltd
Priority to CN202111398973.1A
Publication of CN114185431A
Application granted
Publication of CN114185431B
Legal status: Active
Anticipated expiration

Abstract

The invention relates to an intelligent media interaction method based on MR technology. A position identification module in the MR intelligent terminal identifies the scene ahead and computes real-time geographic position coordinates together with a real-time marker image corresponding to those coordinates. A motion state module in the terminal identifies whether the user's current motion state is suitable for pushing specified information. An AI algorithm in the terminal compares the real-time coordinates and marker image with the specimen images stored in the terminal's local database and, when the recognition result matches the preset, screens the information to be played from the specified information. When the motion state module determines that the user's current motion state is suitable for pushing, the display area of the MR intelligent terminal shows the information to be played. The invention uses MR technology to improve interaction with intelligent media: the interaction modes are diversified, triggering accuracy is high, flexibility is good, and the user experience is strong.

Description

Intelligent media interaction method based on MR technology
Technical Field
The invention relates to an intelligent media interaction method based on MR (mixed reality) technology, belonging to the technical field of media interaction.
Background
MR technology refers to mixed reality technology. Mixed Reality (MR) is a further development of virtual reality technology that builds an interactive feedback loop between the real world, the virtual world, and the user by presenting virtual scene information within the real scene, thereby enhancing the realism of the user experience.
Mixed reality is a combination of technologies that provides not only new viewing methods but also new input methods, and the methods reinforce one another, thereby promoting innovation. The combination of input and output is a key differentiating advantage for small and medium-sized enterprises: mixed reality can directly influence workflows and help staff improve work efficiency and innovation capability. Some feasible schemes illustrate its working principle and its benefits.
Mixed reality (MR, covering both augmented reality and augmented virtuality) refers to a new visualization environment created by merging the real and virtual worlds. Physical and digital objects coexist in this environment and interact in real time. Such a system generally has three main characteristics:
1. it combines the virtual and the real; 2. it is registered in virtual three dimensions (3D registration); 3. it runs in real time.
Mixed Reality (MR) must operate in an environment where real-world objects can interact with virtual ones. If everything is virtual, that is the domain of VR; if the displayed virtual information can only be simply superimposed on real things, that is augmented reality (AR). The key points of MR are interaction with the real world and timely acquisition of information.
At present, the representative device of MR technology is MR glasses; a classic application is shared MR glasses in scenic spots, which more and more scenic areas and visitors accept as wearable devices for MR applications. However, existing equipment shows many disadvantages in practical use. For example, some MR glasses are not very intelligent within virtual scenes: a handheld remote controller is needed for operations such as clicking and selecting, and desktop-style graphical menus must be operated with auxiliary voice input.
In such triggering methods, the MR glasses must be connected to a peripheral, through either a handheld remote controller or pressure gloves, which limits the degree of human-computer interaction and the dynamic, engaging effect. On the other hand, voice-input commands are constrained by existing voice input technology; in particular, the barrier of voice input accuracy has still not been overcome.
These two points limit the development of existing MR technology in the media interaction field.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an intelligent media interaction method based on MR technology, with the following specific technical scheme:
an intelligent media interaction method based on MR technology comprises the following steps:
Step S1, using the position identification module in the MR intelligent terminal to recognize the scene ahead and compute the real-time geographic position coordinates and a real-time marker image corresponding to those coordinates;
Step S2, using the motion state module in the MR intelligent terminal to identify whether the user's current motion state is suitable for pushing specified information;
Step S3, using the AI algorithm in the MR intelligent terminal to compare the real-time geographic position coordinates and the real-time marker image with the specimen images stored in the terminal's local database, and screening the information to be played from the specified information when the recognition result matches the preset; when the motion state module identifies that the user's current motion state is suitable for pushing, the display area of the MR intelligent terminal displays the information to be played.
As an improvement of the above technical solution, the working method of the position identification module comprises:
the position identification module photographs the scene ahead to generate a real-time image and compares it for similarity with the specimen images stored in the local database of the MR intelligent terminal, selecting the specimen image most similar to the real-time image and calibrating it as the module image; the specimen image and the module image both carry a position calibration point and a rectangular calibration block centered on that point; the real-time image is gridded to obtain a set of grid blocks, and the calibration block is compared in turn with each rectangular grid block using a similarity algorithm; when the similarity between the calibration block and a grid block is greater than or equal to the preset threshold, the position information carried by the calibration point corresponding to the calibration block is taken as the real-time geographic position coordinates, and the real-time marker image is a rectangular picture cropped from the real-time image centered on those coordinates.
As an improvement of the above technical solution, the operating method of the motion state module in step S2 includes:
An acceleration sensor and a gyroscope mounted on the MR intelligent terminal record the user's real-time motion state data set, which is compared with a preset data set in the terminal's local database to determine the corresponding motion state.
As an improvement of the above technical solution, the preset threshold is obtained by an AI algorithm, as follows:
a template wall assists training; the wall is formed by an array of rectangular module units, each module unit comprising a transparent box containing a thermochromic layer, with a position mark on the front surface of the thermochromic layer, an electric heating plate fixed to its back surface, and a thermal insulation layer filling the space between the edge of the heating plate and the inner wall of the box; heating with the electric heating plate changes the color of the thermochromic layer and thereby the pattern displayed by the template wall; the MR intelligent terminal photographs the template wall to obtain a first image set, and each first image in the set is grayscale-processed to obtain a second image set;
a twin neural network is constructed and trained; the network comprises two symmetrically arranged convolutional neural networks (CNNs), and two symmetrically arranged hidden Markov models are introduced into the generators of the CNNs for adversarial training; the second image set is input into the twin network, a defect random-generation module applies defect processing to each second image to obtain a third image, the CNN generator receives the third image and extracts a feature image, the feature image is compared for similarity with the real-time marker image, adversarial learning is performed, and the preset threshold is regressed from the adversarial learning result.
As an improvement of the technical scheme, the MR intelligent terminal comprises MR intelligent glasses and an MR intelligent helmet.
As an improvement of the technical scheme, the transparent box is made of a glass material.
As an improvement of the above technical solution, the thermochromic layer is made of a reversible thermochromic material.
As an improvement of the technical scheme, the heat insulation layer is made of heat insulation epoxy resin through filling and curing.
As an improvement of the technical scheme, the position mark is one of a cross shape, an asterisk-like (米-character) shape, and similar marks with a distinct cross point.
The invention has the beneficial effects that:
The invention uses MR technology to improve the way users interact with intelligent media and provides them with more diversified information. Image recognition technology triggers the push content specified by the merchant, and a deep learning algorithm remedies the shortcomings of existing image recognition technology, improving triggering accuracy and flexibility and making human-computer interaction friendlier and more convenient. Cases of low triggering accuracy, or of failure to trigger accurately because of objective factors such as season, temperature, illumination, and shooting angle, are reduced, and the user experience is significantly improved.
Drawings
FIG. 1 is a schematic view of the structure of the template wall of the present invention;
FIG. 2 is a schematic structural diagram of a module unit according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
The intelligent media interaction method based on MR technology comprises the following steps:
Step S1, using the position identification module in the MR intelligent terminal to recognize the scene ahead and compute the real-time geographic position coordinates and a real-time marker image corresponding to those coordinates;
Step S2, using the motion state module in the MR intelligent terminal to identify whether the user's current motion state is suitable for pushing specified information;
Step S3, using the AI algorithm in the MR intelligent terminal to compare the real-time geographic position coordinates and the real-time marker image with the specimen images stored in the terminal's local database, and screening the information to be played from the specified information when the recognition result matches the preset; when the motion state module identifies that the user's current motion state is suitable for pushing, the display area of the MR intelligent terminal displays the information to be played.
The MR intelligent terminal includes MR smart glasses and MR smart helmets; in this embodiment, the MR intelligent terminal is preferably MR smart glasses.
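To make the flow of steps S1-S3 concrete, a minimal Python skeleton follows. Every class and method name in it (MRMediaInteraction, recognize_front_scene, suitable_for_push, and so on) is an illustrative assumption, not a name taken from the patent.

```python
# Hypothetical skeleton of the S1-S3 interaction loop; module and method
# names are illustrative, not taken from the patent text.

class MRMediaInteraction:
    def __init__(self, position_module, motion_module, local_db):
        self.position_module = position_module  # S1: scene recognition
        self.motion_module = motion_module      # S2: motion-state check
        self.local_db = local_db                # specimen images + push rules

    def tick(self, specified_info):
        # S1: recognize the scene ahead -> coordinates + real-time marker image
        coords, marker_image = self.position_module.recognize_front_scene()
        # S3: compare against stored specimen images with the AI algorithm
        if self.local_db.matches_preset(coords, marker_image):
            to_play = self.local_db.screen_info(specified_info, coords)
            # S2 gates the push: display only when the motion state permits it
            if self.motion_module.suitable_for_push():
                return to_play  # shown in the MR terminal's display area
        return None  # nothing to display this cycle
```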
Example 2
Based on Embodiment 1, the working method of the position identification module comprises the following steps:
the position identification module photographs the scene ahead to generate a real-time image and compares it for similarity with the specimen images stored in the local database of the MR intelligent terminal, selecting the specimen image most similar to the real-time image and calibrating it as the module image; the specimen image and the module image both carry a position calibration point and a rectangular calibration block centered on that point; the real-time image is gridded to obtain a set of grid blocks, and the calibration block is compared in turn with each rectangular grid block using a similarity algorithm; when the similarity between the calibration block and a grid block is greater than or equal to the preset threshold, the position information carried by the calibration point corresponding to the calibration block is taken as the real-time geographic position coordinates, and the real-time marker image is a rectangular picture cropped from the real-time image centered on those coordinates.
The reason for this design is that in practical application scenes, for example scenes in a scenic spot, factors such as season, temperature, illumination, and shooting angle make comparison between the real-time image and the specimen image very difficult. The specimen image is merely a picture taken in advance by a merchant who, for example, uses MR smart glasses to push corresponding MR content, such as video and images, to users. For commercial needs, when a user wearing the MR smart glasses reaches a specified position, the merchant's preset advertisement must be triggered in time. In an ordinary shopping mall, conventional technology can install a sensor at the specified position, and when the MR smart glasses move near the sensor an advertisement push command is triggered; this technique is too restricted, confined to specific sites, and very inflexible.
In a scenic-spot application, the merchant may require that when a user moves within 20-100 meters of a specified position, such as the Guest-Greeting Pine at Huangshan, an advertisement for "Guest-Greeting Pine" brand cigarettes be pushed to the user 1-3 times. Since installing a sensor near the Guest-Greeting Pine is not permitted, the sensor-based technique is impractical. An alternative is to photograph the Guest-Greeting Pine in real time, compare the photo with a picture taken in advance and stored in a database, and trigger the advertisement push command when the similarity reaches a certain level; this technique is very flexible in theory. However, several problems remain:
First, the scenery of a scenic spot is affected by factors such as season, temperature, illumination, and shooting angle, which makes comparison between the real-time image and the specimen image very difficult.
Second, because the number of specimen images is limited, how should the required similarity between the real-time image and the specimen image be chosen so that the advertisement push command is triggered reliably while identification remains accurate?
In this embodiment, the specimen image with the maximum similarity to the real-time image is found and calibrated as the module image. Even if the real-time image differs greatly from the specimen image because of season, temperature, illumination, shooting angle, and similar factors, the most similar specimen image can still be found while the rest are excluded, which minimizes the amount of computation. The rectangular calibration block matches the subsequent gridding technique. When the similarity between the calibration block and a rectangular grid block in the gridded block set is greater than or equal to the preset threshold, a "preset point" agreed with the merchant has been found. Because gridding is used, the position information carried by the calibration point corresponding to the calibration block (different positions have different image content, so different images represent different position information, equivalent to a coordinate point) serves as the real-time geographic position coordinates, and the real-time marker image facilitates subsequent deep learning and verification of the algorithm's accuracy.
Therefore, in this embodiment, even if the use environment is affected by season, temperature, illumination, shooting angle, and the like, comparison between the real-time image and the specimen image is unaffected; the proposed algorithm is thus markedly improved, has few application restrictions, and has a wider application range.
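The gridding-and-matching step above can be sketched as follows. The patent says only that "a comparison algorithm adopts a similarity algorithm", so the cosine measure, the NumPy grayscale representation, and the non-overlapping grid traversal are all assumptions.

```python
import numpy as np

def block_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two equal-sized grayscale blocks; the
    patent only requires 'a similarity algorithm', so this is one choice."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def locate_calibration_block(real_time_img: np.ndarray,
                             calib_block: np.ndarray,
                             threshold: float):
    """Grid the real-time image into blocks the size of the calibration
    block, traverse them, and return the best cell meeting the threshold."""
    bh, bw = calib_block.shape
    h, w = real_time_img.shape
    best_cell, best_sim = None, 0.0
    for y in range(0, h - bh + 1, bh):        # gridding: block-sized steps
        for x in range(0, w - bw + 1, bw):
            cell = real_time_img[y:y + bh, x:x + bw]
            sim = block_similarity(cell, calib_block)
            if sim >= threshold and sim > best_sim:
                best_cell, best_sim = (y, x), sim
    return best_cell, best_sim  # (top-left of matched cell, similarity)
```

The position information carried by the calibration point of the matched block would then serve as the real-time geographic position coordinates, and a rectangle centered there would be cropped as the real-time marker image, as described above.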
Example 3
Based on Embodiment 2, the working method of the motion state module in step S2 comprises:
An acceleration sensor and a gyroscope mounted on the MR intelligent terminal record the user's real-time motion state data set, which is compared with a preset data set in the terminal's local database to determine the corresponding motion state.
Application scenario: the acceleration sensor and gyroscope of the MR intelligent terminal record the user's real-time motion state data set. For example, when an ordinary man walks, the sensors collect the corresponding data, which are compared with a standard database of walking data; if they match, the user is judged to be in a walking state, and pushing advertisements to the user is not recommended in that state. If the user is at leisure or resting, advertisements may appropriately be pushed.
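A minimal sketch of such a comparison follows, assuming the "preset data set" is reduced to small reference feature vectors; the feature choice and the numeric values below are invented for illustration.

```python
import numpy as np

# Hypothetical reference feature vectors (mean/std of accelerometer and
# gyroscope magnitudes); the patent only says the real-time data set is
# compared with a preset data set in the local database.
PRESET_STATES = {
    "walking": np.array([1.2, 0.45, 0.8, 0.30]),
    "resting": np.array([0.1, 0.05, 0.1, 0.02]),
}
PUSH_ALLOWED = {"walking": False, "resting": True}

def extract_features(accel: np.ndarray, gyro: np.ndarray) -> np.ndarray:
    """Reduce raw 3-axis samples (N x 3 arrays) to a small feature vector."""
    a_mag = np.linalg.norm(accel, axis=1)
    g_mag = np.linalg.norm(gyro, axis=1)
    return np.array([a_mag.mean(), a_mag.std(), g_mag.mean(), g_mag.std()])

def classify_motion(accel: np.ndarray, gyro: np.ndarray):
    """Pick the nearest preset state by Euclidean distance in feature space
    and report whether pushing information is allowed in that state."""
    feats = extract_features(accel, gyro)
    state = min(PRESET_STATES,
                key=lambda s: np.linalg.norm(feats - PRESET_STATES[s]))
    return state, PUSH_ALLOWED[state]
```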
Example 4
Based on Embodiment 3, the preset threshold is obtained by an AI algorithm, as follows:
The template wall 10 assists training. As shown in fig. 1 and 2, the template wall 10 is formed by an array of rectangular module units 11. Each module unit 11 comprises a transparent box 111; a thermochromic layer 112 is arranged inside the transparent box 111, a position mark is printed on the front surface of the thermochromic layer 112, an electric heating plate 113 is fixedly mounted on its back surface, and a heat insulation layer 114 fills the space between the edge of the electric heating plate 113 and the inner wall of the transparent box 111. The color of the thermochromic layer 112 is changed by heating with the electric heating plate 113, which changes the pattern displayed on the template wall 10. The MR smart terminal photographs the template wall 10 to obtain a first image set, and each first image in the set is grayscale-processed to obtain a second image set.
A twin neural network is constructed and trained. It comprises two symmetrically arranged convolutional neural networks (CNNs), and two symmetrically arranged hidden Markov models are introduced into the generators of the CNNs for adversarial training. The second image set is input into the twin network; the defect random-generation module applies defect processing to each second image to obtain a third image; the CNN generator receives the third image and extracts a feature image, which is compared for similarity with the real-time marker image; adversarial learning is performed, and the preset threshold is regressed from the adversarial learning result.
The defect random-generation module performs defect processing (such as cropping) on each second image in the second image set in a random-forest manner to obtain the third images.
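A minimal stand-in for this defect processing is a random rectangular blank-out, approximating the "cropping" example; the random-forest-driven selection of defects is not detailed in the patent, so plain uniform randomness is assumed here.

```python
import numpy as np

def random_defect(img: np.ndarray, rng: np.random.Generator,
                  max_frac: float = 0.3) -> np.ndarray:
    """Blank out a random rectangle to mimic cropping/occlusion defects.
    A stand-in for the patent's defect random-generation module; the
    random-forest-driven variant is not described in reproducible detail."""
    h, w = img.shape[:2]
    dh = int(rng.integers(1, max(2, int(h * max_frac))))
    dw = int(rng.integers(1, max(2, int(w * max_frac))))
    y = int(rng.integers(0, h - dh + 1))
    x = int(rng.integers(0, w - dw + 1))
    out = img.copy()
    out[y:y + dh, x:x + dw] = 0   # simulated defect region
    return out

# Usage: third_image = random_defect(second_image, np.random.default_rng(0))
```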
In the invention, the preset threshold is not set arbitrarily by designers; it is computed by a deep learning algorithm from a large number of samples, which is more practical and more accurate. A poorly chosen preset threshold directly degrades the humanization of the method, and the user experience drops sharply.
In this embodiment, the template wall 10 assists training: different module units 11 are controlled to change color into a corresponding color image, so each rectangular module unit 11 acts as a "pixel color block" and the template wall 10 displays different color patterns. Because the randomly generated patterns are physical, they can simulate images of real scene objects. By contrast, images shown on an electronic display screen are flatter and more two-dimensional, photograph very poorly, and increase the burden of subsequent image processing.
By using the template wall 10 to randomly generate different patterns, image defects caused by factors such as season, temperature, illumination, and shooting angle can be simulated, thereby generating corresponding samples.
In this embodiment, deep learning is performed by constructing a twin neural network. The two symmetrically arranged convolutional neural networks (CNNs) can cross-validate each other, which markedly raises the sample processing rate. Exploiting the doubly stochastic character of hidden Markov models, and arranging two of them symmetrically, allows 2^n defect samples to be generated from the same image; these samples can be cross-compared and used as regression checks during adversarial learning, improving the accuracy of the regressed preset threshold. Introducing the two symmetrically arranged hidden Markov models into the two symmetrically arranged CNNs for adversarial training also helps control the robustness of the system. The deep learning process does not require photographing a large number of samples, which addresses the small-sample problem of practical applications and the low accuracy it causes in conventional deep learning, with little overfitting.
That is, even when the number of specimen images is limited, the accuracy and timeliness of triggering advertisement push commands can be guaranteed, significantly improving the user experience and avoiding awkward situations such as false triggering or failure to trigger.
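As a rough PyTorch sketch of the twin structure: two shared-weight (Siamese) branches embed a defect-processed template-wall image and a marker image, then score their similarity. The layer sizes are invented, and the hidden-Markov adversarial component is omitted because the patent does not specify it in reproducible detail.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One CNN branch of the twin network; the architecture is illustrative,
    as the patent does not specify layer shapes."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(128),
        )

    def forward(self, x):
        return self.features(x)

class TwinNet(nn.Module):
    """Shared-weight twin network scoring similarity between two images."""
    def __init__(self):
        super().__init__()
        self.branch = Branch()  # one module reused = symmetric, shared weights

    def forward(self, x1, x2):
        e1, e2 = self.branch(x1), self.branch(x2)
        return nn.functional.cosine_similarity(e1, e2)

if __name__ == "__main__":
    x1 = torch.randn(4, 1, 64, 64)   # defect-processed template-wall batch
    x2 = torch.randn(4, 1, 64, 64)   # real-time marker images
    print(TwinNet()(x1, x2))         # one similarity score per pair
```

During training, similarity scores collected over many defect samples could then be used to regress the preset decision threshold, in the spirit of the adversarial-learning regression described above.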
Example 5
Based on Embodiment 4, the transparent box 111 is made of glass. The thermochromic layer 112 is made of a reversible thermochromic material. The heat insulation layer 114 is made of thermal-insulating epoxy resin by filling and curing; providing the heat insulation layer 114 prevents two adjacent module units 11 from interfering with each other.
The position mark is a cross shape, an asterisk-like (米-character) shape, or a similar mark; in this embodiment, a cross is preferred. The position mark must have a distinct cross point, which benefits the accuracy of image identification, and its structure must not be too complex, which eases image recognition.
Example 6
In the invention, the media interaction method further comprises setting up an advertisement platform that receives merchants' advertisement placements; on the platform a merchant can configure advertisement content such as introductions, coupons, and discount information.
Interaction between the user and the media can also include eye-movement interaction, gesture recognition, voiced/unvoiced speech recognition, head-movement interaction, and the like. When the hand makes a specific motion, motion information of the user's hand is received, or a specific hand shape is recognized, and the specific motion is matched with control options such as sliding up/down or left/right, zooming in, zooming out, clicking, and closing, so as to control the related content accordingly. Compared with ordinary speech input, recognition of voiced/unvoiced commands is more accurate.
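As an illustration, matching recognized gestures to those control options can be a simple lookup; the gesture labels and action names below are hypothetical, and the gesture recognizer itself is outside the scope of the patent text.

```python
from typing import Optional

# Hypothetical gesture-to-action table; labels are illustrative only.
GESTURE_ACTIONS = {
    "swipe_up": "scroll_up",
    "swipe_down": "scroll_down",
    "swipe_left": "previous_item",
    "swipe_right": "next_item",
    "pinch_in": "zoom_out",
    "pinch_out": "zoom_in",
    "tap": "click",
    "fist": "close",
}

def dispatch(gesture: str) -> Optional[str]:
    """Map a recognized hand gesture to the matching control option."""
    return GESTURE_ACTIONS.get(gesture)
```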
In this embodiment, MR technology improves the way users interact with intelligent media, providing them with more diversified information. Image recognition technology triggers the push content specified by the merchant, and a deep learning algorithm remedies the shortcomings of existing image recognition technology, improving triggering accuracy and flexibility and making human-computer interaction friendlier and more convenient. Cases of low triggering accuracy, or of failure to trigger accurately because of objective factors such as season, temperature, illumination, and shooting angle, are reduced, and the user experience is significantly improved.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall fall within the scope of the present invention.

Claims (9)

the position identification module photographs the scene ahead to generate a real-time image and compares it for similarity with the specimen images stored in the local database of the MR intelligent terminal, selecting the specimen image most similar to the real-time image and calibrating it as the module image; the specimen image and the module image both carry a position calibration point and a rectangular calibration block centered on that point; the real-time image is gridded to obtain a set of grid blocks, and the calibration block is compared in turn with each rectangular grid block using a similarity algorithm; when the similarity between the calibration block and a grid block is greater than or equal to the preset threshold, the position information carried by the calibration point corresponding to the calibration block is taken as the real-time geographic position coordinates, and the real-time marker image is a rectangular picture cropped from the real-time image centered on those coordinates.
a template wall assists training; the wall is formed by an array of rectangular module units, each module unit comprising a transparent box containing a thermochromic layer, with a position mark on the front surface of the thermochromic layer, an electric heating plate fixed to its back surface, and a thermal insulation layer filling the space between the edge of the heating plate and the inner wall of the box; heating with the electric heating plate changes the color of the thermochromic layer and thereby the pattern displayed by the template wall; the MR intelligent terminal photographs the template wall to obtain a first image set, and each first image in the set is grayscale-processed to obtain a second image set;
CN202111398973.1A | 2021-11-24 | 2021-11-24 | Intelligent media interaction method based on MR technology | Active | CN114185431B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111398973.1A (CN114185431B) | 2021-11-24 | 2021-11-24 | Intelligent media interaction method based on MR technology

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111398973.1A (CN114185431B) | 2021-11-24 | 2021-11-24 | Intelligent media interaction method based on MR technology

Publications (2)

Publication Number | Publication Date
CN114185431A | 2022-03-15
CN114185431B (en) | 2024-04-02

Family

ID=80541346

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111398973.1A (Active; CN114185431B) | Intelligent media interaction method based on MR technology | 2021-11-24 | 2021-11-24

Country Status (1)

Country | Link
CN (1) | CN114185431B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2015207168A (en)* | 2014-04-21 | 2015-11-19 | Kddi株式会社 | Information presentation system, method, and program
CN109344699A (en)* | 2018-08-22 | 2019-02-15 | 天津科技大学 | Disease identification method of winter jujube based on hierarchical deep convolutional neural network
US20210303854A1 (en)* | 2020-03-26 | 2021-09-30 | Varjo Technologies Oy | Imaging system and method for producing images with virtually-superimposed functional elements
CN111741287A (en)* | 2020-07-10 | 2020-10-02 | 南京新研协同定位导航研究院有限公司 | A method for MR glasses to trigger content using location information
CN112084849A (en)* | 2020-07-31 | 2020-12-15 | 华为技术有限公司 | Image recognition method and device
CN112181152A (en)* | 2020-11-13 | 2021-01-05 | 幻蝎科技(武汉)有限公司 | Advertisement push management method, equipment and application based on MR glasses

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
丁德菊, "基于混合现实的人机交互系统设计" [Design of a human-computer interaction system based on mixed reality], 《西部广播电视》 [Western Radio and TV], pp. 237-238. *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114816057A (en)* | 2022-04-20 | 2022-07-29 | 广州瀚信通信科技股份有限公司 | Somatosensory intelligent terminal interaction method, device, equipment and storage medium

Also Published As

Publication number | Publication date
CN114185431B (en) | 2024-04-02

Similar Documents

Publication | Title
Li et al. | LayoutGAN: Synthesizing graphic layouts with vector-wireframe adversarial networks
CN107045844B (en) | A landscape guide method based on augmented reality technology
CN100407798C (en) | 3D geometric modeling system and method
CN110310175A (en) | System and method for mobile augmented reality
CN110503074A (en) | Information labeling method, apparatus, equipment and storage medium for video frames
JP2011022984A (en) | Stereoscopic video interactive system
CN115100742B (en) | Metaverse exhibition and demonstration experience system based on touch-free gesture operation
CN117369233B (en) | Holographic display method, device, equipment and storage medium
Yao et al. | Neural radiance field-based visual rendering: A comprehensive review
CN117611774A (en) | Multimedia display system and method based on augmented reality technology
CN108628455A (en) | A virtual sand-painting drawing method based on touch-screen gesture recognition
Bhakar et al. | A review on classifications of tracking systems in augmented reality
CN114185431B (en) | Intelligent media interaction method based on MR technology
CN116935008B (en) | A display interaction method and device based on mixed reality
CN111383343B (en) | An augmented reality image rendering and coloring method for home decoration design based on generative adversarial network technology
Tao | A VR/AR-based display system for arts and crafts museum
Gross et al. | Gesture modelling: Using video to capture freehand modeling commands
CN116974416A (en) | Data processing method, device, equipment and readable storage medium
CN100593175C (en) | Method and system for realizing organ animation
Weng et al. | Green landscape 3D reconstruction and VR interactive art design experience using digital entertainment technology and entertainment gesture robots
CN120472377B | A fast-moving small-target tracking method based on dual-modal fusion
Wang et al. | Virtual piano system based on monocular camera
Tu et al. | An automatic base expression selection algorithm based on local blendshape model
CN119383425B | An intelligent video animation generation system based on AIGC
CN119148849B | A transparent display interaction method for augmented reality

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
