VR Sensing Touch Device
Technical Field
The invention relates to the technical field of VR (virtual reality), in particular to a VR sensing touch device.
Background
At present, VR glasses have no universal interaction device. Most manufacturers provide a customized handle (or pair of handles) for their VR glasses, and these handles come in different types such as the remote-control type, the 3DOF (degree-of-freedom) type and the 6DOF type, with technical details that differ from vendor to vendor. Whatever the implementation, however, these are essentially motion-sensing handle devices: VR handles improve the playability and interactivity of VR glasses, but they offer no way to achieve fine, accurate control and input. In the era of the desktop computer, mouse and keyboard operation emerged, allowing users to type, position and select quickly; in the era of the mobile internet, the touch screen emerged, and with continued practice and adaptation users can reach very high input efficiency with full-keyboard typing on a small screen. For current VR interaction, by contrast, text input and accurate key-control technology remain immature and form an interaction shortfall, so most VR applications focus on viewing and entertainment: users mostly receive content passively and lack the ability to actively produce output.
Another major cause of difficult VR interaction is that users wear a closed VR head-mounted display throughout use and therefore cannot effectively sense the accurate positions of their hands, especially the fingers. Yet an important precondition for accurate input is that the eyes (whether in the main field of view or with peripheral vision) observe and track both hands to determine the input position; once both eyes are covered, input cannot be performed efficiently. To address this problem, various attempts have been made in the industry:
1. observing the fingers with a camera and feeding the result back to the VR head-mounted display; this scheme locks the positions of both hands, but in actual use it is neither flexible nor accurate enough, and the original freedom that is the advantage of VR is lost;
2. handles with more keys; these demand considerable practice and adaptation from the user, offer limited freedom, have no effective way to meet the need to indicate a hovering position, and invisibly raise the barrier to use;
3. changing the operating mode of the input method, for example "woodfish-tapping" schemes, virtual-keyboard schemes and the like.
In summary, existing VR devices cannot compete with traditional mouse, keyboard and touch-screen solutions, because their interaction devices are cumbersome to wear and use and are inefficient for both text input and precise key pressing. This has also kept text entry and accurate keying on VR devices from gaining mainstream acceptance in the industry.
Disclosure of Invention
Based on the above technical problems, the invention provides a VR sensing touch device, which solves the problem that existing VR equipment lacks an effective and convenient interaction mode for text input and accurate key pressing.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
A VR sensing touch device comprises a VR display terminal and a VR head-mounted display device. The VR display terminal is communicatively connected to an input device and a position sensing device. The input device is used for text input; the position sensing device locates the positions of the user's fingers on the input device and transmits the positioning data to the VR display terminal; the VR display terminal uses the positioning data to generate a virtual finger image and an input device image and transmits them to the VR head-mounted display device for synchronous display.
Preferably, the position sensing device comprises a sensing control module and a plurality of infrared correlation sensors, and the infrared correlation sensors are arranged around the input device and form at least one layer of infrared mesh above the input device.
In a preferred embodiment, the infrared correlation sensor is a single-point infrared correlation sensor.
In a preferred embodiment, the infrared correlation sensor is a grating-type infrared correlation sensor.
Preferably, the input device is an input keyboard or a touch screen, wherein the touch screen is a capacitive touch screen or a resistive touch screen.
Compared with the prior art, the invention has the beneficial effects that:
The invention senses, by means of the position sensing device, the finger positions used during input operations on the input device, generates a virtual image from the acquired finger positioning data, and displays the virtual image on the VR head-mounted display device, so that while wearing the VR head-mounted display the user can synchronously judge his or her current input state from the generated virtual image. This solves the problem that a closed VR display terminal prevents the user from sensing the accurate positions of both hands, especially the fingers, and therefore from conveniently completing text input or accurate key pressing, and it improves the user's operating experience and input efficiency.
Drawings
FIG. 1 is a schematic top view of the mechanism of the present invention.
FIG. 2 is a schematic diagram of the present invention in the finger hover state.
FIG. 3 is a schematic diagram of the present invention in the finger touch state.
Fig. 4 is a flow chart of the present invention.
In the figures: 11, input device; 12, position sensing device; 13, finger.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
Example 1:
Referring to FIGS. 1 to 4, a VR sensing touch device comprises a VR display terminal and a VR head-mounted display device. The VR display terminal is communicatively connected to an input device and a position sensing device. The input device is used for text input; the position sensing device locates the positions of the user's fingers on the input device and transmits the positioning data to the VR display terminal; the VR display terminal uses the positioning data to generate a virtual finger image and an input device image and transmits them to the VR head-mounted display device for synchronous display.
In this embodiment, when the user performs text input or accurate key pressing, the position sensing device 12 arranged on the input device 11 senses the position of the user's finger 13 and transmits the positioning data to the VR display terminal, and the VR display terminal uses the positioning data to generate a virtual finger 13 image and an input device 11 image and transmits them to the VR head-mounted display device for synchronous display. Thus, while wearing the closed VR head-mounted display, the user can take the virtual finger 13 image displayed inside the headset as visual feedback, use the virtual finger 13 as a reference, move the finger 13 and correctly and quickly select the target position on the input device 11, achieving rapid text input or accurate key pressing without looking at the physical hands. By synchronously judging the current input state from the generated virtual image, the problem that the closed VR head-mounted display prevents the user from sensing the accurate positions of both hands, especially the fingers 13, is solved, and the user's operating experience and input efficiency are improved.
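As a purely illustrative aid (not part of the claimed device), the following minimal sketch shows the order in which positioning data flows from the position sensing device 12 to the VR head-mounted display; all function names and values are hypothetical stand-ins introduced only for this sketch.

    # Purely illustrative sketch of the sensing-to-display flow; all names and
    # values are hypothetical stand-ins, not part of the claimed device.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FingerPosition:
        x: float  # horizontal-axis position over the input device 11
        y: float  # vertical-axis position over the input device 11
        z: float  # height of the finger 13 above the input device surface

    def read_positioning_data() -> List[FingerPosition]:
        """Stand-in for the positioning data reported by the position sensing device 12."""
        return [FingerPosition(x=0.42, y=0.18, z=0.006)]

    def render_virtual_frame(fingers: List[FingerPosition]) -> dict:
        """Stand-in for the VR display terminal composing the virtual finger 13
        image and the input device 11 image (e.g. with an engine such as Unity3D)."""
        return {"input_device_image": "pre-generated layout", "virtual_fingers": fingers}

    def display_on_headset(frame: dict) -> None:
        """Stand-in for synchronous display on the VR head-mounted display."""
        print(frame)

    if __name__ == "__main__":
        display_on_headset(render_virtual_frame(read_positioning_data()))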
Preferably, the VR display terminal generates the images of the finger 13 and the input device 11 using software such as Quest3D or Unity3D.
Further, the position sensing device 12 is composed of a sensing control module and a plurality of infrared correlation sensors, and the infrared correlation sensors are arranged around the input device 11 and form at least one layer of infrared mesh above the input device 11.
When the user performs an input operation on the input device 11 with the finger 13, the finger 13 follows the action sequence "hover → approach → touch → move away → hover". During this process the finger 13 passes through the infrared mesh formed above the input device 11 by the infrared correlation sensors (each infrared correlation sensor comprises an infrared emitter and an infrared receiver, with infrared beams formed between them). Because of the blocking effect of the finger 13, some of the infrared beams are interrupted and cannot reach their receivers, and the space enclosed by the remaining unblocked beams adjacent to the interrupted ones is the positioning space of the finger 13. By further combining multiple layers of infrared mesh, the infrared correlation sensors provide a horizontal-axis sensing area, a vertical-axis sensing area and a height sensing area for the finger, as shown in FIGS. 2 and 3. It follows that the mesh size, density and number of layers of the infrared mesh determine the positioning accuracy of the finger 13: the smaller the mesh, the higher the infrared density, and the greater the number of mesh layers, the more accurate the positioning contour of the finger 13. The infrared correlation sensors transmit the positioning data to the VR display terminal through the sensing control module; interference and false triggering are rejected by a filtering algorithm, multilayer projections of both hands are formed in the infrared mesh area, and the VR display terminal can then generate the virtual finger 13 images that serve as the user's visual-feedback basis in the VR scene.
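As a rough illustration of the localization principle described above, the sketch below treats one mesh layer as a row of horizontal-axis beams and a row of vertical-axis beams: the finger region is bounded by the interrupted beams, and a simple debounce rule stands in for the filtering that rejects interference and false triggering. The beam pitch, beam counts and debounce rule are assumptions made only for this sketch and are not parameters of the embodiment.

    # Illustrative localization of the finger 13 from interrupted infrared beams.
    # Beam pitch, beam counts and the debounce rule are assumed values.
    from typing import List, Optional, Tuple

    BEAM_PITCH_MM = 4.0  # assumed spacing between adjacent beams in one mesh layer

    def blocked_span(beams_blocked: List[bool]) -> Optional[Tuple[float, float]]:
        """Return the (min, max) coordinate of the interrupted span, in mm.
        This span bounds the positioning space of the finger 13 on one axis."""
        idx = [i for i, blocked in enumerate(beams_blocked) if blocked]
        if not idx:
            return None
        return idx[0] * BEAM_PITCH_MM, idx[-1] * BEAM_PITCH_MM

    def locate_finger(x_beams: List[bool], y_beams: List[bool]) -> Optional[Tuple[float, float]]:
        """Centre of the finger contour in one mesh layer, or None if no beam is blocked."""
        sx, sy = blocked_span(x_beams), blocked_span(y_beams)
        if sx is None or sy is None:
            return None
        return (sum(sx) / 2.0, sum(sy) / 2.0)

    def debounce(readings: List[Optional[Tuple[float, float]]], min_hits: int = 3):
        """Very simple stand-in for the filtering algorithm: accept a position
        only if it persists for several consecutive samples."""
        hits = [r for r in readings if r is not None]
        return hits[-1] if len(hits) >= min_hits else None

    if __name__ == "__main__":
        x = [False] * 20; x[5:8] = [True] * 3   # beams 5-7 interrupted on the horizontal axis
        y = [False] * 20; y[2:5] = [True] * 3   # beams 2-4 interrupted on the vertical axis
        pos = locate_finger(x, y)
        print(pos)                               # -> (24.0, 12.0) with the assumed pitch
        print(debounce([pos, pos, pos]))         # persists over 3 samples -> accepted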
The working process of the VR sensing touch device can therefore be summarized as: data from the position sensing device → generation of the gesture projection → generation of the gesture action → perception of touch and hover gestures → display of the virtual gesture on the VR head-mounted display.
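One step in this chain, distinguishing a hover gesture from a touch gesture, can be sketched as follows under the assumption that the stacked mesh layers sit at known heights and that interrupting the lowest layer corresponds to touching the surface; this rule is an illustrative assumption, not a limitation of the embodiment.

    # Illustrative hover/touch classification from multi-layer mesh interruptions.
    # The "lowest layer = touch" rule is an assumption made for this sketch only.
    from typing import List

    def classify(layers_interrupted: List[bool]) -> str:
        """Index 0 is the mesh layer closest to the input device surface.
        Return 'touch' when that layer is interrupted, 'hover' when only
        higher layers are interrupted, and 'idle' otherwise."""
        if layers_interrupted[0]:
            return "touch"
        if any(layers_interrupted):
            return "hover"
        return "idle"

    if __name__ == "__main__":
        print(classify([False, True, True]))   # finger approaching -> 'hover' (cf. FIG. 2)
        print(classify([True, True, True]))    # finger on the surface -> 'touch' (cf. FIG. 3)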
Since the position sensing device 12 is mounted on the input device 11, the mounting position, distance and size of the position sensing device 12 relative to the input device 11 are easy to determine, so a virtual image of the input device 11 can be generated in advance by the VR display terminal, and the position of the finger 13 on the input device 11 can be determined from the position reported by the position sensing device 12.
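Because the mounting offset is fixed and known, translating a sensed coordinate into a location on the pre-generated virtual image of the input device 11 reduces to applying that offset and looking up the underlying key, roughly as sketched below. The offset, key pitch and layout are assumed values introduced only for illustration.

    # Illustrative mapping from sensed coordinates to a key on the input device 11.
    # Offset, key pitch and layout are assumed values, not embodiment parameters.
    from typing import Optional

    SENSOR_TO_KEYBOARD_OFFSET_MM = (12.0, 8.0)       # assumed fixed mounting offset of device 12
    KEY_PITCH_MM = 19.0                              # assumed key spacing of the input keyboard
    LAYOUT = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]  # simplified illustrative layout

    def key_at(sensed_x_mm: float, sensed_y_mm: float) -> Optional[str]:
        """Convert a coordinate from the position sensing device's frame into the
        input device's frame, then look up which key lies under the finger 13."""
        kx = sensed_x_mm - SENSOR_TO_KEYBOARD_OFFSET_MM[0]
        ky = sensed_y_mm - SENSOR_TO_KEYBOARD_OFFSET_MM[1]
        row, col = int(ky // KEY_PITCH_MM), int(kx // KEY_PITCH_MM)
        if 0 <= row < len(LAYOUT) and 0 <= col < len(LAYOUT[row]):
            return LAYOUT[row][col]
        return None

    if __name__ == "__main__":
        print(key_at(50.0, 10.0))   # -> 'E' with the assumed offset and pitch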
Further, the infrared correlation sensor is a single-point infrared correlation sensor. A single-point infrared correlation sensor consists of one infrared emitter paired with one infrared receiver; a position sensing device 12 built from single-point infrared correlation sensors has the advantage that the size and direction of the infrared mesh it forms are adjustable, giving better flexibility.
Alternatively, the infrared correlation sensor is a grating-type infrared correlation sensor. A grating-type infrared correlation sensor consists of a strip-shaped infrared emitter and a strip-shaped infrared receiver carrying a plurality of infrared sensing beams; a position sensing device 12 built from grating-type infrared correlation sensors has the advantages of a more compact structure, fewer parts and easier installation.
Further, the input device 11 is an input keyboard or a touch screen, wherein the touch screen is a capacitive touch screen or a resistive touch screen.
The above is an embodiment of the present invention. The embodiments and the specific parameters therein are intended only to clearly illustrate the verification process of the invention and not to limit the scope of patent protection of the invention, which is defined by the claims; all equivalent structural changes made using the contents of the description and drawings of the present invention shall likewise fall within the protection scope of the present invention.