Disclosure of Invention
In order to solve the problems in the prior art, the embodiments of the present disclosure provide an input method, an apparatus, and a device for obtaining haptic feedback in an extended reality space. One or more surfaces of the user's body, or one or more surfaces of an object in the extended reality space, are used as preset input surfaces, and any part of the user's body is used as an input source. The relationship between the input source and each preset input surface is tracked to determine whether to enter an input mode and to determine a target input surface. When the input mode is entered, a virtual keyboard is rendered on the target input surface, and the user completes the input of the corresponding virtual keys on the virtual keyboard through the interaction of the body part corresponding to the input source on the target input surface.
In order to solve any one of the above technical problems, the specific technical solutions in this specification are as follows:
In one aspect, embodiments of the present disclosure provide an input method for obtaining haptic feedback in an extended reality space, the method including:
taking one or more surfaces of the user's body or one or more surfaces of an object as preset input surfaces, respectively, and taking at least one body part of the user as an input source;
tracking the input source and each preset input surface, and when the relationship between the input source and a preset input surface meets a predetermined requirement, taking that preset input surface as a target input surface and generating a virtual keyboard on the target input surface; and
completing, by the user, the input of the corresponding virtual keys on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface.
Further, the process in which the user completes the input of the corresponding virtual key on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface includes:
when the body part corresponding to the input source approaches the target input surface, pre-selecting the virtual key corresponding to the position of the input source, and then, when the body part corresponding to the input source leaves the target input surface, inputting the pre-selected virtual key.
Further, when the user brings the body part corresponding to the input source close to the target input surface, pre-selecting the virtual key corresponding to the position of the input source further includes:
judging whether the distance between the input source and the target input surface is smaller than a pre-selection distance threshold; and
if so, pre-selecting the virtual key corresponding to the position of the input source.
Further, when the user brings the body part corresponding to the input source close to the target input surface, pre-selecting the virtual key corresponding to the position of the input source further includes:
when the user touches the target input surface with the body part corresponding to the input source, pre-selecting the virtual key corresponding to the contact position of the input source on the target input surface.
Further, after the user pre-selects the virtual key corresponding to the position of the input source, the method further includes:
while keeping the body part corresponding to the input source close to the target input surface, the user moves that body part to switch the pre-selected virtual key.
Further, when the relationship between the input source and the preset input surface meets a predetermined requirement, taking the preset input surface as the target input surface and generating a virtual keyboard on the target input surface further includes:
when the position of the input source falls within the input space of the preset input surface and the dwell duration exceeds a preset time length, taking the preset input surface as the target input surface and generating a virtual keyboard on the target input surface.
Further, generating a virtual keyboard on the target input surface further includes:
determining attribute information of the target input surface when the relationship between the input source and the target input surface meets the predetermined requirement; and
generating the virtual keyboard according to the attribute information.
Further, the attribute information includes one or more of a size, an orientation, a shape, and a position of the target input surface.
Further, generating the virtual keyboard according to the attribute information further includes:
if the attribute information includes the size of the target input surface, determining the size of the virtual keyboard according to the size of the target input surface;
if the attribute information includes the orientation of the target input surface, determining the orientation of the virtual keyboard according to the orientation of the target input surface;
if the attribute information includes the shape of the target input surface, determining the distribution mode of the virtual keyboard according to the shape of the target input surface;
if the attribute information includes the position of the target input surface, determining the position of the virtual keyboard according to the position of the target input surface; and
generating the virtual keyboard according to one or more of the size, orientation, distribution mode, and position of the virtual keyboard.
Further, while the user completes the input of the corresponding virtual key on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface, the method further includes:
providing input feedback to the user according to a feedback mode corresponding to the target input surface.
Further, the user completing the input of the corresponding virtual key on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface further includes:
the user performs the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface based on an input aiming mode corresponding to the target input surface, so as to complete the input of the corresponding virtual keys on the virtual keyboard.
Further, the input aiming mode is a line-of-sight intersection mode, an input-source projection mode, or an input-source nearest-interaction-point mode.
Further, after generating the virtual keyboard on the target input surface, the method further includes:
judging whether the target input surface has moved out of the user's field of view, and if so, ending the input on the virtual keyboard; or
judging whether the distance between the target input surface and the user's viewpoint exceeds a preset distance threshold, and if so, ending the input on the virtual keyboard.
Further, after generating the virtual keyboard on the target input surface, the method further includes:
in the case where the target input surface is a surface of the user's body, judging whether the change in body motion of the body surface corresponding to the target input surface meets a preset end-of-input condition, and if so, ending the input on the virtual keyboard.
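The field-of-view and viewpoint-distance dismissal checks above can be sketched as follows (the body-motion condition is omitted). This is an illustrative sketch only: the function name, the half-angle approximation of the field of view, and the threshold values are assumptions, not part of the disclosed method.

```python
import math

MAX_VIEW_DIST = 1.5   # preset distance threshold, in meters (assumed)

def should_end_input(surface_pos, viewpoint, view_dir, fov_half_angle_deg=45.0,
                     max_dist=MAX_VIEW_DIST):
    """End keyboard input when the target surface is too far or out of view.

    view_dir is assumed to be a unit vector pointing along the user's gaze.
    """
    to_surface = [s - v for s, v in zip(surface_pos, viewpoint)]
    dist = math.sqrt(sum(c * c for c in to_surface))
    if dist > max_dist:                      # farther than the preset threshold
        return True
    # Out of view: angle between gaze direction and direction to the surface
    dot = sum(a * b for a, b in zip(view_dir, to_surface)) / (dist or 1.0)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle > fov_half_angle_deg
```

A real XR runtime would take the headset's actual view frustum rather than a single half-angle, but the decision structure is the same.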
In another aspect, embodiments of the present specification further provide an input apparatus for obtaining haptic feedback in an extended reality space, including:
an input setting unit, configured to take one or more surfaces of the user's body or one or more surfaces of an object as preset input surfaces, respectively, and to take at least one body part of the user as an input source;
an input judging unit, configured to track the input source and each preset input surface, and, when the relationship between the input source and a preset input surface meets a predetermined requirement, to take that preset input surface as a target input surface and generate a virtual keyboard on the target input surface; and
an input unit, configured to process the interaction of the user on the body surface or object surface corresponding to the target input surface through the body part corresponding to the input source, so as to complete the input of the corresponding virtual keys on the virtual keyboard.
In another aspect, embodiments of the present disclosure also provide a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the above method when executing the computer program.
Finally, the embodiments of the present specification also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the above method.
By the method of the embodiments of the present specification, the XR device can scan the user's body and the objects in the real space where the user is located, and render them in the extended reality space. In order to improve input convenience and user experience, the embodiments of the present specification take one or more surfaces of the user's body or one or more surfaces of an object as preset input surfaces, respectively, and take at least one body part of the user as an input source. The user can therefore call up a virtual keyboard for input at any time and place while interacting in the extended reality space: the virtual keyboard is not limited to a designated position, no additional operation is needed to open it, and neither its size nor the number of body parts used as input sources is limited. This supports more convenient and natural interaction in the extended reality space, such as ten-finger interaction, and improves operational convenience for the user. Then, based on the input source and each preset input surface, when the relationship between the input source and a preset input surface meets the predetermined requirement, that preset input surface is taken as the target input surface, and a virtual keyboard is generated on the target input surface rendered in the extended reality space.
The user can complete the input of the corresponding virtual keys on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface. Because the user interacts with a body surface or an object surface through a body part, the user obtains real physical tactile feedback during the interaction, which makes the input more accurate; moreover, the body surface or object surface providing the tactile feedback can serve as a support during input, reducing fatigue.
Detailed Description
The technical solutions of the embodiments of the present specification will be described clearly and completely below with reference to the drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present specification. All other embodiments obtained by one of ordinary skill in the art from the present disclosure without undue burden are intended to fall within the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims herein and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the present description described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or device.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that shown herein.
In order to solve the problems in the prior art, the embodiments of the present specification provide an input method for obtaining haptic feedback in an extended reality space. One or more surfaces of the user's body, or one or more surfaces of an object in the extended reality space, are used as preset input surfaces, and any part of the user's body is used as an input source. The relationship between the input source and each preset input surface is tracked to judge whether to enter an input mode and to determine a target input surface. When the input mode is entered, a virtual keyboard is rendered on the target input surface, and the user completes the input of the corresponding virtual keys on the virtual keyboard through the interaction of the body part corresponding to the input source on the target input surface. Fig. 1 is a flowchart of an input method for obtaining haptic feedback in an extended reality space according to an embodiment of the present disclosure. The figure describes the input process, which may include more or fewer operational steps based on conventional or non-creative labor. The order of steps recited in the embodiments is merely one possible execution order and does not represent the only one; in practice, a system or apparatus product may execute the steps sequentially or in parallel according to the methods shown in the embodiments or the drawings. As shown in fig. 1, the method may include:
Step 101, taking one or more surfaces of the user's body or one or more surfaces of an object as preset input surfaces, respectively, and taking at least one body part of the user as an input source;
Step 102, tracking the input source and each preset input surface, and when the relationship between the input source and a preset input surface meets the predetermined requirement, taking that preset input surface as the target input surface and generating a virtual keyboard on the target input surface;
Step 103, completing the input of the corresponding virtual keys on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface.
By the method of the embodiments of the present specification, the XR device scans the user's body and the objects in the space where the user is located, and renders them in the extended reality space. In order to improve input convenience and user experience, the embodiments of the present specification take one or more surfaces of the user's body or one or more surfaces of an object as preset input surfaces, respectively, and take at least one body part of the user as an input source. The user can therefore call up a virtual keyboard for input at any time and place while interacting in the extended reality space: the virtual keyboard is not limited to a designated position, no additional operation is needed to open it, and neither its size nor the number of body parts used as input sources is limited. This supports more convenient and natural interaction in the extended reality space, such as ten-finger interaction, and improves operational convenience for the user. Then, based on the input source and each preset input surface, when the relationship between the input source and a preset input surface meets the predetermined requirement, that preset input surface is taken as the target input surface, and a virtual keyboard is generated on the target input surface rendered in the extended reality space.
The user can complete the input of the corresponding virtual keys on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface. Because the user interacts with a body surface or an object surface through a body part, the user obtains real physical tactile feedback during the interaction, which makes the input more accurate; moreover, the body surface or object surface providing the tactile feedback can serve as a support during input, reducing fatigue.
In the embodiments of the present specification, one or more surfaces are selected as preset input surfaces from among the surfaces of the user's body and the surfaces of objects in the real space rendered by the XR device. A preset input surface here refers to a surface from which the user can obtain continuous tactile feedback while completing input; the specific choice is not limited by the embodiments of the present specification. It may be a plane that actually exists in physical space, such as a table surface, a wall surface, or a book surface, or it may be a part of the user's body, such as the curved surface on the side of the index finger or the curved surface on the top of the thigh. Different XR devices may support different preset input surfaces depending on their capabilities (e.g., whether the XR device can detect flat or curved surfaces in the user's real space in real time, whether thigh surfaces can be detected, etc.). There may be multiple preset input surfaces at the same time, but at least one is required. Because mainstream XR devices can recognize the surfaces of both hands, the embodiments of the present disclosure preferably use the pad side of the left and right index fingers, or a part of the radial surface of the index finger, as a preset input surface (as shown in fig. 2); whether further preset input surfaces are configured is not limited by the embodiments of the present disclosure.
In the embodiments of the present disclosure, the preset input source may be a joint of the user's hand. One body part of the user may be used as the input source, for example an index fingertip, or multiple body parts may be used as input sources, for example the fingertips of all ten fingers of both hands, so as to implement ten-finger interaction, that is, input through the coordination of the ten fingertips. Which specific location serves as the input source is not limited in the embodiments herein.
The XR device may track the input source and each preset input surface while rendering the extended reality space. When the relationship between the input source and one of the preset input surfaces meets the predetermined requirement, the input mode is entered, that preset input surface is taken as the target input surface, and a virtual keyboard is generated on the target input surface.
When the relative position of an input source P_input on the user's arms/hands and a certain preset input surface conforms to the preset rule, the "input mode" is started. The preset input surface may be, for example, the side of the index finger, the top of the thigh while sitting on a sofa, or a real plane actually existing in space, such as a table top. When the input mode is started, the virtual keyboard for display and input is shown to the user, and the movements and operations of the user's hands/fingers, or of other interactive input sources (such as an external mouse, head-controlled input, etc.), do not interact with targets outside the virtual keyboard, which ensures the stability and accuracy of input interaction. If no preset input surface corresponding to any input source P_input on the user's arms/hands meets the preset rule, for example when the input sources are far away from all preset input surfaces, it is judged that the input mode has ended. Optionally, at the end of the "input mode", the virtual keyboard is hidden to free up the interaction space.
In order to make entering the "input mode" more convenient, according to one embodiment of the present specification, when the relationship between the input source and the preset input surface satisfies the predetermined requirement, taking the preset input surface as the target input surface and generating a virtual keyboard on the target input surface further includes:
when the position of the input source falls within the input space of the preset input surface and the dwell duration exceeds the preset time length, taking the preset input surface as the target input surface and generating a virtual keyboard on the target input surface.
In the embodiments of the present disclosure, the input space of each preset input surface is preset. When the relationship between any one (or more) input sources P_input and at least one preset input surface Surface_i meets the following conditions, the interaction mode is the "input mode"; otherwise, it is one of the other interaction modes in the system (such as a direct interaction mode, an indirect interaction mode, etc.). Movement and operation of the interaction source in the "input mode" do not affect the interaction results of the other modes. There may be multiple input sources P_input and multiple preset input surfaces, and multiple input sources may correspond to the same preset input surface. When determining the interaction mode, the embodiments of the present disclosure may monitor the information of each input source P_input to judge whether the conditions for entering the "input mode" are satisfied; when at least one input source P_input satisfies the following two conditions at the same time, the system is considered to enter/switch to the "input mode".
Condition one: let the set of planes be P = {Surface_i | Surface_i is a plane} and the set of curved surfaces be C = {Surface_i | Surface_i is a curved surface}. An input source is denoted p_i (a Vector3). The offset distance from the preset input surface along the preset direction is denoted r. The preset input space R is the space bounded by the two surfaces located at distance r from the preset input surface along the preset direction and its opposite direction; these bounding surfaces may be planes or curved surfaces, as shown in fig. 3, for example. An input source entering the corresponding preset input space is denoted p_i ∈ R.
Condition two: the duration for which condition one continues to hold (denoted K = true) exceeds the preset time length T_mode.
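The two entry conditions can be sketched as follows, approximating the preset input surface as a plane given by an origin point and a unit normal. This is an illustrative sketch: the class and function names and the values chosen for r and T_mode are assumptions, not values from the specification.

```python
R_DIST = 0.03    # r: half-thickness of the input space, in meters (assumed)
T_MODE = 0.25    # T_mode: required dwell time, in seconds (assumed)

def in_input_space(p, surface_origin, surface_normal, r=R_DIST):
    """Condition one: p ∈ R iff |signed distance to the surface| <= r."""
    d = sum((pc - oc) * nc for pc, oc, nc in zip(p, surface_origin, surface_normal))
    return abs(d) <= r

class InputModeDetector:
    """Condition two: condition one must hold continuously for T_mode."""
    def __init__(self, t_mode=T_MODE):
        self.t_mode = t_mode
        self.enter_time = None   # timestamp when condition one first became true

    def update(self, p, surface_origin, surface_normal, now):
        """Call once per frame; returns True once the 'input mode' is entered."""
        if in_input_space(p, surface_origin, surface_normal):
            if self.enter_time is None:
                self.enter_time = now
            return (now - self.enter_time) >= self.t_mode
        self.enter_time = None   # leaving the space resets the dwell timer
        return False
```

For a curved preset input surface, `in_input_space` would instead query the signed distance to the tracked surface mesh, but the dwell-time logic is unchanged.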
Upon entering the "input mode", a virtual keyboard is generated on the target input surface. Specifically, as shown in fig. 4, generating the virtual keyboard on the target input surface further includes:
Step 401, determining attribute information of the target input surface when the relationship between the input source and the target input surface meets the predetermined requirement; and
Step 402, generating the virtual keyboard according to the attribute information.
In the embodiment of the present specification, when the "input mode" is entered, attribute information of a target input surface of a first frame F0 of the "input mode" is recorded, where the attribute information includes one or more of a size, an orientation, a shape, and a position of the target input surface.
Further, as shown in fig. 5, generating the virtual keyboard according to the attribute information further includes:
Step 501, if the attribute information includes the size of the target input surface, determining the size of the virtual keyboard according to the size of the target input surface;
In this step, a correspondence between the size of the target input surface and the size of the virtual keyboard may be predefined, and the size of the virtual keyboard may be determined according to the size of the target input surface in the attribute information.
Step 502, if the attribute information includes the orientation of the target input surface, determining the orientation of the virtual keyboard according to the orientation of the target input surface;
In this step, the orientation of the virtual keyboard may be set to be the same as that of the target input surface; the embodiments of the present specification are not limited in this respect.
Step 503, if the attribute information includes the shape of the target input surface, determining a distribution mode of the virtual keyboard according to the shape of the target input surface;
In this step, the correspondence between the shape of the target input surface and the distribution mode of the virtual keyboard may be predefined, and the virtual keyboard and its virtual keys may be laid out according to the movement characteristics (such as range, position, path, etc.) of the user's thumb or other input sources on the real contact surface. The method of the embodiments of the present specification can provide a default initial virtual keyboard, which the user can customize as needed: for example, a split virtual keyboard in left and right halves, a more ergonomic arc-shaped virtual keyboard, or a virtual keyboard arranged within a fixed range (e.g., on the top of the thigh). The embodiments of the present specification do not limit the size, form, or key layout of the virtual keyboard.
Step 504, if the attribute information includes the position of the target input surface, determining the position of the virtual keyboard according to the position of the target input surface;
In this step, a virtual keyboard is generated at a location of the target input surface in the augmented reality space.
Step 505, generating the virtual keyboard according to one or more of the size, orientation, distribution mode, and position of the virtual keyboard.
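Steps 501 to 505 amount to a mapping from surface attributes to keyboard parameters, which can be sketched as follows. The attribute keys, the shape-to-layout table, and the scale factor are illustrative assumptions, not values from the specification.

```python
KEYBOARD_SCALE = 0.9   # keyboard occupies 90% of the surface size (assumed)

def generate_keyboard(attrs):
    """Build virtual-keyboard parameters from target-surface attribute info."""
    kb = {}
    if "size" in attrs:                      # step 501: size from surface size
        w, h = attrs["size"]
        kb["size"] = (w * KEYBOARD_SCALE, h * KEYBOARD_SCALE)
    if "orientation" in attrs:               # step 502: same orientation
        kb["orientation"] = attrs["orientation"]
    if "shape" in attrs:                     # step 503: shape -> distribution mode
        kb["layout"] = {"rectangle": "grid",
                        "elongated": "split_halves",
                        "curved": "arc"}.get(attrs["shape"], "grid")
    if "position" in attrs:                  # step 504: co-locate with surface
        kb["position"] = attrs["position"]
    return kb                                # step 505: combine what is known

# Example: an elongated surface yields a split (left/right) keyboard layout.
kb = generate_keyboard({"size": (0.3, 0.1), "shape": "elongated",
                        "position": (0.0, 1.0, 0.5)})
```

Attributes absent from the first-frame (F0) record are simply omitted, matching the "one or more of" wording above.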
Fig. 6 shows two examples of virtual keyboard display positions according to the embodiments of the present disclosure. In fig. 6, the inner side of the index finger may be used as a preset input surface, with the tip of the thumb of the same hand as the input source; the thumb tip and the inner side of the index finger are tracked, and when the distance between them is smaller than a threshold D1, a virtual keyboard attached to the inner side of the index finger is displayed there. Alternatively, a desktop may be used as the preset input surface, with three preset joints of the hand as input sources; when the distance between at least one input source and the desktop is smaller than a threshold D2, a virtual keyboard attached to the desktop is displayed on it.
Further, while the user completes the input of the corresponding virtual key on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface, the method further includes:
providing input feedback to the user according to a feedback mode corresponding to the target input surface.
In the embodiments of the present disclosure, the feedback may be sound feedback and/or a key color change; that is, the sound feedback and/or key color change for virtual keyboard input corresponding to each preset input surface is predefined (it may also be customized by the user; the embodiments are not limited in this respect), so as to improve the user experience.
In order to provide physical tactile feedback to the user during input, the embodiments of the present disclosure adopt the scheme of pre-selecting (Hover) when the input source approaches the preset input surface and confirming input (Tap) when it leaves.
Specifically, the process in which the user completes the input of the corresponding virtual key on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface includes:
when the body part corresponding to the input source approaches the target input surface, pre-selecting the virtual key corresponding to the position of the input source, and then, when the body part corresponding to the input source leaves the target input surface, inputting the pre-selected virtual key.
When the user brings the body part corresponding to the input source close to the target input surface, pre-selecting the virtual key corresponding to the position of the input source further includes:
judging whether the distance between the input source and the target input surface is smaller than the pre-selection distance threshold; and
if so, pre-selecting the virtual key corresponding to the position of the input source.
In the embodiments of the present specification, the pre-selection distance threshold may be set empirically or customized by the user based on the desired input sensitivity; the embodiments are not limited in this respect.
In the embodiments of the present disclosure, an input reference point may be preset for each virtual key; when the distance between the input source and the target input surface is smaller than the pre-selection distance threshold, the virtual key whose reference point is closest to the input source is pre-selected.
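The reference-point rule can be sketched as follows: once the input source is closer to the target input surface than the pre-selection distance threshold, the key whose reference point is nearest to the input source is pre-selected. The key names, positions, and threshold value are illustrative assumptions.

```python
import math

HOVER_DIST = 0.02   # pre-selection distance threshold, in meters (assumed)

def preselect_key(p_input, surface_dist, key_refs, threshold=HOVER_DIST):
    """Return the name of the pre-selected key, or None outside the threshold.

    p_input      -- 3D position of the input source
    surface_dist -- current distance from the input source to the surface
    key_refs     -- mapping of key name -> reference-point position
    """
    if surface_dist >= threshold:
        return None   # too far from the surface: nothing is pre-selected
    return min(key_refs, key=lambda k: math.dist(p_input, key_refs[k]))

# Three keys laid out along one axis, 2 cm apart (hypothetical positions).
keys = {"A": (0.00, 0.0, 0.0), "S": (0.02, 0.0, 0.0), "D": (0.04, 0.0, 0.0)}
```

When the body part actually touches the surface, `surface_dist` is near zero and is therefore always below the threshold, which matches the touch-based variant described next.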
Preferably, in the embodiments of the present disclosure, when the user touches the target input surface with the body part corresponding to the input source, the virtual key corresponding to the position where the input source contacts the target input surface is pre-selected.
For the system, input by touching the target input surface with the body part corresponding to the input source relies on essentially the same judgment: whether the distance between the body part and the target input surface is smaller than the pre-selection distance threshold, where the threshold can be set smaller. For the user, there is no need to be concerned with the magnitude of the threshold during input; the body part corresponding to the input source directly contacts a corresponding virtual key of the virtual keyboard on the target input surface. At the moment of contact, the distance between the body part and that virtual key is minimal, and the distance to the target input surface is necessarily smaller than the pre-selection distance threshold, so the body part obtains real physical tactile feedback and the virtual key at the corresponding position is pre-selected. The user then moves (e.g., lifts) the body part corresponding to the input source away from the target input surface, and the pre-selected virtual key is input.
The input method of the embodiments of this specification differs from the hover-based preselection switching of conventional schemes: here, approaching is preselection (Hover) and leaving is input (Tap). The user thus obtains real physical tactile feedback, preselection switching is more accurate, and the user's body surface or an object surface providing tactile feedback can serve as a support during input, reducing fatigue.
Illustratively, as shown in fig. 7, the 6DoF information of a specified joint of the user's hand and the 6DoF information of the virtual keyboard (which may be obtained separately for its left and right parts) are acquired. According to a predetermined distance-determination algorithm, the current virtual key "A" is determined to be preselected (Hover) or activated/input (Tap).
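The distance determination of fig. 7 can be sketched as follows; the function name, threshold value, and state labels are illustrative assumptions of this sketch, not part of the disclosed algorithm.

```python
# Sketch of the distance-based Hover/Tap determination described above.
# The threshold (in metres) and the state labels are illustrative assumptions.

def classify_state(distance_to_surface, hover_threshold=0.015, was_hovering=False):
    """Classify one tracking frame of the input source against the target
    input surface: "hover" while within the preselection threshold,
    "tap" at the moment the source leaves after having hovered, else "idle"."""
    if distance_to_surface < hover_threshold:
        return "hover"   # preselect the virtual key nearest the reference point
    if was_hovering:
        return "tap"     # leaving the surface commits the preselected key
    return "idle"
```

Because contact with the surface makes the distance zero, touching always falls below the threshold, which is how touch-based input reduces to the same distance judgment.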
In addition, in the embodiment of the present disclosure, the method in which the input source approaches the preset input surface (Hover) and then leaves it to input (Tap) also supports more accurate preselection switching. Specifically, after the virtual key corresponding to the position of the input source is preselected, the method further includes:
While keeping the body part corresponding to the input source close to the target input surface, the user moves that body part, and the preselected virtual key is switched according to the movement of the input source.
It will be understood that after the user touches the target input surface with the body part corresponding to the input source, the preselected virtual key is not input as long as that body part does not leave the surface. To preselect a different virtual key, the user only needs to slide the body part across the target input surface; to input, the user only needs to lift it off the surface.
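The slide-to-switch behaviour described above amounts to a nearest-key lookup over the current contact point; the key layout and coordinates below are hypothetical illustrations.

```python
import math

# Hypothetical key layout: key id -> centre position (metres) on the
# target input surface.
KEY_CENTERS = {"A": (0.0, 0.0), "S": (0.02, 0.0), "D": (0.04, 0.0)}

def preselected_key(contact_point, key_centers=KEY_CENTERS):
    """While the input source stays in contact with the target input surface,
    the preselection tracks the contact point: sliding switches the
    preselected key, and only lifting off commits it."""
    return min(key_centers, key=lambda k: math.dist(contact_point, key_centers[k]))
```

Sliding from one key's region into the next simply changes which centre is nearest, so no extra gesture is needed to switch the preselection.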
In addition, to avoid accidental touches, in the embodiments of this specification the user may also exit preselection through a designated operation after a virtual key has been preselected, so that the preselected virtual key is not input. The particular manner of exiting preselection is not limited by the embodiments of the present disclosure.
According to one embodiment of the present disclosure, the user completing the input of the corresponding virtual key on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or the object surface corresponding to the target input surface further includes:
The user performs the interaction of the body part corresponding to the input source on the body surface or object surface corresponding to the target input surface based on the input aiming mode corresponding to that target input surface, thereby completing input of the corresponding virtual keys on the virtual keyboard.
Because the method of the present embodiment supports multiple preset input surfaces, such as an index finger surface (curved) and a physical surface (planar), each preset input surface Surface_i may correspond to its own input aiming mode.
In the present embodiment, there are three ways of calculating the input reference point for the preselection (Hover) process, corresponding to three input aiming modes: a line-of-sight intersection mode, an input source projection mode, and an input source closest interaction point mode. The user may customize the input aiming mode used for each input surface.
Fig. 8 is a schematic diagram of the three input aiming modes. The default (when not set by the user) is the "line-of-sight intersection" mode (the XR system may instead select an aiming mode according to the characteristics of the device; the embodiments of the present disclosure do not limit which mode is the default).
In the line-of-sight intersection mode, the reference point is the intersection with the virtual keyboard of the extension of the line connecting the line-of-sight origin (such as the CENTER EYE Anchor of the glasses) and the position of the preset input source (such as an index fingertip) in the first frame of the same Hover interaction. It should be noted that the embodiments of the present disclosure allow this point to fall not on the virtual keyboard but on a preset plane (such as a physical plane); the calculation is the same, that is, the virtual key closest to the point is still selected.
In the input source projection mode, the reference point is the projection onto the virtual keyboard/preset plane of the position of the preset input source (such as the index fingertip) in the first frame of a Hover interaction.
In the input source closest interaction point mode, the reference point is the position of the point at which the preset input source came closest to the virtual keyboard during the current Hover interaction (from the first frame to the current frame).
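The three reference-point calculations can be sketched as follows, approximating the virtual keyboard by a plane with a unit normal; the helper names, argument layout, and the plane approximation are assumptions of this sketch.

```python
# Sketch of the three input aiming modes described above. Points are 3-D
# tuples; plane_normal is assumed to be a unit vector.

def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _add_scaled(a, b, t): return tuple(x + t * y for x, y in zip(a, b))

def reference_point(mode, eye_origin, source_first_frame, plane_point,
                    plane_normal, source_trajectory=None):
    if mode == "gaze_intersection":
        # Ray from the gaze origin through the first-frame source position,
        # intersected with the keyboard plane.
        d = _sub(source_first_frame, eye_origin)
        t = _dot(_sub(plane_point, eye_origin), plane_normal) / _dot(d, plane_normal)
        return _add_scaled(eye_origin, d, t)
    if mode == "projection":
        # Orthogonal projection of the first-frame source position onto the plane.
        t = _dot(_sub(source_first_frame, plane_point), plane_normal)
        return _add_scaled(source_first_frame, plane_normal, -t)
    if mode == "closest_point":
        # Point of the whole Hover trajectory that came closest to the plane.
        return min(source_trajectory,
                   key=lambda p: abs(_dot(_sub(p, plane_point), plane_normal)))
    raise ValueError(mode)
```

A curved input surface would require replacing the plane with the actual surface geometry; the nearest-key lookup over the resulting point is unchanged.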
In the embodiments of the present disclosure, the input aiming mode of each preset input surface is preset (or customized by the user), so that higher input accuracy can be obtained in different usage scenarios.
Optionally, the input aiming mode may also be calibrated by having the user complete a section of a typing game using the virtual keyboard of the embodiments of this specification. While the user types, the positions of the preset input source and of the reference points under the different input aiming modes are collected, such as the positions of the thumb tip (input source) and the thumb pad surface (target input surface) and the positions of the pressed virtual keys. The distances between these reference points and the centre of the user's target key are then calculated, for example the point P1 at which the thumb tip/thumb pad came closest to the virtual keyboard, the intersection point P2 of the line through the line-of-sight origin and the thumb tip with the virtual keyboard, and the projection point P3 of the thumb tip onto the virtual keyboard at frame F0. The average distance between each kind of reference point and the centre of the real target key on the virtual keyboard is computed, and the mode corresponding to the reference point with the smallest average is selected as the user's preferred input aiming mode. Here, the target key is the virtual key that the game prompts the user to input.
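The calibration above amounts to averaging, per aiming mode, the distance between the collected reference points and the prompted target-key centres; the sample structure below is a hypothetical illustration of that bookkeeping.

```python
import math

def choose_aiming_mode(samples):
    """Pick the aiming mode whose reference points landed, on average,
    closest to the centres of the prompted target keys.

    samples: list of (target_key_center, {mode_name: reference_point})
    pairs collected while the user plays the typing game. The structure
    is an illustrative assumption, not a disclosed data format."""
    totals = {}
    for target, refs in samples:
        for mode, point in refs.items():
            totals.setdefault(mode, []).append(math.dist(target, point))
    return min(totals, key=lambda m: sum(totals[m]) / len(totals[m]))
```

Running this once per user lets the system persist a per-surface preferred mode instead of relying on the global default.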
According to one embodiment of the present specification, after the virtual keyboard is generated on the target input surface, the method further includes:
judging whether the target input surface has moved out of the user's field of view, and if so, ending input on the virtual keyboard; or,
judging whether the distance between the target input surface and the user's viewpoint exceeds a preset distance threshold, and if so, ending input on the virtual keyboard.
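Both end-of-input checks can be folded into one predicate; the threshold value and argument names are illustrative assumptions.

```python
def should_end_input(surface_in_view, surface_distance, max_distance=1.0):
    """End virtual-keyboard input when the target input surface leaves the
    user's field of view, or drifts farther from the viewpoint than a
    preset distance threshold (here an assumed 1.0 m)."""
    return (not surface_in_view) or surface_distance > max_distance
```

The predicate is evaluated every tracking frame; on a True result the system exits the "input mode" per the policy described below.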
In the embodiments of the present disclosure, when the target input surface moves out of the user's field of view or its distance from the user's viewpoint exceeds the preset distance threshold, the "input mode" may be exited immediately, exited after a certain delay, or exited after prompting the user.
Further, after generating the virtual keyboard on the target input surface, the method further comprises:
In the case where the target input surface is a body surface of the user, judging whether the change in body motion of the user's body surface corresponding to the target input surface meets a preset end-input condition, and if so, ending input on the virtual keyboard.
In the embodiments of the present disclosure, the change in body motion of the user's body surface may include changes in motion amplitude, changes in posture, and the like. For example, when the user's palm is the target input surface, the palm is open upon entering the "input mode" and a virtual keyboard is generated on it; if the palm is then detected to clench into a fist, the preset end-input condition may be considered satisfied, and input on the virtual keyboard ends.
Illustratively, as shown in fig. 9, the body part P_hand of the user corresponding to the input source P_input is the thumb pad of the left hand, and the preset input surface is a part of the surface on the radial side of the left index finger. Based on preset information and the tracked 6DoF information of P_hand and the preset space, when mode-judgment conditions I and II are met, namely when the thumb pad has remained within the preset input space of the left index finger's preset surface for longer than a time threshold T (such as 2 s), the "input mode" is entered.
When the "input mode" is entered, attributes of the virtual keyboard such as style, size, position, and orientation are determined according to the attribute information of the preset input surface at that moment, namely the space near the radial surface of the left index finger. The virtual keyboard contains three virtual keys arranged transversely across the preset input surface (such as the virtual keys "A", "B", and "C" shown in fig. 9), and the virtual keyboard faces the origin of the user's line of sight. When the position of the preset input surface, or its angle relative to the user's line-of-sight direction, is detected to change beyond the position or angle threshold, the position and orientation of the virtual keyboard are updated. If "input mode" end information is received from the mode-judgment module, the virtual keyboard is hidden and no longer displayed.
When the "input mode" is on, monitoring of the 6DoF information of P_input, namely the position of the left thumb pad, begins. The virtual key "A" closest to the reference point of P_input enters the Hover state. If P_input continues to move without leaving the preset input space, the Hover virtual key is updated according to the position of the reference point, for example switching to the virtual key "S"; if P_input moves out of the preset input space, the Hover virtual key is input.
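The worked example of fig. 9, dwelling to enter the input mode, Hover while inside the preset input space, Tap on exit, can be sketched as a small state machine; the timing, layout, and key names are illustrative assumptions.

```python
import math

class ThumbKeyboard:
    """Minimal state machine for the worked example above. Dwelling in the
    preset input space for DWELL_SECONDS enters the input mode; while inside,
    the key nearest the reference point is in the Hover state; leaving the
    space commits (Taps) the hovered key."""

    DWELL_SECONDS = 2.0   # assumed time threshold T

    def __init__(self, key_centers):
        self.key_centers = key_centers
        self.in_input_mode = False
        self.dwell = 0.0
        self.hovered = None

    def update(self, inside_input_space, reference_point, dt):
        """Feed one tracking frame; returns the key committed this frame, if any."""
        committed = None
        if not self.in_input_mode:
            self.dwell = self.dwell + dt if inside_input_space else 0.0
            if self.dwell >= self.DWELL_SECONDS:
                self.in_input_mode = True        # render the virtual keyboard
        elif inside_input_space:
            self.hovered = min(self.key_centers,
                               key=lambda k: math.dist(reference_point,
                                                       self.key_centers[k]))
        elif self.hovered is not None:
            committed, self.hovered = self.hovered, None   # Tap on exit
        return committed
```

Each frame the XR tracking loop would call `update` with whether P_input is inside the preset input space, the current reference point, and the frame time.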
Finally, visual and/or audible feedback is provided for the virtual key under Hover and the virtual key under Tap. As shown in fig. 9, the visual effect of the virtual key "A"/"S" under Hover is that its front panel moves toward its rear panel; when the virtual key is Tapped, the front panel returns from the rear-panel position to its initial position.
Based on the same inventive concept, the embodiments of the present disclosure further provide an input device for obtaining haptic feedback in an augmented reality space, as shown in fig. 10, the device includes:
An input setting unit 1001, configured to use one or more surfaces of a user's body or one or more surfaces of an object as preset input surfaces, respectively, and use at least one body part of the user as an input source;
an input determination unit 1002, configured to track the input source and each of the preset input surfaces, and when a relation between the input source and the preset input surfaces meets a predetermined requirement, take the preset input surface as a target input surface and generate a virtual keyboard on the target input surface;
and an input unit 1003, configured to process the interaction performed by the user, through the body part corresponding to the input source, on the body surface or object surface corresponding to the target input surface, so as to complete input of the corresponding virtual key on the virtual keyboard.
Further, the process of completing the input of the corresponding virtual key on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or the object surface corresponding to the target input surface by the user comprises the following steps:
When the body part corresponding to the input source approaches the target input surface, preselecting the virtual key corresponding to the position of the input source; then, when the body part corresponding to the input source leaves the target input surface, inputting the preselected virtual key.
Further, when the user brings the body part corresponding to the input source close to the target input surface, preselecting the virtual key corresponding to the position of the input source further includes:
judging whether the distance between the input source and the target input surface is smaller than a preselected distance threshold value or not;
if yes, selecting a virtual key corresponding to the position of the input source.
Further, when the user brings the body part corresponding to the input source close to the target input surface, preselecting the virtual key corresponding to the position of the input source further includes:
When the user touches the target input surface with the body part corresponding to the input source, preselecting the virtual key corresponding to the position where the input source contacts the target input surface.
Further, the input unit 1003 is further configured to, after the user has preselected the virtual key corresponding to the position of the input source, switch the preselected virtual key according to the movement of the body part corresponding to the input source while that body part remains close to the target input surface.
Further, when the relation between the input source and the preset input surface meets a predetermined requirement, taking the preset input surface as a target input surface and generating a virtual keyboard on the target input surface further comprises:
and when the position of the input source falls within the input space of the preset input surface and remains there for longer than a preset duration, taking the preset input surface as the target input surface and generating a virtual keyboard on the target input surface.
Further, generating the virtual keyboard on the target input surface further includes:
Determining attribute information of the target input surface when the relation between the input source and the target input surface meets a preset requirement;
and generating the virtual keyboard according to the attribute information.
Further, the attribute information includes one or more of a size, an orientation, a shape, and a position of the target input surface.
Further, generating the virtual keyboard according to the attribute information further includes:
If the attribute information comprises the size of the target input surface, determining the size of the virtual keyboard according to the size of the target input surface;
If the attribute information comprises the orientation of the target input surface, determining the orientation of the virtual keyboard according to the orientation of the target input surface;
If the attribute information comprises the shape of the target input surface, determining the distribution mode of the virtual keyboard according to the shape of the target input surface;
If the attribute information comprises the position of the target input surface, determining the position of the virtual keyboard according to the position of the target input surface;
And generating the virtual keyboard according to one or more of the size, the orientation, the distribution mode and the position of the virtual keyboard.
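The attribute-driven keyboard generation enumerated above can be sketched as a mapping from whichever surface attributes are available to keyboard properties; the field names, the "finger" shape value, and the layout choice are hypothetical.

```python
def build_keyboard(attrs):
    """Derive the virtual keyboard's properties from the available attribute
    information of the target input surface, per the enumeration above.
    Field names and values are illustrative assumptions."""
    keyboard = {}
    if "size" in attrs:
        keyboard["size"] = attrs["size"]                 # scale keyboard to fit the surface
    if "orientation" in attrs:
        keyboard["orientation"] = attrs["orientation"]   # face the same way as the surface
    if "shape" in attrs:
        # e.g. a narrow finger surface gets a single-row key arrangement
        keyboard["layout"] = "row" if attrs["shape"] == "finger" else "grid"
    if "position" in attrs:
        keyboard["position"] = attrs["position"]         # anchor keyboard at the surface
    return keyboard
```

Only the attributes actually present influence the result, matching the one-or-more phrasing of the claim.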
Further, the input unit 1003 is further configured to provide input feedback to the user according to a feedback mode corresponding to the target input surface during the process that the user completes input of the corresponding virtual key on the virtual keyboard through interaction of the body part corresponding to the input source on the body surface or the object surface corresponding to the target input surface.
Further, the user completing the input of the corresponding virtual key on the virtual keyboard through the interaction of the body part corresponding to the input source on the body surface or the object surface corresponding to the target input surface further comprises:
And the user performs interaction of the body part corresponding to the input source on the body surface or the object surface corresponding to the target input surface based on the input aiming mode corresponding to the target input surface, so as to complete input of the corresponding virtual keys on the virtual keyboard.
Further, the input aiming mode is a line-of-sight intersection mode, an input source projection mode, or an input source closest interaction point mode.
Further, the input determination unit 1002 is further configured to determine whether the target input surface is moved out of the field of view of the user after the virtual keyboard is generated on the target input surface, and if so, end the input of the virtual keyboard, or,
And judging whether the distance between the target input surface and the viewpoint of the user exceeds a preset distance threshold value, and if so, ending the input of the virtual keyboard.
Further, the input determination unit 1002 is further configured to, after the virtual keyboard is generated on the target input surface and when the target input surface is a body surface of the user, judge whether the change in body motion of the user's body surface corresponding to the target input surface meets a preset end-input condition, and if so, end input on the virtual keyboard.
The beneficial effects obtained by the device are consistent with those obtained by the method, and are not repeated in the embodiments of the present disclosure.
Fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure, where an apparatus in the present disclosure may be the computer device in the present embodiment, and perform the method in the present disclosure.
The computer device 1102 may include one or more processing devices 1104, such as one or more Central Processing Units (CPUs), each of which may implement one or more hardware threads.
The computer device 1102 may also include any storage resources 1106 for storing any kind of information, such as code, settings, data, etc. By way of non-limiting example, the storage resources 1106 may comprise any one or more combinations of any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, and the like.
More generally, any storage resource may store information using any technology.
Further, any storage resource may provide volatile or non-volatile retention of information.
Further, any storage resources may represent fixed or removable components of computer device 1102.
In one case, when the processing device 1104 executes associated instructions stored in any storage resource or combination of storage resources, the computer device 1102 may perform any of the operations of the associated instructions.
The computer device 1102 also includes one or more drive mechanisms 1108, such as a hard disk drive mechanism, optical disk drive mechanism, and the like, for interacting with any storage resources.
The computer device 1102 may also include an input/output module 1110 (I/O) for receiving various inputs (via an input device 1112) and for providing various outputs (via an output device 1114).
One particular output mechanism may include a presentation device 1116 and an associated Graphical User Interface (GUI) 1118.
In other embodiments, the input/output module 1110 (I/O), the input device 1112, and the output device 1114 may be omitted, with the computer device serving merely as a computer device in a network.
The computer device 1102 may also include one or more network interfaces 1120 for exchanging data with other devices via one or more communication links 1122. One or more communication buses 1124 couple together the components described above.
The communication link 1122 may be implemented in any manner, for example, through a local area network, a wide area network (e.g., the internet), a point-to-point connection, etc., or any combination thereof.
Communication link 1122 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, etc. governed by any protocol or combination of protocols.
The present description embodiment also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described method.
The embodiments of this specification also provide computer-readable instructions which, when executed by a processor, cause the processor to perform the above-described method.
It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation of the embodiments of the present disclosure.
It should also be understood that, in the embodiments of this specification, the term "and/or" merely describes an association relationship between associated objects, indicating that three relationships may exist. For example, "A and/or B" may mean: A alone, both A and B, or B alone. In this specification, the character "/" generally indicates that the objects before and after it are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the various example components and steps have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present specification.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this specification, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or elements, or may be an electrical, mechanical, or other form of connection.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments of the present description.
In addition, each functional unit in each embodiment of the present specification may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this specification, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of this specification. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The principles and embodiments of the present invention have been described in this specification using specific examples, which are provided to assist in understanding the method and core ideas of the present invention. Modifications will be apparent to those skilled in the art from the teachings of the present invention, and it is intended that the present invention not be limited to these examples.