FIELD- This document relates, generally, to augmented reality (AR) and/or virtual reality (VR) environments, and, more particularly, to gaze tracking for interaction with features in AR and/or VR environments. 
BACKGROUND- An augmented reality (AR) and/or a virtual reality (VR) system may generate a three-dimensional (3D) AR/VR environment. A user may experience this 3D AR/VR environment through interaction with various electronic devices, such as, for example, head mounted display (HMD) devices and/or heads up display (HUD) devices such as, for example, helmets, goggles, glasses and the like, gloves fitted with sensors, external handheld devices including sensors, and other such electronic devices. The user may move through the AR/VR environment, and may interact with objects, features, user interfaces and the like in the AR/VR environment using various input methods. For example, the user may interact with features in the AR/VR environment using external controllers, gesture inputs, voice inputs, gaze inputs including head gaze and eye gaze, and other such input methods. In some situations, particularly when gaze inputs are used to interact with features of the AR/VR environment, slippage and/or remount of the HMD may affect accuracy of detected gaze inputs. 
SUMMARY- In one aspect, a computer-implemented method may include detecting a user gaze directed at a virtual user interface (UI) displayed on a display device of a head mounted display (HMD) device, the virtual UI including a plurality of UI elements; detecting a gaze trajectory corresponding to the detected user gaze; matching the detected gaze trajectory to a display trajectory associated with a UI element of the plurality of UI elements; identifying the UI element as a target UI element; determining an offset between the gaze trajectory and the display trajectory associated with the target UI element, the offset including at least one of a translational offset, a scaling offset, or a rotational offset; and recalibrating a user gaze interaction mode based on the determined offset. 
- In some implementations, the translational offset may include a first translational offset in a first direction; and a second translational offset in a second direction. In some implementations, the scaling offset may include a compression or an expansion in the first direction; and a compression or an expansion in the second direction. In some implementations, recalibrating the user gaze interaction mode may include resetting the detected user gaze with the virtual UI and the plurality of UI elements to compensate for the first and second translational offsets and the first and second scaling offsets. 
- In some implementations, the virtual UI may be a dynamic UI including a plurality of dynamic UI elements, each of the plurality of dynamic UI elements having a respective display trajectory defining a pattern of movement of the respective dynamic UI element. Each dynamic UI element of the plurality of dynamic UI elements may include a unique display trajectory defining a unique pattern of movement for the dynamic UI element. The display trajectory may include a linear part, a circular part, or a curved part. Each dynamic UI element of the plurality of dynamic UI elements may include a pseudo-random display trajectory defining a pseudo-random pattern of movement for the dynamic UI element. 
- In some implementations, detecting the user gaze and detecting the gaze trajectory may include tracking, by an eye tracking system of the HMD, a user eye gaze; and detecting the gaze trajectory based on the tracked user eye gaze relative to the virtual UI and the plurality of UI elements. Tracking the user eye gaze may include emitting, by one or more light sources of the HMD, light towards the eyes of the user; detecting, by one or more light sensors of the HMD, reflection of the light, emitted by the one or more light sources, by the eyes of the user; and tracking the user eye gaze based on the detected reflection. 
- In another general aspect, an electronic device may include a display; a sensing system; at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the electronic device to display, by the display, a virtual user interface (UI), the virtual UI including a plurality of UI elements; detect a user gaze directed at the virtual UI; detect a gaze trajectory corresponding to the detected user gaze; match the detected gaze trajectory to a display trajectory associated with a UI element of the plurality of UI elements; identify the UI element as a target UI element; determine an offset between the gaze trajectory and the display trajectory associated with the target UI element, the offset including at least one of a translational offset, a scaling offset, or a rotational offset; and recalibrate a user gaze interaction mode based on the determined offset. 
- In some implementations, the instructions may cause the at least one processor to determine the translational offset, including a first translational offset in a first direction; and a second translational offset in a second direction; and determine the scaling offset, including a compression or an expansion in the first direction; and a compression or an expansion in the second direction. The instructions may cause the at least one processor to reset the detected user gaze with the virtual UI and the plurality of UI elements to compensate for the first and second translational offsets and the first and second scaling offsets. The virtual UI may be a dynamic UI including a plurality of dynamic UI elements, each dynamic UI element of the plurality of dynamic UI elements having a respective display trajectory defining a unique pattern of movement for the respective dynamic UI element. The display trajectory may include a linear part, a circular part, or a curved part. Each dynamic UI element of the plurality of dynamic UI elements may include a pseudo-random display trajectory defining a pseudo-random pattern of movement for the dynamic UI element. 
- In some implementations, the instructions may cause the at least one processor to track, by an eye tracking system of the electronic device, a user eye gaze, including emit, by one or more light sources, light towards the eyes of the user; detect, by one or more light sensors, reflection of the light, emitted by the one or more light sources, by the eyes of the user; and track the user eye gaze based on the detected reflection. 
- In some implementations, the electronic device may be a head mounted display (HMD) device, and the offset between the gaze trajectory and the display trajectory associated with the target UI element is due to movement of the HMD device relative to the eyes of the user after initial calibration of the HMD device. 
- In another general aspect, a non-transitory, computer-readable medium may have instructions stored thereon that, when executed by a computing device, cause the computing device to display, by a display device of the computing device, a virtual user interface (UI), the virtual UI including a plurality of UI elements; detect a user gaze directed at the virtual UI; detect a gaze trajectory corresponding to the detected user gaze; match the detected gaze trajectory to a display trajectory associated with a UI element of the plurality of UI elements; identify the UI element as a target UI element; determine an offset between the gaze trajectory and the display trajectory associated with the target UI element, the offset including at least one of a translational offset, a scaling offset, or a rotational offset; and recalibrate a user gaze interaction mode based on the determined offset. 
- The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims. 
BRIEF DESCRIPTION OF THE DRAWINGS- FIGS. 1A-1C illustrate example AR and/or VR systems. 
- FIG. 2 illustrates an example system for generating an AR and/or a VR environment, and interacting with the AR and/or VR environment. 
- FIG. 3 is a block diagram of an example electronic device for generating an AR and/or a VR environment, and interacting with the AR and/or VR environment, in accordance with implementations described herein. 
- FIG. 4 is a schematic diagram of an example eye gaze tracking system. 
- FIGS. 5A and 5B illustrate an example user interface employing a point gaze input mode. 
- FIGS. 6A-6C illustrate an example user interface employing an example gaze trajectory input mode, in accordance with implementations described herein. 
- FIGS. 7A-7G are schematic diagrams of an example user interface element trajectory and an example gaze trajectory, in accordance with implementations described herein. 
- FIG. 8 is a flowchart of a method of detecting a user gaze input, in accordance with implementations described herein. 
- FIG. 9 shows an example of a computer device and a mobile computer device that can be used to implement the techniques described herein. 
DETAILED DESCRIPTION- A user may make use of numerous different input methods for interacting with features (i.e., virtual objects, virtual user interfaces (UIs), and the like) in an AR and/or a VR environment. Such input methods may include, for example, user inputs received at an external computing device (such as, for example, a handheld controller), gesture inputs, audible/voice inputs, gaze inputs (including, for example, head gaze inputs and eye gaze inputs), and the like. In some implementations, these input methods may be combined. A system and method, in accordance with implementations described herein, provide for user interaction with UIs in an AR and/or a VR environment using gaze input. The system and method, in accordance with implementations described herein, can account for slippage, movement, or remount of an HMD that may otherwise affect the accuracy of a gaze input directed to a UI in the AR/VR environment. 
- FIGS. 1A-1C illustrate various example electronic devices that can generate an AR and/or a VR environment through, for example, an application executed by the electronic device. As illustrated in the example shown in FIG. 1A, in some implementations, the user may view and experience the AR/VR environment via a display portion of a wearable device 16. In the example shown in FIG. 1A, the wearable device is a head mounted device 16, in the form of, for example, smart glasses. In this arrangement, the physical, real world environment may be visible to the user through the head mounted device 16, and virtual feature(s), object(s), UI(s) and the like may be placed in, or superimposed on, the user's view of the physical, real world environment. As illustrated in the example shown in FIG. 1B, in some implementations, the user may view and experience the AR/VR environment via a display portion of a head mounted device 18 which essentially occludes the user's direct visibility of the physical, real world environment. In this arrangement, an image of a model of the physical environment, or a pass through image of the physical environment, may be displayed on a display portion of the head mounted device 18, with virtual feature(s), object(s), UI(s) and the like placed in the AR/VR scene viewed by the user. As illustrated in the example shown in FIG. 1C, in some implementations, the user may view and experience the AR/VR environment on a display portion 12 of a handheld device 10. An imaging device 14 of the exemplary handheld device 10 may provide images for display of a camera view, or scene, of the physical, real world environment, together with virtual feature(s), object(s), UI(s) and the like placed in the camera view, or scene, of the physical, real world environment. 
- Reference will now be made in detail to non-limiting examples of this disclosure, examples of which are illustrated in the accompanying drawings. The examples are described below by referring to the drawings, wherein like reference numerals refer to like elements. 
- The handheld device 10 and the head mounted devices 16 and 18 illustrated in FIGS. 1A-1C are electronic devices having the capability to display virtual objects in an AR environment and/or a VR environment. Hereinafter, simply for ease of discussion and illustration, examples of a system and method, in accordance with implementations described herein, will be presented based on a scene as viewed on an electronic device similar to the exemplary head mounted device 16 illustrated in FIG. 1A. However, the principles to be described herein may be applied to other electronic device(s) and/or systems capable of generating and presenting an AR/VR environment in which the user can interact with virtual features, objects, UIs and the like using gaze inputs. 
- In some situations, when using a heads up display (HUD) device, or a head mounted display (HMD) device, such as the example HMD 16 shown in FIG. 1A, gaze tracking may provide advantages in detecting input and/or interaction with features in AR/VR environments and applications. For example, eye tracking may facilitate see-and-click actions, allowing a user to, for example, gaze on an object and blink to select the object. Additionally, eye tracking may provide for optimized rendering of high-definition (HD) content. For example, eye tracking may facilitate foveated rendering by rendering only parts of a scene in HD that are being focused on by the user, rather than rendering the entire scene in HD, thus reducing processing load. In some situations, eye tracking may be used to perform iris recognition as a part of a biometric system, both for user authentication and user identification. Slippage of the example HMD 16, physical adjustment of the HMD 16, or removal/remounting of the HMD 16 on the head of the user, may disrupt the gaze tracking capability, and in particular the eye tracking capability, of the HMD 16, and may affect the accuracy of the corresponding user interaction with the selected virtual feature, object, UI and the like. 
- FIG. 2 illustrates an example system 100 for creating and interacting with a 3D AR and/or VR environment in accordance with the teachings of this disclosure. In general, the system 100 provides the 3D AR/VR environment including content for user access, viewing, interaction and manipulation. The system 100 can provide the user with options for accessing the content, applications, objects, features and the like using, for example, gaze tracking, including eye tracking and/or head tracking. In the example shown in FIG. 2, the user is wearing an HMD 110, in the form of, for example, smart glasses, through which the physical environment is visible, and in which virtual features may be overlaid on the user's view of the physical environment to create an AR environment. 
- As shown in FIG. 2, the example system 100 may include multiple computing and/or electronic devices that can exchange data over a network 120. The devices may represent clients or servers, and can communicate via the network 120 or any other additional and/or alternative network(s). Example client devices include, but are not limited to, a mobile device 131 (e.g., a smartphone, a tablet computing device, a personal digital assistant, a portable media player, etc.), a laptop or netbook 132, a camera (not shown), the HMD 110 worn by the user in this example, a desktop computer 133, a handheld device 134, a gaming device (not shown), and any other electronic or computing devices that can communicate using the network 120 or other network(s) with other computing or electronic devices or systems, or that may be used to access virtual content or operate within the AR/VR environment. The devices 110 and 131-134 may represent client or server devices. The devices 110 and 131-134 can execute a client operating system and one or more client applications that can access, render, provide, or display VR content on a display device included in or in conjunction with each respective device 110 and 131-134. The devices 110 and 131-134 can execute VR applications that, among other things, take advantage of eye tracking carried out by the HMD 110 or other HMDs disclosed herein. 
- The system 100 may include any number of content systems 140 storing content and/or AR/VR software modules (e.g., in the form of AR/VR applications 144) that can generate, modify, or execute AR/VR scenes. In some examples, the devices 110 and 131-134 and the content system 140 include one or more processors and one or more memory devices, which can execute a client operating system and one or more client applications. The HMD 110, the other devices 131-133, or the content system 140 may be implemented by the example computing devices shown in FIG. 9. 
- The applications 144 can be configured to execute on any or all of devices 110 and 131-134. The HMD 110 can be connected to devices 131-134 to access AR/VR content on the content system 140, for example. Devices 131-134 can be connected (wired or wirelessly) to the HMD 110, which can provide content for display. A user's AR/VR system can be the HMD 110 alone, or a combination of devices 131-134 and the HMD 110. 
- FIG. 3 is a block diagram of an example electronic device 300, such as, for example, the HMD 110 (in the form of, for example, smart glasses) worn by the user in FIG. 2, that can generate an AR environment to be experienced by the user, and that can perform gaze tracking (including eye gaze tracking and/or head gaze tracking) for detection of user inputs. 
- The electronic device 300 may include a sensing system 360 and a control system 370. The sensing system 360 may include one or more different types of sensors, including, for example, a light sensor, an audio sensor, an image sensor, a distance/proximity sensor, and/or other sensors and/or different combination(s) of sensors. In some implementations, the sensing system 360 may include one or more gaze tracking sensors, including, for example, an eye tracking system. In some implementations, the eye tracking system may include one or more image sensors, or cameras, positioned to detect and track eye gaze of the user. In some implementations, the eye tracking system may include one or more light sources such as, for example, light emitting diodes (LEDs) that emit light, for example, infrared light, directed toward the eyes of the user. Reflection of this light emitted by the LEDs from the eyes of the user may be detected by, for example, one or more light sensors of the sensing system 360. Eye gaze may be tracked based on the captured reflection of the light emitted by the LEDs. In some implementations, the sensing system 360 may include an inertial measurement unit (IMU) to detect and track head gaze direction and movement. The control system 370 may include, for example, power/pause control device(s), audio and video control device(s), optical control device(s), and/or other such devices and/or different combination(s) of devices. The sensing system 360 and/or the control system 370 may include more, or fewer, devices, depending on a particular implementation. The electronic device 300 may include a processor 390 in communication with the sensing system 360 and the control system 370. The processor 390 may process inputs received from the sensing system 360, such as, for example, eye gaze direction and movement inputs captured by the one or more image sensors and/or head gaze direction and movement inputs captured by the IMU, and execute instructions corresponding to the detected gaze inputs. The electronic device 300 may include an input system 340 that can receive user inputs to be processed by the processor 390 and output by an output system 350 under the control of the control system 370. The input system 340 may include various types of input devices including, for example, a touch input surface, audio input devices that can receive audio inputs (including, for example, audio sensors, or microphones, included in the sensing system 360), a gesture recognition device (including, for example, images captured by image sensor(s) of the sensing system 360 and processed by the processor 390), and other such input devices. The output system 350 may include various types of output devices such as, for example, display device(s), audio output device(s), or speakers, physical and/or tactile output devices, and other such output devices. The electronic device 300 may include a memory 380, and a communication module 395 providing for communication between the electronic device 300 and one or more other, external device(s), such as, for example, the external devices shown in FIG. 2. 
- In one non-limiting example, the electronic device 300 may be a HUD, or HMD, such as, for example, glasses, as in the example shown in FIG. 4. In particular, FIG. 4 is a side, schematic view of an example HMD 300 in the form of smart glasses worn by a user. In this example, the HMD 300 includes an eye tracking device including one or more light sources 361 and one or more light sensors 362 to perform eye tracking, simply for purposes of discussion and illustration. Principles to be described herein may be applied to HMDs including other types of eye tracking devices. The example HMD 300 may include a frame 305 that positions one or more lenses 352 and a display device 355 along a line of sight, or an optical path, of the eyes E of the user wearing the HMD 300. One or more light sources 361, in the form of, for example, one or more LEDs, may emit light, for example, intermittently emit light, toward the eyes E of the user, for example, along example rays R1 shown in FIG. 4. In some implementations, the light emitted by the light source 361 may be reflected by the eyes E of the user, illustrated by example rays R2. In some implementations, the light reflected by the eyes E of the user may be detected by one or more light sensors 362. The reflection of light from the eyes E of the user may be used, for example, by the processor of the HMD 300, to track eye gaze. The processor may correlate detected eye gaze with, for example, elements visible to the user on the display device, or through the display device. In some implementations, the processor may correlate the detected eye gaze with elements of a UI displayed by the display device, to provide for user interaction with the elements of the UI. The arrangement and/or the number of components shown in the schematic view provided in FIG. 4 is for illustrative purposes only. In some implementations, the example HMD 300 may include more, or fewer, light sources 361, arranged and/or positioned differently than the example arrangement shown in FIG. 4. In some implementations, the example HMD may include more, or fewer, light sensors 362, arranged and/or positioned differently than the example arrangement shown in FIG. 4. 
- As noted above, a HUD, or HMD, such as, for example, glasses, headsets, goggles, and the like, may be configured to receive gaze inputs including, for example, eye gaze inputs and/or head gaze inputs, in an AR and/or a VR environment. In some implementations, eye gaze as an input modality may rely on eye tracking performed by the HMD. In some situations, slippage of the HMD, remount of the HMD, and/or other repositioning of the HMD, may have a detrimental effect on the accuracy of the eye tracking performed by the HMD. This is particularly the case when eye gaze is tracked based on reflection of light from the eyes, output by light sources and captured by light sensors of the HMD, as described above with respect to the illustrative example shown in FIG. 4, rather than, for example, image-based tracking using images captured by cameras of the HMD, which may consume a prohibitive amount of power in a power-constrained device such as an HMD. This detrimental effect on the accuracy of eye tracking may, in turn, affect the accuracy of eye gaze as an input modality. Recalibration to restore the accuracy of the eye tracking may be time consuming and resource intensive. 
- In a system and method, in accordance with implementations described herein, a slippage resistant UI may make use of a pattern of movement for elements of the UI that are selectable by a user of the HMD. In some implementations, this may allow the system to differentiate selection of an element of the UI based on gaze trajectory, even in the event of slippage, remount, or other movement of the HMD which may cause the gaze trajectory to no longer be in alignment with the UI elements. In some implementations, this may also allow the system to adjust calibration of the display of UI elements to realign a gaze trajectory estimate in accordance with a determined slippage. In some implementations, a gaze input may be matched with a corresponding UI element (for example, an intended user selection) based on a gaze trajectory, rather than an absolute position of a gaze point, which is otherwise susceptible to slippage errors, and which would otherwise require recalibration. A system and method, in accordance with implementations described herein, may account for slippage in determining a user intent with respect to UI elements, without relying on considerable changes in eye tracking methodologies. 
- FIG. 5A illustrates an example UI 500 which may be displayed to a user of the example electronic device 300 described above, in an AR/VR environment as described above. The example UI 500 may include one or more static UI elements 550A through 550I. The static UI elements 550A-550I may be selectable by the user, using a gaze input as described above. FIG. 5B illustrates a user interaction with the example UI 500, in a situation in which some sort of movement of the HMD 300 has occurred since calibration (i.e., slippage, remount, and the like). As shown in FIG. 5B, in the situation in which the example slippage has occurred, a user eye gaze may be intended for the point 510 (corresponding to selection of the UI element 550A). However, due to the example slippage, the gaze may instead be estimated to have been directed at the point 520 on the static UI 500, resulting in matching with, and selection of, the UI element 550I, rather than the UI element 550A as intended. Matching of gaze to the static UI 500/static UI elements 550A-550I is sensitive to gaze offsets (represented by the arrow A), and thus the gaze offset due to slippage results in an inaccuracy in the user's interaction with the static UI 500, and, in this example, an inaccuracy in the resulting selection. That is, in this example, matching of the user gaze with one of the static UI elements 550A-550I of the static UI 500 is based on the absolute position of the gaze point with respect to the static UI 500/static UI elements 550A-550I, which has been offset due to the slippage. In this situation, correction requires a full recalibration process. 
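- To make the sensitivity of absolute-point matching concrete, the following sketch (Python, illustrative only; the grid layout, element positions, and offset values are hypothetical, not taken from the figures) selects a static UI element by the nearest element center to the gaze point, and shows how a constant slippage offset flips the selection from the intended element 550A to element 550I:

    import numpy as np

    # Hypothetical 3x3 grid of static UI element centers, akin to 550A-550I.
    elements = {
        "550A": (0.0, 0.0), "550B": (1.0, 0.0), "550C": (2.0, 0.0),
        "550D": (0.0, 1.0), "550E": (1.0, 1.0), "550F": (2.0, 1.0),
        "550G": (0.0, 2.0), "550H": (1.0, 2.0), "550I": (2.0, 2.0),
    }

    def match_point(gaze):
        # Static matching: nearest element center to the absolute gaze point.
        return min(elements,
                   key=lambda n: np.linalg.norm(np.subtract(elements[n], gaze)))

    intended = np.array([0.0, 0.0])        # user intends 550A (point 510)
    offset = np.array([1.6, 1.7])          # gaze offset due to slippage (arrow A)
    print(match_point(intended))           # -> 550A, correct when calibrated
    print(match_point(intended + offset))  # -> 550I, wrong after slippage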
- In contrast, dynamic matching of user gaze with a dynamic UI including dynamic, or moving, UI elements, in accordance with implementations described herein, may be less susceptible to inaccuracy from gaze offsets due to slippage. In a dynamic UI including dynamic UI elements, in accordance with implementations described herein, matching (for example, between a gaze and an intended UI element) may be performed based on a gaze trajectory (and not on the absolute position of the gaze point), as the user's eye follows a given UI element for a target duration, or performs a smooth pursuit of the given UI element intended for selection. Once one of the dynamic UI elements has been successfully selected, an eye-to-screen calibration can be calculated, based on the (known) trajectories of the dynamic UI elements and an estimated gaze trajectory observed during the smooth pursuit of the selected dynamic UI element. This calibration procedure may be relatively seamless, and may account for gaze offset due to slippage going forward. 
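- A minimal sketch of this trajectory-based matching (assuming gaze and display trajectories sampled at common timestamps; the mean-centered least-squares criterion is one plausible choice, not necessarily the matching method of any given implementation). Because each trajectory is mean-centered before comparison, a constant translational offset caused by slippage does not change the result:

    import numpy as np

    def match_trajectory(gaze_traj, display_trajs):
        """Match a gaze trajectory to a display trajectory by shape.

        Mean-centering removes any constant translational offset, so a
        slipped HMD still matches the element the eye is smoothly pursuing.
        gaze_traj: (T, 2) array; display_trajs: dict of name -> (T, 2) array.
        """
        g = gaze_traj - gaze_traj.mean(axis=0)
        best, best_err = None, np.inf
        for name, d in display_trajs.items():
            c = d - d.mean(axis=0)
            err = np.mean(np.sum((g - c) ** 2, axis=1))  # mean squared shape error
            if err < best_err:
                best, best_err = name, err
        return best

    # Four elements pursuing one circle at different phases; the gaze follows
    # element 650A but is offset by slippage.
    t = np.linspace(0.0, 2.0 * np.pi, 100)
    trajs = {f"650{k}": np.stack([np.cos(t + p), np.sin(t + p)], axis=1)
             for k, p in zip("ABCD", (0.0, 1.6, 3.1, 4.7))}
    gaze = trajs["650A"] + np.array([0.4, -0.3])
    print(match_trajectory(gaze, trajs))  # -> 650A despite the offset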
- FIG. 6A illustrates an example UI 600, in accordance with implementations described herein. The example UI 600 is a dynamic UI 600, including a plurality of dynamic UI elements 650A through 650H. That is, in this example UI 600, the UI elements 650A-650H are movable, or displayed in a state in which the UI elements 650A-650H are in motion, and move in linearly decoupled trajectories. In the example implementation to be described with respect to FIGS. 6A-6C, the example dynamic UI elements 650A-650H have a substantially circular arrangement, and move in a substantially circular pattern, as shown, for example, by the arrow B, simply for ease of discussion and illustration. Movement in this substantially circular pattern is illustrated sequentially in FIGS. 6B(1) through 6B(5). However, the principles to be described herein may be applied to other dynamic UIs, including a UI having a different number of UI elements, and/or in which the UI elements are arranged differently, and/or in which UI elements move in other patterns and/or follow other movement trajectories within the dynamic UI, including uniform and non-uniform, or random, or pseudo-random, patterns. 
- FIG. 6C illustrates a user interaction with the example dynamic UI 600, in a situation in which some sort of movement of the HMD 300 has occurred since calibration (i.e., slippage of the HMD 300, remount of the HMD 300, repositioning of the HMD 300, and the like). As shown in FIG. 6C, in the situation in which the example slippage has occurred, a user eye gaze may be intended for the UI element 650A. In this example, although there has been slippage of the HMD 300 (which would typically cause the gaze point to be offset from the intended target UI element, as described above), the gaze trajectory 610 (rather than the gaze point) may be tracked. The system may detect that the gaze trajectory 610 is in pursuit of, or follows, the display trajectory 630A of the UI element 650A. In this situation, the display trajectory 630A of the UI element 650A is known by the system. Even though the user gaze trajectory 610 may be at an offset 620 relative to the intended target UI element 650A and the associated display trajectory 630A of the target UI element 650A, the system may match the detected gaze trajectory 610 with the display trajectory 630A of the UI element 650A. Based on the matching of the gaze trajectory 610 with the display trajectory 630A of the UI element 650A, the system may determine that the UI element 650A is the target of the detected gaze 610, and the UI element 650A may be selected. In some implementations, after confirmation of the selection of the target UI element 650A, the system may use the determined offset 620 to perform a relatively straightforward eye-to-display calibration, for use by the system going forward, for example, until the need for another such recalibration is detected, an end of session is detected, and the like. 
- FIGS. 7A through 7G are schematic diagrams of the matching of a gaze trajectory, such as, for example, the gaze trajectory 610 described above with respect to FIGS. 6A-6C, with a movement pattern, or a display trajectory, of a target UI element, such as, for example, the known display trajectory 630 of the target UI element 650A described above with respect to FIGS. 6A-6C, in accordance with implementations described herein. 
- As noted above, each of the example dynamic UI elements 650 may have its own, distinct display trajectory 630. This may allow the system to differentiate between the plurality of different UI elements 650, particularly when attempting to match the detected gaze trajectory 610 with one of the plurality of UI elements 650, to in turn determine an intended target UI element 650 associated with the gaze trajectory 610. 
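- As one illustrative construction (hypothetical; the disclosure does not mandate any particular parameterization), distinct display trajectories might be generated by giving each element its own phase and direction on a shared circular path, or its own pseudo-random smooth path derived from a per-element seed, so that each G(t) is unique but exactly reproducible by the system:

    import numpy as np

    def circular_trajectory(t, radius=1.0, phase=0.0, direction=1.0):
        """Display trajectory of one element moving on a circular path."""
        a = direction * t + phase
        return np.stack([radius * np.cos(a), radius * np.sin(a)], axis=1)

    def pseudo_random_trajectory(t, seed, n_harmonics=3):
        """Smooth pseudo-random display trajectory built from a low-frequency
        Fourier series; the seed makes the path unique to an element while
        remaining known to, and reproducible by, the system."""
        rng = np.random.default_rng(seed)
        amps = rng.uniform(0.1, 0.5, size=(n_harmonics, 2))
        phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_harmonics, 2))
        x = sum(a[0] * np.cos((k + 1) * t + p[0])
                for k, (a, p) in enumerate(zip(amps, phases)))
        y = sum(a[1] * np.sin((k + 1) * t + p[1])
                for k, (a, p) in enumerate(zip(amps, phases)))
        return np.stack([x, y], axis=1)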
- The schematic diagrams shown in FIGS. 7A through 7G illustrate the example display trajectory 630 of one of the example UI elements 650, together with the example gaze trajectory 610, in a variety of different slippage scenarios. In the example arrangement shown in FIGS. 7A-7G, the example display trajectory 630, i.e., the example trajectory, or path, followed by the example dynamic UI element 650, is substantially circular, simply for purposes of discussion and illustration. In some implementations, the display trajectory may follow other patterns, including pseudo-random patterns. Regardless of the pattern, or shape, or contour, of the display trajectory, the pattern, or shape, or contour, of the display trajectory associated with a particular UI element is known to the system, allowing the system to differentiate between the various UI elements, and to match the detected, or observed, gaze trajectory with one of the UI elements. 
- FIG. 7A illustrates a scenario in which the HMD 300 is positioned on the user's face, and an eye-to-screen calibration has been done. In the calibrated state shown in FIG. 7A, the display trajectory 630 (shown in solid lines) and the gaze trajectory 610 (shown in dashed lines) are substantially aligned. That is, the observed gaze trajectory 610 substantially matches, or coincides with, the display trajectory 630 of a UI element 650 being gazed upon by the user. The dynamic UI element 650, which moves along the display trajectory 630, is substantially aligned with the user gaze 615 at the particular point in time represented in FIG. 7A. 
- FIGS. 7B through 7G illustrate various scenarios in which some form of slippage of the HMD 300, from the state shown in FIG. 7A, has caused the observed, or detected, gaze trajectory 610 to be offset from the display trajectory 630 of the target UI element 650. While illustrated statically in FIGS. 7A through 7G, it is understood that the dynamic UI element 650 moves along the pattern defined by the display trajectory 630 (illustrated in solid lines), representing a function G(t). The gaze trajectory 610 (illustrated in dashed lines) may represent a function G′(t). The function G′(t) may provide a measure, or an estimate, of the pattern of the observed, or detected, gaze trajectory 610, to be matched to one of the UI elements 650 of the dynamic UI 600. In FIGS. 7A-7G, the detected user gaze 615 is shown at a point in time, corresponding to the point in time represented by the position of the dynamic UI element 650 along the display trajectory 630. In the examples shown in FIGS. 7B through 7G, due to slippage of the HMD 300, the user gaze targeted at the UI element 650 does not follow the display trajectory 630 of the target UI element 650. Rather, the gaze trajectory 610 is offset from the display trajectory 630 of the target UI element 650 due to slippage of the HMD 300. In particular, there is a translational offset T, and a scaling offset S, between the display trajectory 630 and the gaze trajectory 610 due to the slippage of the HMD 300. FIGS. 7B through 7G illustrate various example scenarios corresponding to different types of slippage of the HMD 300 and corresponding arrangements of the HMD 300 relative to the face/eyes of the user due to the slippage. 
- FIG. 7B illustrates an example in which there has been some translation of the HMD 300, with substantially homogeneous scaling between the display trajectory 630 and the observed gaze trajectory 610. As shown in FIG. 7B, the observed, or detected, user gaze trajectory 610 (shown in dashed lines) is now somewhat translated, or offset, relative to the display trajectory 630 (shown in solid lines) of the target UI element 650 due to the slippage of the HMD 300. In the example shown in FIG. 7B, the gaze trajectory 610 is offset, or translated, in the X direction (i.e., along an X axis) and the Y direction (i.e., along a Y axis), relative to the display trajectory 630. The translational offset, or translation, of the gaze trajectory 610 is represented by the vector T, extending from a central portion of the pattern representing the display trajectory 630 to a central portion of the pattern representing the gaze trajectory 610. The translational offset T may be defined by components Tx (offset, or translation, or shift in the X direction, or along the X axis) and Ty (offset, or translation, or shift in the Y direction, or along the Y axis). The translational offset may be in response to, for example, movement or displacement of the HMD 300 in the X direction and/or the Y direction (for example, relative to the calibrated position shown in FIG. 7A). In the example shown in FIG. 7B, the scaling offset S of the gaze trajectory 610 may be represented by scaling components Sx (scaling in the X direction, or along the X axis) and Sy (scaling in the Y direction, or along the Y axis). The scaling component Sx may represent a compression, or an expansion, along the X axis, between the display trajectory 630 and the gaze trajectory 610. The scaling component Sy may represent a compression, or an expansion, along the Y axis, between the display trajectory 630 and the gaze trajectory 610. The scaling offset may be in response to, for example, a movement of the HMD 300 closer to, or further away from, the face/eyes of the user, causing sensor(s) of the HMD 300 to be closer to and/or further away from the face/eyes of the user. In the example shown in FIG. 7B, the gaze trajectory 610 is scaled (scaled down, or reduced) relative to the display trajectory 630 due to the slippage of the HMD 300 in a substantially homogeneous, or uniform, manner, in that the scaling Sx in the X direction is substantially the same as the scaling Sy in the Y direction. In this example, realistic slippage of the HMD 300 results only in scaling (a substantially uniform scaling in the X and Y directions), and does not cause a deformation of the gaze trajectory 610. 
- Thus, the translation components Tx and Ty, together with the scaling components Sx and Sy, may define translation and/or scaling and/or deformation of the user gaze trajectory 610 due to slippage of the HMD 300. Translation may, at least in part, occur due to, for example, movement of the HMD 300 along the bridge of the nose, lateral (i.e., left to right) movement of the HMD 300 relative to the user's face, and the like. Scaling, in the form of compression and/or expansion, may, at least in part, occur due to repositioning of sensors due to slippage, positioning some sensors of the HMD 300 closer to and/or further away from the user's face. Once a match is made with the target UI element 650 of the dynamic UI 600, the translational offsets Tx, Ty and/or the scaling offsets Sx, Sy, between the gaze trajectory 610 and the known display trajectory 630, may be determined, and those offsets may be applied to gaze detection going forward, to provide for eye-to-screen recalibration. That is, once the gaze trajectory 610 is analyzed and matched with a display trajectory 630 of one of the UI elements 650, for selection of the target UI element 650, an eye-to-screen recalibration may be performed, using those offsets, to return the system to a calibrated state as shown in FIG. 7A, even in the slipped state of the HMD 300. 
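- A sketch of this determination (assuming synchronized samples of the known display trajectory and the estimated gaze trajectory; a per-axis least-squares fit of the linear model formalized as Equation 1 below, followed by its inversion for recalibration):

    import numpy as np

    def fit_offsets(display_traj, gaze_traj):
        """Least-squares fit of gaze ≈ S * display + T, independently per
        axis, returning scaling (Sx, Sy) and translation (Tx, Ty)."""
        S, T = np.empty(2), np.empty(2)
        for ax in range(2):
            # np.polyfit with degree 1 returns [slope, intercept].
            S[ax], T[ax] = np.polyfit(display_traj[:, ax], gaze_traj[:, ax], 1)
        return S, T

    def recalibrate(raw_gaze, S, T):
        """Invert the fitted slippage model to map raw gaze to screen space."""
        return (raw_gaze - T) / S

    t = np.linspace(0.0, 2.0 * np.pi, 200)
    disp = np.stack([np.cos(t), np.sin(t)], axis=1)             # trajectory 630
    gaze = disp * np.array([0.8, 0.9]) + np.array([0.3, -0.2])  # slipped gaze 610
    S, T = fit_offsets(disp, gaze)
    print(np.allclose(recalibrate(gaze, S, T), disp))           # True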
- FIG. 7C illustrates a situation in which slippage of the HMD 300 has caused a translational offset T (defined by the translation components Tx and Ty), and a scaling offset S (defined by the scaling components Sx and Sy). In the example shown in FIG. 7C, the observed, or detected, gaze trajectory 610 is now somewhat deformed due to the slippage of the HMD 300. That is, in this example, the scaling offset S includes a scaling Sx (in the X direction, or along the X axis) that is independent from the scaling Sy (in the Y direction, or along the Y axis), compared to the substantially homogeneous, or uniform, scaling in the example shown in FIG. 7B. In this example, the scaling component Sx may represent an expansion in the X direction (or along the X axis), and the scaling component Sy may represent a compression in the Y direction (or along the Y axis). This may indicate that the slippage of the HMD 300 has caused some sensor(s) of the HMD 300 to be positioned closer to the face/eyes of the user, and some sensor(s) to be positioned further from the face/eyes of the user (compared to the calibrated position shown in FIG. 7A). 
- FIG. 7D illustrates an implementation in which realistic slippage of the HMD 300 has caused a translational offset T and a scaling offset S including some level of shear Sxy, indicating non-uniform slippage of the HMD 300. FIG. 7E illustrates a situation in which slippage of the HMD 300 has caused a translational offset T, with homogeneous, or uniform, scaling S in the X direction and the Y direction. In the example shown in FIG. 7E, the slippage also includes some degree of rotation of the HMD 300, as noted by the offset position of the gaze 615 of the gaze trajectory 610 relative to the target UI element 650 on the display trajectory 630 at the illustrated point in time. FIG. 7F illustrates an implementation in which realistic slippage of the HMD 300 has caused a translational offset T, with independent scaling S in the X direction (i.e., expansion) and the Y direction (i.e., compression). In the example shown in FIG. 7F, the slippage also includes some degree of rotation of the HMD 300, as noted by the offset position of the gaze 615 of the gaze trajectory 610 relative to the target UI element 650 on the display trajectory 630 at the illustrated point in time. FIG. 7G illustrates a situation in which slippage of the HMD 300 has caused a translational offset T, and a scaling offset S, including shear Sxy. In the example shown in FIG. 7G, the slippage also includes some degree of rotation of the HMD 300, as noted by the offset position of the gaze 615 of the gaze trajectory 610 relative to the target UI element 650 on the display trajectory 630 at the illustrated point in time. 
- In some implementations, one or more algorithms may be implemented in determining a target UI element 650 based on a detected gaze trajectory 610 in the event of slippage of the HMD 300. For example, in some implementations, modeling of the gaze tracking (in an example of translational slippage) may be characterized by Equation 1 below. 
 G′(t)=SG(t)+T  Equation 1
 
- In Equation 1, G(t) may define the true display trajectory associated with a particular UI element. T may represent the translation component, and S may represent the scaling component, associated with the slippage of the HMD, as discussed above. G′(t), the (estimated) user gaze trajectory, may be determined using Equation 1. 
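- For instance, with a diagonal S, the per-axis least-squares estimates under Equation 1 take the standard regression form (stated here for the X axis; the Y axis is analogous):
 Sx=Cov(Gx(t),G′x(t))/Var(Gx(t)), Tx=mean(G′x(t))−Sx·mean(Gx(t))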
- In some implementations, this may be accomplished by matching a derivative of the estimated gaze trajectory 610 with a derivative of the movement pattern defined by the display trajectory 630 of the target UI element 650, characterized by Equation 2 below. In some situations, this derivative approach may be most readily applicable to a situation in which the translational offset is the main component of slippage, but may experience sensitivity to noise. 
 dG′(t)/dt=d[SG(t)]/dt≅S dG(t)/dt  Equation 2
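- One way such a derivative-based match might be sketched (the cosine-similarity criterion over finite differences is an assumption; as noted, differencing also amplifies measurement noise):

    import numpy as np

    def match_by_derivative(gaze_traj, display_trajs):
        """Match using trajectory derivatives, per Equation 2: finite
        differencing cancels any constant translational offset T, leaving
        velocity sequences compared here by average cosine similarity."""
        dg = np.diff(gaze_traj, axis=0)
        best, best_score = None, -np.inf
        for name, d in display_trajs.items():
            dd = np.diff(d, axis=0)
            num = np.sum(dg * dd, axis=1)
            den = np.linalg.norm(dg, axis=1) * np.linalg.norm(dd, axis=1) + 1e-12
            score = np.mean(num / den)   # 1.0 when velocity directions agree
            if score > best_score:
                best, best_score = name, score
        return best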
 
- In some implementations, the gaze trajectory G′(t) can be matched to UI element trajectories using a phase detection algorithm to match the gaze trajectory with the movement pattern of the target UI element. Such a phase detection algorithm may be less sensitive to noise, translational offset and scaling. However, phase detection algorithms, such as, for example, those based on the Fourier shift theorem, may rely on a sampling period that is on the order of, or longer than, the signal period. This may pose a challenge in balancing relatively rapid user interaction with the dynamic UI against the movement speed of the UI elements of the dynamic UI. 
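- A phase-detection test might be sketched as follows (an assumed realization, not the only one: the two trajectories are treated as complex signals x + iy, and, per the Fourier shift theorem, the peak of the normalized cross-power spectrum gives a match score that tolerates translation and uniform scaling):

    import numpy as np

    def phase_match_score(gaze_traj, display_traj):
        """Phase-correlation score between two 2D trajectories of equal
        length; values near 1.0 indicate a matching movement pattern."""
        g = gaze_traj[:, 0] + 1j * gaze_traj[:, 1]
        d = display_traj[:, 0] + 1j * display_traj[:, 1]
        g, d = g - g.mean(), d - d.mean()   # discard translational offset
        G, D = np.fft.fft(g), np.fft.fft(d)
        cross = G * np.conj(D)
        cross /= np.abs(cross) + 1e-12      # keep phase only (cross-power)
        corr = np.fft.ifft(cross)
        return float(np.max(np.abs(corr)))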
- As noted above, FIGS. 7E, 7F and 7G illustrate examples in which the slippage of the HMD 300 includes the translational and scaling components as described above, and also a rotational component, causing the detected gaze trajectory 610 to be offset from the display trajectory 630 of the target UI element 650. Rotational slippage may occur, at least in part, due to, for example, upward movement of a first lateral side of the HMD 300 and downward movement of a second lateral side of the HMD 300, and the like. In some implementations, modeling of the gaze tracking (in an example including rotational slippage) may be characterized by Equation 3 below. 
 G′(t)=SRG(t)+T  Equation 3
 
- In Equation 3, S may represent the scaling component of slippage, R may represent the rotational component of slippage, G(t) may define the true display trajectory associated with a particular UI element, and T may represent the translation component of slippage. In this manner, G′(t), the (estimated) user gaze trajectory, in a situation including rotational slippage of the HMD 300, may be determined using Equation 3. 
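- Equation 3 might be fit, for example, with the standard Procrustes/Umeyama closed-form solution for a similarity transform (a sketch assuming uniform scaling; a non-uniform S would call for a general affine fit instead):

    import numpy as np

    def fit_similarity(display_traj, gaze_traj):
        """Closed-form fit of gaze ≈ S * R @ display + T (uniform scale S,
        2x2 rotation R, translation T) via the Umeyama/Procrustes method."""
        mu_d, mu_g = display_traj.mean(axis=0), gaze_traj.mean(axis=0)
        D, G = display_traj - mu_d, gaze_traj - mu_g
        U, sing, Vt = np.linalg.svd(G.T @ D / len(D))  # 2x2 cross-covariance
        sign = np.sign(np.linalg.det(U @ Vt))          # guard against reflection
        R = U @ np.diag([1.0, sign]) @ Vt
        S = (sing[0] + sign * sing[1]) / np.mean(np.sum(D ** 2, axis=1))
        T = mu_g - S * R @ mu_d
        return S, R, T

    t = np.linspace(0.0, 2.0 * np.pi, 100)
    disp = np.stack([np.cos(t), np.sin(t)], axis=1)
    Rt = np.array([[np.cos(0.2), -np.sin(0.2)], [np.sin(0.2), np.cos(0.2)]])
    gaze = 0.85 * disp @ Rt.T + np.array([0.2, -0.1])
    S, R, T = fit_similarity(disp, gaze)  # recovers ~0.85, Rt, (0.2, -0.1)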
- In some implementations, slippage may include independent scaling offsets Sx, Sy (i.e., non-homogeneous, or non-uniform Sx, Sy resulting in expansion and/or compression in the X and/or Y directions), as described above. Independent offset and scaling factors on each of the X and Y axes may be represented by Equations 4 and 5 below. 
 x′=a1x+a0  Equation 4
 
 y′=b1y+b0  Equation 5
 
- An example dynamic UI, such as, for example, the dynamic UI 600 described above, may include some number N of dynamic UI elements, such as, for example, the dynamic UI elements 650 described above. When modeling slippage with respect to a dynamic UI having N UI elements, there are N+1 hypotheses as to which of the UI elements is the target UI element (i.e., the UI element intended for selection by the detected user gaze). That is, the user gaze may be directed to one of the N UI elements, plus a hypothesis allowing for the situation in which the user gaze is not intended for any of the N UI elements. For each of the N hypotheses, the detected gaze including these offsets and independent scalings Sx and Sy may be modeled and fit to one of the UI elements by solving the linear Equations 6 and 7. When translation offsets are combined with shear, the detected gaze including these offsets may be modeled and fit to one of the UI elements by solving Equation 8 for each of the hypotheses and selecting the best fit. This method has been shown to be resistant to shear and small amounts of rotational slippage. 
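- A sketch of this hypothesis test (assuming the per-axis model of Equations 4 and 5; the residual threshold standing in for the "no element targeted" hypothesis is a hypothetical tuning parameter):

    import numpy as np

    def select_target(gaze_traj, display_trajs, reject_threshold=0.05):
        """Fit the gaze to each element's trajectory with independent
        per-axis scale and offset, keep the best fit, and return None when
        even the best residual is too large (the N+1-th hypothesis)."""
        best, best_err, best_fit = None, np.inf, None
        for name, d in display_trajs.items():
            fit, resid = [], 0.0
            for ax in range(2):
                c, res, *_ = np.polyfit(d[:, ax], gaze_traj[:, ax], 1, full=True)
                fit.append(tuple(c))                  # (a1, a0) or (b1, b0)
                resid += res[0] if len(res) else 0.0
            err = resid / len(gaze_traj)
            if err < best_err:
                best, best_err, best_fit = name, err, fit
        if best_err > reject_threshold:
            return None, None
        return best, best_fit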
 
- FIG. 8 is a flowchart of an example method 800 of detecting a user gaze input in an AR/VR environment, in accordance with implementations described herein. 
- A dynamic UI, such as the example dynamic UI 600 described above, may be displayed to a user in an AR/VR environment via, for example, a display device of an HMD 300 such as, for example, glasses, goggles and the like. The dynamic UI 600 may include a plurality of dynamic UI elements, such as the example dynamic UI elements 650 described above, each of the dynamic UI elements 650 moving along a respective independent display trajectory 630 that is known by the system. One or more of the dynamic UI elements 650 may be selectable by the user using a number of different input modes, including, for example, a gaze input mode. 
- The system may detect a user gaze directed to the dynamic UI 600 including the plurality of dynamic UI elements 650 (block 810). If it is determined that the gaze input coincides with one of the dynamic UI elements 650, then the system may set the identified dynamic UI element 650 as the target UI element, and select the UI element 650 for further action (blocks 820, 880, 890). If it is determined that the gaze input does not coincide with one of the dynamic UI elements 650 (block 820), then the system may track the gaze trajectory 610 associated with the detected user gaze (block 830). The system may compare the tracked user gaze trajectory 610 with the known display trajectories 630 corresponding to the dynamic UI elements 650 (block 840) to match the user gaze trajectory 610 with a display trajectory 630 of one of the dynamic UI elements 650 (block 850). In response to matching of the user gaze trajectory 610 with the display trajectory 630 of one of the dynamic UI elements 650, the system may set the identified UI element 650 as the target UI element, and select the target UI element for further action (blocks 850, 880, 890). The system may then determine offsets between the user gaze trajectory 610 and the display trajectory 630 of the target UI element 650 (block 860). The offsets may include, for example, at least one of a translational offset, a scaling offset, or a rotational offset. The system may use the determined offsets to perform a recalibration (block 870). The process may continue until the session is terminated (block 895). 
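- The flow of FIG. 8 might be sketched as follows (the system interface and its method names are hypothetical; select_target and fit_offsets are as sketched earlier):

    def gaze_input_loop(system):
        """Sketch of method 800; block numbers from FIG. 8 in comments."""
        while not system.session_terminated():                    # block 895
            gaze = system.detect_gaze()                           # block 810
            target = system.point_hit_test(gaze)                  # block 820
            if target is None:
                traj = system.track_gaze_trajectory()             # block 830
                displays = system.display_trajectories()          # block 840
                target, _ = select_target(traj, displays)         # block 850
                if target is not None:
                    S, T = fit_offsets(displays[target], traj)    # block 860
                    system.apply_recalibration(S, T)              # block 870
            if target is not None:
                system.select(target)                             # blocks 880, 890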
- FIG. 9 shows an example of a computer device 1000 and a mobile computer device 1050, which may be used with the techniques described here. Computing device 1000 includes a processor 1002, memory 1004, a storage device 1006, a high-speed interface 1008 connecting to memory 1004 and high-speed expansion ports 1010, and a low-speed interface 1012 connecting to low-speed bus 1014 and storage device 1006. Each of the components 1002, 1004, 1006, 1008, 1010, and 1012 is interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 1002 can process instructions for execution within the computing device 1000, including instructions stored in the memory 1004 or on the storage device 1006 to display graphical information for a GUI on an external input/output device, such as display 1016 coupled to high-speed interface 1008. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 1000 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). 
- The memory 1004 stores information within the computing device 1000. In one implementation, the memory 1004 is a volatile memory unit or units. In another implementation, the memory 1004 is a non-volatile memory unit or units. The memory 1004 may also be another form of computer-readable medium, such as a magnetic or optical disk. 
- The storage device 1006 is capable of providing mass storage for the computing device 1000. In one implementation, the storage device 1006 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 1004, the storage device 1006, or memory on processor 1002. 
- The high-speed controller 1008 manages bandwidth-intensive operations for the computing device 1000, while the low-speed controller 1012 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 1008 is coupled to memory 1004, display 1016 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 1010, which may accept various expansion cards (not shown). In the implementation, low-speed controller 1012 is coupled to storage device 1006 and low-speed expansion port 1014. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter. 
- The computing device 1000 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 1020, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 1024. In addition, it may be implemented in a personal computer such as a laptop computer 1022. Alternatively, components from computing device 1000 may be combined with other components in a mobile device (not shown), such as device 1050. Each of such devices may contain one or more of computing device 1000, 1050, and an entire system may be made up of multiple computing devices 1000, 1050 communicating with each other. 
- Computing device 1050 includes a processor 1052, memory 1064, an input/output device such as a display 1054, a communication interface 1066, and a transceiver 1068, among other components. The device 1050 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 1050, 1052, 1064, 1054, 1066, and 1068 is interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate. 
- The processor 1052 can execute instructions within the computing device 1050, including instructions stored in the memory 1064. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 1050, such as control of user interfaces, applications run by device 1050, and wireless communication by device 1050. 
- Processor 1052 may communicate with a user through control interface 1058 and display interface 1056 coupled to a display 1054. The display 1054 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 1056 may comprise appropriate circuitry for driving the display 1054 to present graphical and other information to a user. The control interface 1058 may receive commands from a user and convert them for submission to the processor 1052. In addition, an external interface 1062 may be provided in communication with processor 1052, so as to enable near area communication of device 1050 with other devices. External interface 1062 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used. 
- The memory 1064 stores information within the computing device 1050. The memory 1064 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 1074 may also be provided and connected to device 1050 through expansion interface 1072, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 1074 may provide extra storage space for device 1050, or may also store applications or other information for device 1050. Specifically, expansion memory 1074 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 1074 may be provided as a security module for device 1050, and may be programmed with instructions that permit secure use of device 1050. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner. 
- The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 1064, expansion memory 1074, or memory on processor 1052, that may be received, for example, over transceiver 1068 or external interface 1062. 
- Device 1050 may communicate wirelessly through communication interface 1066, which may include digital signal processing circuitry where necessary. Communication interface 1066 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 1068. In addition, short-range communication may occur, such as using a Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 1070 may provide additional navigation- and location-related wireless data to device 1050, which may be used as appropriate by applications running on device 1050. 
- Device 1050 may also communicate audibly using audio codec 1060, which may receive spoken information from a user and convert it to usable digital information. Audio codec 1060 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 1050. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 1050. 
- The computing device 1050 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 1080. It may also be implemented as part of a smart phone 1082, personal digital assistant, or other similar mobile device. 
- Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. 
- These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. 
- To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input. 
- The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet. 
- The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. 
- In some implementations, the computing devices depicted in FIG. 10 can include sensors that interface with a virtual reality (VR) headset/HMD device 1090. For example, one or more sensors included on a computing device 1050, or other computing device depicted in FIG. 10, can provide input to VR headset 1090 or, in general, provide input to a VR space. The sensors can include, but are not limited to, a touchscreen, accelerometers, gyroscopes, pressure sensors, biometric sensors, temperature sensors, humidity sensors, and ambient light sensors. The computing device 1050 can use the sensors to determine an absolute position and/or a detected rotation of the computing device in the VR space that can then be used as input to the VR space. For example, the computing device 1050 may be incorporated into the VR space as a virtual object, such as a controller, a laser pointer, a keyboard, a weapon, etc. Positioning of the computing device/virtual object by the user, when incorporated into the VR space, can allow the user to view the virtual object in certain manners in the VR space. For example, if the virtual object represents a laser pointer, the user can manipulate the computing device as if it were an actual laser pointer. The user can move the computing device left and right, up and down, in a circle, etc., and use the device in a similar fashion to using a laser pointer. 
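- By way of illustration only, the following is a minimal sketch of how a device orientation reading could be mapped to the pointing direction of such a virtual laser pointer. It assumes a quaternion-valued orientation sensor; the quaternion source and the scene objects are hypothetical placeholders, not the API of any particular platform.

```python
# Minimal sketch (not a platform API): map a hypothetical handheld device's
# orientation quaternion to the pointing direction of a virtual laser pointer.
import numpy as np

def quaternion_to_forward(q):
    """Rotate the device's forward axis (-Z) by orientation quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    # Negated third column of the quaternion's rotation matrix: the rotated -Z axis.
    forward = -np.array([
        2.0 * (x * z + w * y),
        2.0 * (y * z - w * x),
        1.0 - 2.0 * (x * x + y * y),
    ])
    return forward / np.linalg.norm(forward)

# Hypothetical usage: scene.laser_pointer.direction = quaternion_to_forward(device_quaternion)
print(quaternion_to_forward((1.0, 0.0, 0.0, 0.0)))  # identity orientation -> [0, 0, -1]
```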
- In some implementations, one or more input devices included on, or connected to, the computing device 1050 can be used as input to the VR space. The input devices can include, but are not limited to, a touchscreen, a keyboard, one or more buttons, a trackpad, a touchpad, a pointing device, a mouse, a trackball, a joystick, a camera, a microphone, earphones or buds with input functionality, a gaming controller, or other connectable input device. A user interacting with an input device included on the computing device 1050 when the computing device is incorporated into the VR space can cause a particular action to occur in the VR space. 
- In some implementations, a touchscreen of the computing device 1050 can be rendered as a touchpad in the VR space. A user can interact with the touchscreen of the computing device 1050. The interactions are rendered, in VR headset 1090 for example, as movements on the rendered touchpad in the VR space. The rendered movements can control virtual objects in the VR space. 
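- A minimal sketch of such a mapping follows, assuming the rendered touchpad is parameterized as a plane with an origin and two edge vectors; all names and coordinates are illustrative assumptions.

```python
# Minimal sketch: map a physical touchscreen contact (pixels) to a point on a
# virtual touchpad plane in the VR space. The plane parameterization (origin
# plus two edge vectors) and all names are assumptions.
def touch_to_virtual_pad(touch_px, screen_size, pad_origin, pad_u, pad_v):
    """Return the 3D point on the virtual pad corresponding to a screen touch."""
    u = touch_px[0] / screen_size[0]  # normalized horizontal position, 0..1
    v = touch_px[1] / screen_size[1]  # normalized vertical position, 0..1
    return tuple(pad_origin[i] + u * pad_u[i] + v * pad_v[i] for i in range(3))

# A touch at the screen center lands at the center of the virtual pad.
print(touch_to_virtual_pad((540, 960), (1080, 1920),
                           pad_origin=(0.0, 1.0, -0.5),
                           pad_u=(0.2, 0.0, 0.0),
                           pad_v=(0.0, -0.1, 0.0)))
```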
- In some implementations, one or more output devices included on the computing device 1050 can provide output and/or feedback to a user of the VR headset 1090 in the VR space. The output and feedback can be visual, tactile, or audio. The output and/or feedback can include, but is not limited to, vibrations, turning on and off or blinking and/or flashing of one or more lights or strobes, sounding an alarm, playing a chime, playing a song, and playing of an audio file. The output devices can include, but are not limited to, vibration motors, vibration coils, piezoelectric devices, electrostatic devices, light emitting diodes (LEDs), strobes, and speakers. 
- In some implementations, the computing device 1050 may appear as another object in a computer-generated, 3D environment. Interactions by the user with the computing device 1050 (e.g., rotating, shaking, touching a touchscreen, swiping a finger across a touchscreen) can be interpreted as interactions with the object in the VR space. In the example of the laser pointer in a VR space, the computing device 1050 appears as a virtual laser pointer in the computer-generated, 3D environment. As the user manipulates the computing device 1050, the user in the VR space sees movement of the laser pointer. The user receives feedback from interactions with the computing device 1050 in the VR environment on the computing device 1050 or on the VR headset 1090. 
- In some implementations, a computing device 1050 may include a touchscreen. For example, a user can interact with the touchscreen in a particular manner, and what happens on the touchscreen can be mimicked by what happens in the VR space. For example, a user may use a pinching-type motion to zoom content displayed on the touchscreen. This pinching-type motion on the touchscreen can cause information provided in the VR space to be zoomed. In another example, the computing device may be rendered as a virtual book in a computer-generated, 3D environment. In the VR space, the pages of the book can be displayed in the VR space, and the swiping of a finger of the user across the touchscreen can be interpreted as turning/flipping a page of the virtual book. As each page is turned/flipped, in addition to seeing the page contents change, the user may be provided with audio feedback, such as the sound of the turning of a page in a book. 
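- For illustration only, a pinching-type zoom could be derived from the change in separation between two touch points and applied to the VR content's scale. This is a minimal sketch; the touch data layout is an assumption.

```python
# Minimal sketch: derive a zoom factor from a two-finger pinch on the physical
# touchscreen. Data layout (pairs of (x, y) touch points) is an assumption.
import math

def pinch_zoom_factor(prev_touches, curr_touches):
    """Zoom factor from the change in finger separation; >1 zooms in, <1 zooms out."""
    def separation(touches):
        (x0, y0), (x1, y1) = touches
        return math.hypot(x1 - x0, y1 - y0)
    d_prev = separation(prev_touches)
    d_curr = separation(curr_touches)
    return d_curr / d_prev if d_prev > 0 else 1.0

# Fingers moving apart from 100 px to 150 px -> 1.5x zoom of the VR content.
print(pinch_zoom_factor([(0, 0), (100, 0)], [(0, 0), (150, 0)]))
```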
- In some implementations, one or more input devices in addition to the computing device (e.g., a mouse, a keyboard) can be rendered in a computer-generated, 3D environment. The rendered input devices (e.g., the rendered mouse, the rendered keyboard) can be used as rendered in the VR space to control objects in the VR space. 
- Computing device 1000 is intended to represent various forms of digital computers and devices, including, but not limited to, laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 1050 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document. 
- A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the specification. 
- In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Further, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims. 
- While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes, and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the implementations. It should be understood that the implementations have been presented by way of example only, not limitation, and various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The implementations described herein can include various combinations and/or sub-combinations of the functions, components, and/or features of the different implementations described. 
- In the following, some examples are described. 
- Example 1: A computer-implemented method, comprising displaying, by a display device of a head mounted display (HMD) device, a virtual user interface (UI), the virtual UI including a plurality of UI elements; detecting a user gaze directed at the virtual UI; detecting a gaze trajectory corresponding to the detected user gaze; matching the detected gaze trajectory to a display trajectory associated with a UI element of the plurality of UI elements; identifying the UI element as a target UI element; determining an offset between the gaze trajectory and the display trajectory associated with the target UI element, the offset including at least one of a translational offset, a scaling offset, or a rotational offset; and recalibrating a user gaze interaction mode based on the determined offset. 
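- For illustration only, the following sketch shows one possible realization of the matching step of Example 1, under assumed data structures: the gaze trajectory and each display trajectory are arrays of (x, y) samples taken at the same timestamps. Selecting the element whose trajectory correlates best with the gaze trajectory is one plausible matching criterion, not the only one.

```python
# Minimal sketch of the matching step: pick the UI element whose display
# trajectory best correlates with the detected gaze trajectory. Trajectories
# are assumed to be (N, 2) arrays of (x, y) samples at shared timestamps.
import numpy as np

def match_target(gaze_traj, display_trajs):
    """Return the index of the best-matching display trajectory."""
    def axis_corr(a, b):
        return np.corrcoef(a, b)[0, 1]
    scores = [
        0.5 * (axis_corr(gaze_traj[:, 0], traj[:, 0]) +
               axis_corr(gaze_traj[:, 1], traj[:, 1]))
        for traj in display_trajs
    ]
    return int(np.argmax(scores))
```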
- Example 2: The method of example 1, wherein determining the offset includes determining the translational offset, including a first translational offset in a first direction; and a second translational offset in a second direction; and determining the scaling offset, including a compression or an expansion in the first direction; and a compression or an expansion in the second direction. 
- Example 3: The method of example 2, wherein recalibrating the user gaze interaction mode includes resetting the detected user gaze with the virtual UI and the plurality of UI elements to compensate for the first and second translational offsets and the first and second scaling offsets. 
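- By way of example only, the translational and scaling offsets of Examples 2 and 3 could be recovered with a per-axis least-squares fit of the gaze samples against the display samples, and compensated by inverting the fitted model. This is a minimal sketch under those assumptions; a rotational offset would require a fuller fit (e.g., a similarity transform).

```python
# Minimal sketch: per-axis least-squares fit gaze = scale * display + translation,
# then compensation by inverting the fit. A rotational offset is ignored here.
import numpy as np

def estimate_offsets(gaze_traj, target_traj):
    """Return ((scale_x, scale_y), (t_x, t_y)) from matched (N, 2) trajectories."""
    scale_x, t_x = np.polyfit(target_traj[:, 0], gaze_traj[:, 0], 1)
    scale_y, t_y = np.polyfit(target_traj[:, 1], gaze_traj[:, 1], 1)
    return (scale_x, scale_y), (t_x, t_y)

def recalibrate(raw_gaze_point, scale, translation):
    """Map a raw gaze sample back onto the display by inverting the fitted model."""
    return ((raw_gaze_point[0] - translation[0]) / scale[0],
            (raw_gaze_point[1] - translation[1]) / scale[1])
```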
- Example 4: The method of example 1, wherein the virtual UI is a dynamic UI including a plurality of dynamic UI elements, each of the plurality of dynamic UI elements having a respective display trajectory defining a pattern of movement of the respective dynamic UI element. 
- Example 5: The method of example 4, wherein each dynamic UI element of the plurality of dynamic UI elements has a unique display trajectory defining a unique pattern of movement for the dynamic UI element. 
- Example 6: The method of example 5, wherein the display trajectory comprises a linear part, a circular part, or a curved part. 
- Example 7: The method of example 5, wherein each dynamic UI element of the plurality of dynamic UI elements has a pseudo-random display trajectory defining a pseudo-random pattern of movement for the dynamic UI element. 
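- As an illustration of Example 7 only, a pseudo-random display trajectory could be generated from a seeded random number generator, so that the pattern of movement is hard to anticipate yet reproducible by the system that must match gaze against it. The step size and turn variance below are assumptions of the sketch.

```python
# Minimal sketch: a seeded, smooth pseudo-random 2D trajectory. The seed makes
# the path reproducible, so the renderer and the gaze matcher can agree on it.
import numpy as np

def pseudo_random_trajectory(seed, n_samples, step=0.004, turn_sigma=0.3):
    """Return an (n_samples, 2) array of positions along a random smooth path."""
    rng = np.random.default_rng(seed)
    heading = rng.uniform(0.0, 2.0 * np.pi)
    points = np.zeros((n_samples, 2))
    for i in range(1, n_samples):
        heading += rng.normal(0.0, turn_sigma)  # small random change of direction
        points[i] = points[i - 1] + step * np.array([np.cos(heading), np.sin(heading)])
    return points
```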
- Example 8: The method of at least one of examples 1 to 7, wherein detecting the user gaze and detecting the gaze trajectory includes tracking, by an eye tracking system of the HMD, a user eye gaze; and detecting the gaze trajectory based on the tracked user eye gaze relative to the virtual UI and the plurality of UI elements. 
- Example 9: The method of example 8, wherein tracking the user eye gaze includes emitting, by one or more light sources of the HMD, light towards the eyes of the user; detecting, by one or more light sensors of the HMD, reflection of the light, emitted by the one or more light sources, by the eyes of the user; and tracking the user eye gaze based on the detected reflection. 
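- For Example 9, one common way to turn emitted-light reflections into a gaze estimate is the pupil-minus-glint approach: the vector from the corneal reflection (glint) of a light source to the pupil center is mapped to display coordinates by calibration coefficients. The sketch below assumes an affine mapping and precomputed coefficients; real eye trackers typically use richer models.

```python
# Minimal sketch of a pupil-minus-glint gaze estimate. The 2x3 affine
# calibration matrix is an assumption; it would be fitted during an
# initial calibration procedure.
import numpy as np

def gaze_from_pupil_and_glint(pupil_center, glint_center, calib):
    """pupil_center, glint_center: (x, y) in eye-camera pixels; calib: (2, 3) array."""
    v = np.array([pupil_center[0] - glint_center[0],
                  pupil_center[1] - glint_center[1],
                  1.0])
    return calib @ v  # estimated (x, y) gaze point on the display
```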
- Example 10: An electronic device, including a display; a sensing system; at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the electronic device to display, by the display, a virtual user interface (UI), the virtual UI including a plurality of UI elements; to detect a user gaze directed at the virtual UI; to detect a gaze trajectory corresponding to the detected user gaze; to match the detected gaze trajectory to a display trajectory associated with a UI element of the plurality of UI elements; to identify the UI element as a target UI element; to determine an offset between the gaze trajectory and the display trajectory associated with the target UI element, the offset including at least one of a translational offset, a scaling offset, or a rotational offset; and to recalibrate a user gaze interaction mode based on the determined offset. 
- Example 11: The device of example 10, wherein, in determining the offset, the instructions cause the at least one processor to determine the translational offset, including a first translational offset in a first direction; and a second translational offset in a second direction; and to determine the scaling offset, including a compression or an expansion in the first direction; and a compression or an expansion in the second direction. 
- Example 12: The device of example 11, wherein, in recalibrating the user gaze interaction mode, the instructions cause the at least one processor to reset the detected user gaze with the virtual UI and the plurality of UI elements to compensate for the first and second translational offsets and the first and second scaling offsets. 
- Example 13: The device of example 12, wherein the virtual UI is a dynamic UI including a plurality of dynamic UI elements, each dynamic UI element of the plurality of dynamic UI elements having a respective display trajectory defining a unique pattern of movement for the respective dynamic UI element. 
- Example 14: The device of example 13, wherein the display trajectory comprises a linear part, a circular part, or a curved part. 
- Example 15: The device of example 13, wherein each dynamic UI element of the plurality of dynamic UI elements has a pseudo-random display trajectory defining a pseudo-random pattern of movement for the dynamic UI element. 
- Example 16: The device of at least one of examples 10 to 15, wherein, in detecting the user gaze and detecting the gaze trajectory, the instructions cause the at least one processor to track, by an eye tracking system of the electronic device, a user eye gaze, including to emit, by one or more light sources, light towards the eyes of the user; to detect, by one or more light sensors, reflection of the light, emitted by the one or more light sources, by the eyes of the user; and to track the user eye gaze based on the detected reflection. 
- Example 17: The device of at least one of examples 10 to 16, wherein the electronic device is a head mounted display (HMD) device, and the offset between the gaze trajectory and the display trajectory associated with the target UI element is due to movement of the HMD device relative to the eyes of the user after initial calibration of the HMD device. 
- Example 18: A non-transitory, computer-readable medium having instructions stored thereon that, when executed by a computing device, cause the computing device to display, by a display device of the computing device, a virtual user interface (UI), the virtual UI including a plurality of UI elements; to detect a user gaze directed at the virtual UI; to detect a gaze trajectory corresponding to the detected user gaze; to match the detected gaze trajectory to a display trajectory associated with a UI element of the plurality of UI elements; to identify the UI element as a target UI element; to determine an offset between the gaze trajectory and the display trajectory associated with the target UI element, the offset including at least one of a translational offset, a scaling offset, or a rotational offset; and to recalibrate a user gaze interaction mode based on the determined offset.