Detailed Description
FIG. 1 illustrates a schematic diagram of an example implementation of a computing device for attracting a gaze of a viewer of a display device. As described in more detail below, the computing device uses gaze tracking data from a gaze tracking system to monitor the gaze location of a viewer. A gaze attraction program controls the display device to display movement of a guide element along a calculated dynamic path that passes inside a predetermined area proximate to the viewer's gaze location and leads to a target object. If the viewer's gaze does not leave the guide element, the guide element continues to move along the calculated dynamic path toward the target object. If the viewer's gaze deviates from the guide element by at least a predetermined deviation threshold, the guide element is no longer displayed moving along the calculated dynamic path toward the target object.
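For illustration only, a minimal sketch of this continue-or-stop decision is shown below, assuming gaze and guide-element positions are reported as 2D display coordinates; the function name and the threshold representation are illustrative assumptions, not part of the disclosure.

```python
import math

def should_continue_along_path(gaze_location, guide_location, deviation_threshold):
    """Return True while the viewer's gaze stays within the predetermined
    deviation threshold of the guide element, so that the guide element keeps
    moving along the calculated dynamic path toward the target object.

    gaze_location, guide_location: (x, y) display coordinates (assumed).
    deviation_threshold: radius in the same display units (assumed).
    """
    deviation = math.hypot(gaze_location[0] - guide_location[0],
                           gaze_location[1] - guide_location[1])
    return deviation < deviation_threshold
```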
In various examples, the computing device may be physically separate from, or integrated into, a display device with which a viewer may interact. FIG. 1 schematically shows an example of a computing device 10 physically separated from a display device 14. In this example, computing device 10 may comprise or may be integrated into a separate device, such as: a set-top box, a game console, a web cam, a head-mounted or other wearable computing device, a keyboard, a dedicated peripheral, or other similar device that does not include an integrated display.
Computing device 10 may be operatively connected with display device 14 using a wired connection, or may employ a wireless connection via Wi-Fi, Bluetooth, or any other suitable wireless communication protocol. For example, computing device 10 may be communicatively coupled to network 16. The network 16 may take the form of a Local Area Network (LAN), a Wide Area Network (WAN), a wired network, a wireless network, a personal area network, or a combination thereof, and may include the Internet. Additional details regarding the components and computing aspects of computing device 10 are described in greater detail below with reference to FIG. 8.
Fig. 1 also shows an example of a computing device 12 integrated into a Head Mounted Display (HMD) device 18. The HMD device 18 may create and display a virtual reality environment or a mixed reality environment to the first viewer 22. In these examples, the HMD device 18 may include a display program 26 that may generate a virtual environment or a mixed reality environment for display via the HMD device. The virtual environment may include one or more visual elements in the form of virtual images (e.g., three-dimensional (3D) holographic objects and two-dimensional (2D) virtual images) that are generated and displayed via the HMD device 18. In a mixed reality environment, the HMD device 18 may enable a viewer to view such holographic objects and virtual images within the physical environment surrounding the viewer.
As described in more detail below, in some examples, the HMD device 18 may include a transparent, translucent, or non-transparent display supported in front of one or both eyes of the viewer. The HMD device 18 may include various sensors and related systems that receive physical environment data from the physical environment. For example, the HMD device 18 may include a depth sensor system 30, the depth sensor system 30 including one or more depth cameras that generate depth image data.
In some examples, the HMD device 18 may include an optical sensor system 32 that utilizes at least one externally facing sensor (e.g., an RGB camera or other optical sensor). An externally facing sensor may capture two-dimensional information from a physical environment. The HMD device 18 may also include a position sensor system 34, the position sensor system 34 including one or more accelerometers, gyroscopes, head tracking systems, and/or other sensors for determining the position or orientation of the user.
The HMD device 18 may also include a transducer system 38, the transducer system 38 including one or more actuators that convert an electrical signal into another form of energy. In some examples, the transducer system 38 may include one or more speakers for providing audio feedback to a viewer. In other examples, the transducer system 38 may include one or more tactile transducers for generating and providing tactile feedback (e.g., vibrations) to a viewer. The HMD device 18 may also include a microphone system 42 and one or more microphones for receiving audio input from the physical environment.
Additionally, the example shown in fig. 1 shows the computing device 12 integrated into the HMD device 18. It should be understood that in other examples, the computing device may be a separate component from the HMD device 18. Many types and configurations of HMD devices 18 with various form factors may be used and are within the scope of this disclosure. A more detailed description of an example HMD device is provided below with reference to fig. 6.
It should also be understood that computing device 12 may include or be integrated into any other suitable type or form of display device, such as a tablet computer, notebook computer, smart phone, or other mobile computing device, desktop computing device, stand-alone monitor, wall-mounted display, interactive whiteboard, or other similar device having an integrated display. Such devices may also include a gaze tracking system, as described in more detail below.
Both computing device 10 and computing device 12 may include gaze attraction programs 46 that may be stored in mass storage 40. The gaze attraction program 46 may be loaded into memory 48 and executed by the processor 52 to perform one or more of the methods and processes described in more detail below.
Computing devices 10 and 12 may receive gaze tracking data 50 from gaze tracking system 54. In various examples, gaze tracking system 54 may be located in display device 14, in HMD device 18, or in a common housing with any other suitable type or form of display device (including but not limited to those example devices with integrated displays discussed above). In other examples, gaze tracking system 54 and computing device 10 may be integrated into a common housing that does not include an integrated display, which may be, for example, a head-mounted or other wearable device, or any other suitable type or form of computing device that does not include an integrated display (including but not limited to those example devices discussed above that do not have an integrated display).
With continued reference to fig. 1, the example display device 14 may include a display system 58 for presenting one or more visual elements to a second viewer 62. As described in more detail below, the gaze attraction program 46 may utilize gaze tracking data 50 from the gaze tracking system 54 to attract the gaze of the viewer via guide elements displayed by the display device 14, HMD 18, or other display device.
Referring now to FIGS. 2-5, a description of an example use case will now be provided. FIG. 2 is a schematic diagram of several viewers in a room 200 interacting with computing devices and display devices that use gaze tracking data from a gaze tracking system to attract the viewers' gaze. In one example, the viewer Alex 202 is viewing a movie 206 displayed on a wall-mounted display 210. In this example, the wall-mounted display 210 is communicatively coupled to a set-top box 214, the set-top box 214 including the gaze tracking system 54 and a computing device including the gaze attraction program 46.
Referring now to FIG. 3, in one example, a producer of the movie 206 may desire to draw the viewer's attention to a coffee house 302 displayed in a scene of the movie. To attract the attention of the viewer, the gaze attraction program 46 may be configured to control the display device 210 to display a guide element. In this example, the guide element comprises a bird 306, which may be a computer-generated image added to the movie scene. The gaze attraction program 46 may use gaze tracking data from the gaze tracking system 54 to monitor the gaze location of the viewer Alex 202 on the wall-mounted display 210. For example, and as shown in FIG. 3, the gaze tracking system 54 may use gaze tracking data 50 to determine that the viewer Alex 202 is currently gazing at gaze location 308.
The bird 306 may be displayed moving along a calculated dynamic path 310 leading to the coffee house 302. In addition, and to attract the attention of the viewer Alex 202, the calculated dynamic path 310 may pass inside a predetermined area 314 proximate to the gaze location 308 of the viewer Alex 202. Additionally, and to minimize disruption or distraction to the viewing experience of the viewer Alex 202, the dynamic path 310 may be calculated in a manner that makes the movement of the bird 306 appear natural and realistic. Advantageously, utilizing such a dynamic path may enable the viewer Alex 202 to continue watching and enjoying the movie 206 without feeling that his attention is being manipulated or intentionally diverted.
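One possible construction of such a path is sketched below, assuming 2D display coordinates: a quadratic Bezier curve from the guide element's starting position to the target that is forced through a waypoint just inside the predetermined area around the gaze location. The function name, the Bezier formulation, and the 0.5 placement factor are illustrative assumptions, not taken from the disclosure.

```python
import math

def calculated_dynamic_path(start, gaze_location, target, radius_r, samples=50):
    """Sketch of a dynamic path from `start` to `target` that passes inside a
    circular predetermined area of radius `radius_r` centered on `gaze_location`.

    A quadratic Bezier curve is used purely for illustration.  The curve is
    forced through a waypoint placed just inside the predetermined area and
    offset toward the target, so the motion looks smooth and natural.
    """
    dx, dy = target[0] - gaze_location[0], target[1] - gaze_location[1]
    norm = math.hypot(dx, dy) or 1.0
    # Waypoint halfway between the gaze location and the edge of the area.
    waypoint = (gaze_location[0] + 0.5 * radius_r * dx / norm,
                gaze_location[1] + 0.5 * radius_r * dy / norm)
    # Control point chosen so the curve passes exactly through the waypoint at t = 0.5.
    control = (2 * waypoint[0] - 0.5 * (start[0] + target[0]),
               2 * waypoint[1] - 0.5 * (start[1] + target[1]))
    path = []
    for i in range(samples + 1):
        t = i / samples
        x = (1 - t) ** 2 * start[0] + 2 * (1 - t) * t * control[0] + t ** 2 * target[0]
        y = (1 - t) ** 2 * start[1] + 2 * (1 - t) * t * control[1] + t ** 2 * target[1]
        path.append((x, y))
    return path
```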
In some examples, the computer-generated bird 306 may be displayed to move along the calculated dynamic path 310 according to computer-generated image movement rules that govern movement of computer-generated images rendered in real-time in the movie 206. It should also be understood that this example of a guide element in the form of a bird 306 is provided for illustrative purposes, and that many other types, forms, and examples of guide elements may be utilized and are within the scope of the present disclosure. For example and with respect to movie 206, other computer-generated guide elements that may be utilized include, but are not limited to: floating leaves, people, cars, or any other suitable guide element.
In the example shown in FIG. 3, the predetermined area 314 proximate to the gaze location 308 of the viewer Alex 202 is a circle having a radius R concentric with the gaze location. The radius R may be determined in any suitable manner and may be based on one or more of the following criteria, for example: a distance from the wall-mounted display 210 to the viewer Alex 202, a size of the wall-mounted display 210, a size of one or more elements displayed on the wall-mounted display, an accuracy of the gaze tracking system 54, or any other suitable criteria. In various examples, the radius R may have a length of approximately 0.5 mm, 1.0 mm, 5.0 mm, 10.0 mm, 50.0 mm, 100.0 mm, or any other suitable distance. It should be understood that any other suitable shape and/or configuration of the predetermined area proximate to the viewer's gaze location may be used and is within the scope of the present disclosure.
With continued reference to FIG. 3, as the bird 306 travels along the calculated dynamic path 310, the gaze attraction program 46 may determine whether the gaze of the viewer Alex 202 follows the flight of the bird. In one example, after the bird 306 passes the gaze location 308, the gaze attraction program 46 may determine whether the updated gaze location 308' of the viewer Alex 202 is within a predetermined deviation threshold of the bird 306.
In one example and as shown in FIG. 3, when the updated gaze location overlaps at least a portion of the bird 306, it may be determined that the updated gaze location 308' of the viewer Alex 202 is within a predetermined deviation threshold of the bird 306. In another example, when the gaze location is within a predetermined distance from (and does not necessarily overlap) the bird, the updated gaze location 308' of the viewer Alex 202 may be determined to be within a predetermined deviation threshold of the bird 306.
For example and as shown in FIG. 3, the predetermined deviation threshold 318 may comprise a circle having a radius T concentric with the updated gaze location 308'. As discussed above with respect to the radius R, the radius T may be determined in any suitable manner and may be based on one or more of the following criteria, for example: a distance from the wall-mounted display 210 to the viewer Alex 202, a size of the wall-mounted display, a size of the bird 306 and/or one or more other elements displayed on the wall-mounted display, an accuracy of the gaze tracking system 54, or any other suitable criteria. In various examples, the radius T may have a length of approximately 0.5 mm, 1.0 mm, 5.0 mm, 10.0 mm, 50.0 mm, 100.0 mm, or any other suitable length. It should also be understood that any other suitable shape and/or configuration of the predetermined deviation threshold may be used and is within the scope of the present disclosure.
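The two variants just described (overlap with the guide element, or proximity within the radius T) could be tested as in the following sketch; the rectangular bounding-box representation of the bird and the function names are assumptions made for illustration.

```python
import math

def gaze_within_deviation_threshold(gaze, guide_center, guide_half_size, radius_t):
    """Return True if the updated gaze location either overlaps the guide
    element (approximated here by an axis-aligned bounding box) or lies
    within distance T of the guide element's center."""
    gx, gy = gaze
    cx, cy = guide_center
    hw, hh = guide_half_size
    overlaps = abs(gx - cx) <= hw and abs(gy - cy) <= hh       # first variant
    within_radius = math.hypot(gx - cx, gy - cy) <= radius_t   # second variant
    return overlaps or within_radius
```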
In the event that the updated gaze location 308' is within the predetermined deviation threshold of the bird 306, the gaze attraction program 46 may control the wall-mounted display 210 to continue to display the movement of the bird 306 toward the coffee house 302 along the calculated dynamic path 310. Advantageously and in this manner, the gaze of the viewer Alex 202 may be directed to the coffee house 302, thereby increasing Alex's awareness of the coffee house. Additionally and as described in more detail below, in some examples, the gaze attraction program 46 may also determine that the gaze location of the viewer Alex overlaps the coffee house 302 and, in response, may assign an advertising consumption charge to the coffee house.
In another example, the gaze attraction program 46 may determine that the gaze location of the viewer Alex 202 deviates from the bird 306 by at least the predetermined deviation threshold 318. Expressed differently and with respect to the example of FIG. 3, the gaze attraction program 46 may determine that the gaze location of the viewer Alex 202 is outside the circle indicated at 318. For example, the gaze attraction program 46 may determine that the viewer Alex 202 has returned his gaze to the gaze location 308. In this example, the gaze attraction program 46 may control the wall-mounted display 210 to not continue to display the bird 306 moving toward the coffee house 302 along the calculated dynamic path 310.
In another example, the gaze attraction program 46 may monitor and track a gaze trajectory 312 of the gaze location of the viewer Alex. In this example, when the gaze trajectory 312 is within a path deviation threshold 316 of the calculated dynamic path 310, the gaze location of the viewer Alex 202 may be determined to be within the predetermined deviation threshold of the bird 306. For example, after the bird 306 passes the predetermined area 314 proximate to Alex's gaze location 308, the viewer Alex 202 may initially follow the bird's flight with his gaze such that his gaze trajectory 312 is within the path deviation threshold 316.
As shown in FIG. 3, in this example, the path deviation threshold 316 is the distance between the gaze trajectory 312 and the calculated dynamic path 310 at a given time. In various examples, the path deviation threshold 316 may have a length of approximately 0.5 mm, 1.0 mm, 5.0 mm, 10.0 mm, 50.0 mm, 100.0 mm, or any other suitable length. It should also be understood that any other suitable comparison between the gaze trajectory 312 and the calculated dynamic path 310 may be used to determine whether the gaze of the viewer Alex is following the bird 306, and is within the scope of the present disclosure.
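A minimal sketch of this comparison follows, assuming the gaze trajectory and the dynamic path are sampled at the same time steps in display coordinates; the sampling assumption and function names are illustrative.

```python
import math

def path_deviations(gaze_trajectory, dynamic_path):
    """Per-sample distance between the gaze trajectory and the calculated
    dynamic path, assuming both are lists of (x, y) points sampled at the
    same times."""
    return [math.hypot(gx - px, gy - py)
            for (gx, gy), (px, py) in zip(gaze_trajectory, dynamic_path)]

def gaze_follows_path(gaze_trajectory, dynamic_path, path_deviation_threshold):
    """True while every sampled deviation stays below the path deviation
    threshold; once any sample deviates by at least the threshold, the guide
    element would no longer be displayed moving along the path."""
    return all(d < path_deviation_threshold
               for d in path_deviations(gaze_trajectory, dynamic_path))
```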
As discussed above, if the gaze attraction program 46 determines that the gaze trajectory 312 is within the path deviation threshold 316, the program may control the wall-mounted display 210 to continue to display the movement of the bird 306 toward the coffee house 302 along the calculated dynamic path 310. In another example, the gaze attraction program 46 may determine that the gaze trajectory 312 of the viewer Alex 202 deviates from the calculated dynamic path 310 by at least the path deviation threshold 316. For example, the gaze attraction program 46 may determine that the viewer Alex 202 has shifted his gaze to the gaze location 308" indicated by the gaze trajectory 312. In this example, the gaze attraction program 46 may control the wall-mounted display 210 to not continue to display the bird 306 moving toward the coffee house 302 along the calculated dynamic path 310.
In one example, the gaze attraction program 46 may discontinue displaying the movement of the bird 306 along the calculated dynamic path 310 by diverting the bird to move along an alternative path 322 that does not lead to the coffee house 302. In this way, the gaze attraction program 46 may avoid a situation in which the guide element continues traveling all the way to the target object even though the viewer is not gazing at the guide element. In other examples, the gaze attraction program 46 may discontinue displaying the bird 306 moving along the calculated dynamic path by no longer displaying the guide element. For example, the gaze attraction program 46 may cause the guide element to disappear from the display.
In another example, the calculated dynamic path 310 may be programmatically adjusted based on a change in the gaze location of the viewer Alex 202. For example, as the viewer Alex 202 shifts his gaze to the updated gaze location 308" at the bystander 320, the gaze attraction program 46 may programmatically adjust the calculated dynamic path to pass inside a predetermined area proximate to the updated gaze location 308" and then proceed to the coffee house 302.
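The following piecewise-linear sketch illustrates one way such an adjustment might be computed, again in assumed 2D display coordinates; routing the path through a waypoint just inside the area around the updated gaze location is an illustrative choice, not a requirement of the disclosure.

```python
def adjusted_dynamic_path(guide_position, updated_gaze, target, radius_r, samples=50):
    """Recompute the path from the guide element's current position so it
    first enters the predetermined area around the updated gaze location and
    then proceeds to the target object.  Two straight legs are used here for
    simplicity; a smooth curve could be substituted."""
    # Waypoint placed halfway between the updated gaze location and the edge
    # of its predetermined area, offset toward the target.
    dx, dy = target[0] - updated_gaze[0], target[1] - updated_gaze[1]
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    waypoint = (updated_gaze[0] + 0.5 * radius_r * dx / norm,
                updated_gaze[1] + 0.5 * radius_r * dy / norm)
    path = []
    for (x0, y0), (x1, y1) in [(guide_position, waypoint), (waypoint, target)]:
        for i in range(samples):
            t = i / samples
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    path.append(target)
    return path
```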
In another example and referring now to FIGS. 1 and 2, the viewer Mary 230 may play a computer game 234 on her tablet computer 238. The tablet computer 238 may include a display 242, the gaze tracking system 54, and the gaze attraction program 46. Referring now to FIG. 4, in one example, the computer game 234 comprises a baseball game 400, the baseball game 400 including a player character in the form of a batter 404 that may be controlled by the viewer Mary 230, and non-player characters in the form of a pitcher 408 and outfielders 412 and 414 that move according to non-player character movement rules of the baseball game. The baseball game 400 may also include at least one object that moves according to object movement rules of the baseball game. In this example, the object may include a baseball 420.
In this example, the guide elements may include the pitcher 408, the outfielder 412, and/or the baseball 420. For example, the gaze attraction program 46 may control the display 242 of the tablet computer 238 to display the outfielder 412 moving along a calculated dynamic path 430 according to the non-player character movement rules. The calculated dynamic path 430 leads to an advertisement 434 that is located on the outfield wall and encourages the viewer Mary 230 to "Eat at Cafe A". The gaze location 450 of the viewer Mary 230 on the display may be determined and monitored as described hereinabove. Movement of the outfielder 412 along the calculated dynamic path 430 may also be controlled as described above.
In some examples, Cafe A may pay a promotional fee to have its advertisement 434 displayed in the baseball game 400. In one example, if the gaze attraction program 46 determines that the gaze location 450 of the viewer Mary 230 overlaps the advertisement 434, an advertising consumption charge may be assigned to Cafe A. Advantageously, in this manner, an advertiser pays an impression-based advertising consumption charge that is directly tied to an actual impression of the advertiser's advertisement by the viewer.
In another example, the promotional fee paid by Cafe A for its advertisement 434 may be based at least in part on the period of time during which the viewer gazes at the advertisement. For example, in the event that the gaze location 450 of the viewer Mary 230 overlaps the advertisement 434 for less than a predetermined period of time, a first advertising consumption charge may be assigned to Cafe A. In the event that the gaze location 450 of the viewer Mary 230 overlaps the advertisement 434 for at least the predetermined period of time, a second advertising consumption charge that is higher than the first advertising consumption charge may be assigned to Cafe A. In some examples, the predetermined period of time may be 0.5 seconds, 1.0 seconds, 2.0 seconds, 5.0 seconds, or any other suitable period of time. Likewise, any suitable amounts may be utilized for the first advertising consumption charge and the second advertising consumption charge.
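As a sketch of this tiered charging logic (the particular threshold and charge amounts below are placeholders, not values from the disclosure):

```python
def advertising_consumption_charge(gaze_dwell_seconds,
                                   predetermined_period=2.0,
                                   first_charge=0.01,
                                   second_charge=0.05):
    """Return the charge to assign to the advertiser based on how long the
    viewer's gaze location overlapped the advertisement.

    No overlap -> no charge; overlap shorter than the predetermined period ->
    the first charge; overlap of at least the predetermined period -> the
    higher second charge.  All values here are illustrative placeholders.
    """
    if gaze_dwell_seconds <= 0:
        return 0.0
    if gaze_dwell_seconds < predetermined_period:
        return first_charge
    return second_charge
```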
Referring now to FIG. 5, in some examples, the target object may comprise an advertisement displayed on a web page. For example, the viewer Mary 230 may view a web page on her tablet computer 238 that includes a touch-sensitive display 242. A guide element in the form of a striped comet 504 may be displayed moving along a calculated dynamic path 506 that passes inside a predetermined area 508 proximate to a gaze location 510 of the viewer Mary 230. The calculated dynamic path may lead to a target object in the form of a selectable advertisement 512 from Cafe A announcing a buy-one-get-one-free pizza promotion.
In one example, the viewer Mary 230 may provide viewer input associated with the selectable advertisement 512 by touching the screen of the display 242. The location of her touch selection may be interpreted by the touch-sensitive screen as a touch location point 514 that overlaps a portion of the advertisement 512. Thus, this viewer input selecting the advertisement 512 may trigger an advertising consumption charge that may be assigned to Cafe A.
However, in some examples, the viewer Mary 230 may not intend to select the advertisement 512. For example, the viewer Mary 230 may have larger-than-average fingers and may instead want to select the "click here" selectable button 520. The touch detection system of the touch-sensitive display 242 may have incorrectly interpreted her intended touch location as being at the point 514. To address this possibility, when the touch input of the viewer Mary 230 is received and associated with the advertisement 512, the gaze attraction program 46 may determine whether the gaze location of the viewer Mary 230 overlaps the advertisement.
In one example, if the gaze location 524 of the viewer Mary 230 overlaps the advertisement 512 when the viewer input is received, an advertising consumption charge is assigned to Cafe A. On the other hand, if the gaze location 530 of the viewer Mary 230 does not overlap the advertisement 512 when the viewer input is received, the advertising consumption charge is cancelled. Advantageously, in this manner, inadvertent or unintentional selections of selectable advertisements or other elements on a web page may be identified, and corresponding erroneous advertising consumption charges and/or other unintended operations may be avoided.
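A compact sketch of this gaze-confirmed selection check follows; the rectangle representation of the advertisement and the function name are illustrative assumptions.

```python
def charge_for_touch_selection(touch_point, gaze_location, ad_rect):
    """Assign the advertising consumption charge only when the touch point is
    associated with the advertisement AND the viewer's gaze location overlaps
    the advertisement at the moment the touch input is received; otherwise
    treat the touch as inadvertent and cancel the charge."""
    def inside(point, rect):
        x, y = point
        left, top, right, bottom = rect
        return left <= x <= right and top <= y <= bottom

    return inside(touch_point, ad_rect) and inside(gaze_location, ad_rect)
```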
In another example and referring again to FIG. 2, a viewer Wally 250 is wearing an HMD device in the form of a pair of glasses 254. The viewer Wally 250 participates, via the HMD glasses 254, in a mixed reality experience that includes a holographic wizard 260 and a guide element in the form of a floating ball 264, all displayed by the glasses.
In one example, a developer of the mixed reality experience may desire to draw the attention of the viewer Wally 250 to a holographic coffee house advertisement 270 displayed in the room 200. As described above, the gaze attraction program 46 of the HMD glasses 254 may be configured to display a guide element. In this example, the guide element comprises the floating ball 264. The gaze attraction program 46 may use gaze tracking data from the gaze tracking system 54 of the HMD glasses 254 to monitor the gaze location of the viewer Wally 250.
The floating ball 264 may be displayed moving along a calculated dynamic path 274 leading to the coffee house advertisement 270. As described above, the calculated dynamic path 274 may pass inside a predetermined area 278 proximate to the gaze location 282 of the viewer Wally 250. As the floating ball travels along the calculated dynamic path 274, the gaze attraction program 46 may determine whether the gaze location of the viewer Wally 250 is within a predetermined deviation threshold of the ball. In one example, after the floating ball 264 passes the gaze location 282, the gaze attraction program 46 determines that the updated gaze location 282' of the viewer Wally 250 is at the ball and thus within the predetermined deviation threshold. Accordingly, the gaze attraction program 46 may control the HMD glasses 254 to continue to display the floating ball 264 moving along the calculated dynamic path 274 toward the coffee house advertisement 270.
In other examples and as described above, the gaze attraction program 46 may also determine that another gaze location 282" of the viewer Wally 250 overlaps the coffee house advertisement 270 and, in response, may assign an advertising consumption charge to the coffee house.
In another example, the gaze attraction program 46 may determine that the gaze location of the viewer Wally 250 deviates from the floating ball 264 by at least the predetermined deviation threshold. In this example, the gaze attraction program 46 may control the HMD glasses 254 to not continue to display the floating ball 264 moving along the calculated dynamic path 274 toward the coffee house advertisement 270. For example, the HMD glasses 254 may divert the floating ball 264 to move along an alternative path 286 that does not overlap the advertisement 270. In other examples, the HMD glasses 254 may stop displaying the floating ball 264.
Referring now to fig. 6, one example of an HMD device 600 in the form of a pair of wearable glasses with a transparent display is provided. It should be understood that in other examples, the HMD device 600 may take other suitable forms in which a transparent, translucent, and/or non-transparent display is supported in front of one or both eyes of the user. It should be understood that the HMD device shown in fig. 1 and 2 may take the form of HMD device 600, as described in detail below, or any other suitable HMD device.
The HMD device 600 includes a display system 602 and a see-through or transparent display 604 that enables images (e.g., holographic objects) to be delivered to the eyes of a wearer of the HMD device. The transparent display 604 may be configured to visually augment the appearance of the real-world, physical environment to a wearer viewing the physical environment through the transparent display. For example, the appearance of the physical environment may be enhanced by graphical content (e.g., one or more pixels each having a respective color and brightness) presented via the transparent display 604 to create an augmented reality environment.
The transparent display 604 may also be configured to enable a wearer of the HMD device to view a physical, real-world object in the physical environment through one or more partially transparent pixels that are displaying a virtual object representation. As shown in FIG. 6, in one example, the transparent display 604 may include an image-producing element (e.g., a see-through Organic Light Emitting Diode (OLED) display) located within a lens 606. As another example, the transparent display 604 may include a light modulator on an edge of the lens 606. In this example, the lens 606 may act as a light guide for delivering light from the light modulator to the eye of the wearer. Such a light guide may enable the wearer to perceive a 3D holographic image located within the physical environment the wearer is viewing, while also allowing the wearer to view physical objects in the physical environment, thus creating an augmented reality environment.
The HMD device 600 may also include various sensors and related systems. For example, the HMD device 600 may include a gaze tracking system 608, the gaze tracking system 608 including one or more image sensors configured to obtain image data in the form of gaze tracking data from the wearer's eyes. The gaze tracking system 608 may use this information to track the position and movement of the wearer's eyes, provided the wearer has agreed to the acquisition and use of this data.
In one example, the gaze tracking system 608 includes a gaze detection subsystem configured to detect a gaze direction of each eye of the wearer. The gaze detection subsystem may be configured to determine the gaze direction of each of the wearer's eyes in any suitable manner. For example, the gaze detection subsystem may include one or more light sources (e.g., infrared light sources) configured to cause a glint of light to reflect from the cornea of each eye of the wearer. One or more image sensors may then be configured to capture images of the wearer's eyes.
The locations of the pupils and of the glints, as determined from image data collected by the image sensors, may be used to determine an optical axis of each eye. Using this information, the gaze tracking system 608 may then determine the direction in which the wearer is gazing. The gaze tracking system 608 may additionally or alternatively determine which physical or virtual object the wearer is gazing at, and where on that physical or virtual object the wearer is gazing. Such gaze tracking data may then be provided to the HMD device 600.
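For context, one common way to turn pupil and glint positions into a gaze estimate is the pupil-center corneal-reflection (PCCR) approach sketched below; this is a general illustration of that approach, not necessarily the method used by the HMD device 600, and the calibration gains are hypothetical.

```python
def pccr_gaze_offset(pupil_center, glint_center, gain_x=1.0, gain_y=1.0):
    """Pupil-center corneal-reflection sketch: the vector from the corneal
    glint to the pupil center, scaled by per-user calibration gains, gives a
    rough gaze offset in image coordinates.  Practical systems map this
    vector to display coordinates with a calibration polynomial fitted while
    the wearer looks at known targets."""
    dx = pupil_center[0] - glint_center[0]
    dy = pupil_center[1] - glint_center[1]
    return (gain_x * dx, gain_y * dy)
```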
It should also be appreciated that the gaze tracking system 608 may have any suitable number and arrangement of light sources and image sensors. For example and referring to fig. 6, the gaze tracking system 608 of the HMD device 600 may utilize at least one inwardly facing sensor 610.
The HMD device 600 may also include a sensor system that receives physical environment data from the physical environment. For example, the HMD device 600 may also include a head tracking system 612 that utilizes one or more pose sensors (e.g., pose sensor 614 on HMD device 600) to capture head pose data and thereby support position tracking, direction/position and orientation sensing, and/or motion detection of the wearer's head.
In one example, the head tracking system 612 may include an Inertial Measurement Unit (IMU) configured as a three-axis or three-degree-of-freedom position sensor system. For example, the example position sensor system may include three gyroscopes to indicate or measure changes in orientation of the HMD device 600 within the 3D space about three orthogonal axes (e.g., x, y, and z or roll, pitch, and yaw). In some examples, the orientation derived from the sensor signals of the IMU may be used to display one or more virtual objects having body-locked positions via the transparent display 604, where the position of each virtual object appears to be fixed relative to the wearer of the see-through display and the position of each virtual object appears to be movable relative to real objects in the physical environment.
In another example, the head tracking system 612 may include an IMU configured as a six-axis or six degree-of-freedom position sensor system. For example, the example position sensor system may include three accelerometers and three gyroscopes to indicate or measure changes in the position of the HMD device 600 along the three orthogonal axes, as well as changes in the orientation of the device about the three orthogonal axes.
The head tracking system 612 may also support other suitable positioning technologies, such as GPS or other global positioning systems. Further, while specific examples of position sensor systems have been described, it should be understood that any other suitable position sensor system may be used. For example, head pose and/or motion data may be determined based on sensor information from any combination of sensors mounted on and/or external to the wearer, including, but not limited to, any number of gyroscopes, accelerometers, inertial measurement units, GPS devices, barometers, magnetometers, cameras (e.g., visible light cameras, infrared light cameras, time-of-flight depth cameras, structured light depth cameras, etc.), communication devices (e.g., WiFi antennas/interfaces), and so forth.
In some examples, the HMD device 600 may also include an optical sensor system that utilizes one or more externally facing sensors (e.g., optical sensor 616 on the HMD device 600) to capture image data. An externally facing sensor may detect motion within its field of view, for example, gesture-based input, or other motion performed by the wearer or by a person or physical object within the field of view. The externally facing sensors may also capture 2D image information and depth information from the physical environment and physical objects within the environment. For example, the externally facing sensors may include a depth camera, a visible light camera, an infrared light camera, and/or a position tracking camera.
The optical sensor system may include a depth tracking system that generates depth tracking data via one or more depth cameras. In one example, each depth camera may include a left camera and a right camera of a stereo vision system. Time-resolved images from one or more of these depth cameras may be registered with each other and/or with images from another optical sensor (e.g., a visible spectrum camera), and may be combined to yield depth-resolved video.
In other examples, a structured light depth camera may be configured to project structured infrared illumination and to image the illumination reflected from a scene onto which the illumination is projected. A depth map of the scene may be constructed based on the spacing between adjacent features in various regions of the imaged scene. In still other examples, the depth camera may take the form of a time-of-flight depth camera configured to project pulsed infrared illumination onto a scene and detect the illumination reflected from the scene. For example, the illumination may be provided by an infrared light source 618. It should be understood that any other suitable depth camera may be used within the scope of the present disclosure.
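As a brief aside on the time-of-flight principle just mentioned, depth follows from the round-trip travel time of the light pulse; the helper below simply evaluates that relationship and is not part of the disclosure.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def time_of_flight_depth_m(round_trip_seconds):
    """Depth (in meters) from a time-of-flight measurement: the pulse travels
    to the scene and back, so depth is half the round-trip distance."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# Example: a 20 ns round trip corresponds to roughly 3 m of depth.
# time_of_flight_depth_m(20e-9) -> ~2.998
```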
The externally facing sensor may capture images of the physical environment in which the wearer of the HMD device is located. With respect to the HMD device 600, in one example, the augmented reality display program may include a 3D modeling system that models the environment surrounding the wearer of the HMD device, the 3D modeling system using such captured images to generate a virtual environment. In some examples, the optical sensor 616 may cooperate with the IMU to determine the position and orientation of the HMD device 600 in six degrees of freedom. Such position and orientation information may be used to display one or more virtual objects having world-locked positions via the transparent display 604, where the position of each virtual object appears to be fixed relative to real-world objects that may be seen through the transparent display, and the position of each virtual object appears to be movable relative to the wearer of the see-through display.
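To make the body-locked versus world-locked distinction concrete, the sketch below shows one conventional way such poses could be composed from a tracked head pose; the rotation convention and function names are assumptions for illustration.

```python
import numpy as np

def rotation_from_yaw_pitch_roll(yaw, pitch, roll):
    """3x3 rotation matrix from yaw (about z), pitch (about y), roll (about x),
    all in radians -- one common convention, assumed here for illustration."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return rz @ ry @ rx

def body_locked_position(head_position, head_rotation, offset_in_head_frame):
    """A body-locked hologram keeps a fixed offset in the wearer's head frame,
    so its world position moves with the head pose."""
    return np.asarray(head_position) + head_rotation @ np.asarray(offset_in_head_frame)

def world_locked_position(anchor_position):
    """A world-locked hologram keeps a fixed world position regardless of the
    wearer's head pose."""
    return np.asarray(anchor_position)
```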
The HMD device 600 also includes a microphone system that includes one or more microphones, such as microphone 620, that capture audio data. Audio may also be presented to the wearer via one or more speakers (e.g., speakers 622 on the HMD device 600).
The HMD device 600 may also include a controller, such as controller 624. The controller 624 may include a logic subsystem and a storage subsystem, as discussed in more detail below with respect to fig. 8, that communicate with various sensors and systems of the HMD device 600. In one example, the storage subsystem may include instructions executable by the logic subsystem to receive signal inputs from the sensors, determine a pose of the HMD device 600, and adjust display properties of content displayed via the transparent display 604.
FIGS. 7A and 7B illustrate a flow diagram of a method 700 for attracting a gaze of a viewer of a display device in accordance with an implementation of the present disclosure. The following description of method 700 is provided with respect to the software and hardware components described above and illustrated in FIGS. 1-6. It should be understood that method 700 may also be performed in other contexts using other suitable hardware and software components.
Referring to fig. 7A, at 704, method 700 may include controlling a display device to display a target object. At 708, method 700 may include monitoring a gaze location of a viewer using gaze tracking data from a gaze tracking system. At 712, the method 700 may include controlling a display device to display movement of the guide element along a calculated dynamic path that passes inside a predetermined area proximate to the viewer's gaze location and leads to the target object.
At 716, the method 700 may include using the gaze tracking data to determine whether the gaze location of the viewer is within a predetermined deviation threshold of the guide element. At 720, method 700 may include, if the viewer's gaze location is within the predetermined deviation threshold of the guide element, controlling the display device to continue to display the guide element moving along the calculated dynamic guide path toward the target object. At 724, method 700 may include controlling the display device to not continue to display the guide element moving toward the target object along the calculated dynamic guide path if the viewer's gaze location deviates from the guide element by at least a predetermined deviation threshold.
At 728, not continuing to display the guide element moving along the calculated dynamic guide path toward the target object may include diverting the guide element from the calculated dynamic path. At 732, not continuing to display the guide element moving along the calculated dynamic guide path toward the target object may include ceasing to display the guide element. At 736, the predetermined deviation threshold may comprise a distance from the guide element. At 738, the predetermined deviation threshold may comprise a path deviation threshold relative to the calculated dynamic path, and the method 700 may include determining whether the viewer's gaze trajectory deviates from the calculated dynamic path by at least the path deviation threshold. If the viewer's gaze trajectory deviates from the calculated dynamic path by at least the path deviation threshold, the display device may be controlled to not continue to display movement of the guide element along the calculated dynamic guide path toward the target object, as discussed above.
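Putting steps 704-732 together, a high-level sketch of the method could look like the following; `display`, `gaze_tracker`, and `guide` are hypothetical interfaces introduced only for illustration.

```python
import math

def attract_gaze(display, gaze_tracker, guide, target_object, dynamic_path,
                 deviation_threshold):
    """Sketch of method 700: show the target object (704), monitor the gaze
    location (708), move the guide element along the calculated dynamic path
    (712), and at each step test the deviation threshold (716) to decide
    whether to continue (720) or to divert/stop the guide element (724-732)."""
    display.show(target_object)
    for waypoint in dynamic_path:
        gaze_x, gaze_y = gaze_tracker.current_gaze_location()
        guide_x, guide_y = guide.position
        if math.hypot(gaze_x - guide_x, gaze_y - guide_y) <= deviation_threshold:
            guide.move_to(waypoint)       # gaze still follows: keep going
        else:
            guide.divert_or_hide()        # gaze deviated: stop following the path
            return False
    return True
```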
Referring now to FIG. 7B, at 740, where the method 700 is performed while the viewer is playing a computer game having at least one non-player character that moves according to non-player character movement rules and at least one object that moves according to object movement rules, controlling the display device to display the guide element may further comprise moving the guide element along the calculated dynamic path according to the non-player character movement rules or the object movement rules. At 744, where the method is performed while the viewer is watching a movie, controlling the display device to display the guide element may further comprise moving the guide element along the calculated dynamic path according to computer-generated image movement rules that govern movement of computer-generated images rendered in real-time in the movie.
At 748, where the target object comprises an advertisement from an advertiser displayed on the display device, the method 700 may further include assigning an advertising consumption charge to the advertiser if the viewer's gaze location overlaps the advertisement. At 752, where the advertising consumption charge is a first advertising consumption charge, the method 700 may further include assigning a second advertising consumption charge greater than the first advertising consumption charge if the viewer's gaze location overlaps the advertisement for at least a predetermined period of time.
At 756, where the target object comprises an advertisement from an advertiser displayed on a web page, the method 700 may include receiving a viewer input associated with the advertisement, the viewer input triggering an advertisement consumption charge. At 760, if the viewer's gaze location overlaps the advertisement on the web page when the viewer input is received, the method 700 may include assigning the advertisement consumption charge to the advertiser. At 764, if the viewer's gaze location does not overlap the advertisement on the web page when the viewer input is received, the method 700 may include canceling the advertisement consumption charge. At 768, the display device may comprise a wearable display device including the gaze tracking system.
It should be understood that the method 700 is provided as an example and is not meant to be limiting. Accordingly, it should be understood that method 700 may include additional and/or alternative steps than those shown in fig. 7A and 7B. Further, it should be understood that method 700 may be performed in any suitable order. Further, it should be understood that one or more steps may be omitted from method 700 without departing from the scope of this disclosure.
FIG. 8 schematically illustrates a non-limiting example of a computing system 800 that can perform one or more of the methods and processes described above. Computing devices 10 and 12 may take the form of, or include one or more aspects of, computing system 800. Computing system 800 is shown in simplified form. It should be understood that virtually any computer architecture may be used without departing from the scope of this disclosure. In different examples, computing system 800 may take the form of a mainframe computer, server computer, desktop computer, home entertainment computer, network computing device, tablet computer, notebook computer, smart phone or other mobile computing device, mobile communication device, gaming device, etc.
As shown in fig. 8, computing system 800 includes a logic subsystem 804 and a storage subsystem 808. Computing system 800 may optionally include sensor subsystem 812, display subsystem 816, communication subsystem 820, input subsystem 822, and/or other subsystems and components not shown in fig. 8. Computing system 800 can also include computer-readable media, including computer-readable storage media and computer-readable communication media. Computing system 800 may also optionally include other user input devices such as: a keyboard, a mouse, a game controller, and/or a touch screen. Furthermore, in some embodiments, the methods and processes described herein may be implemented as a computer application, computer service, computer API, computer library, and/or other computer program product in a computing system that includes one or more computers.
Logic subsystem 804 may include one or more physical devices configured to execute one or more instructions. For example, logic subsystem 804 may be configured to execute one or more instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical structures. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result.
The logic subsystem 804 may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic subsystem may be single-core or multi-core, and the programs executed thereon may be configured for parallel processing or distributed processing. The logic subsystem may optionally include individual components distributed throughout two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of the logic subsystem may be virtualized and executed by a remotely accessible networked computing device configured in a cloud computing configuration.
The storage subsystem 808 may include one or more physical, persistent devices configured to hold data and/or instructions executable by the logic subsystem 804 to implement the methods and processes described herein. When such methods and processes are implemented, the state of the storage subsystem 808 may be transformed (e.g., to hold different data).
Storage subsystem 808 may include removable media and/or built-in devices. Storage subsystem 808 may include optical memory devices (e.g., CD, DVD, HD-DVD, blu-ray disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory devices (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The storage subsystem 808 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable.
In some examples, aspects of the logic subsystem 804 and the storage subsystem 808 may be integrated together into one or more common devices through which the functions described herein may be at least partially enacted. Such hardware-logic components may include, for example, field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC) systems, and complex programmable logic devices (CPLDs).
Fig. 8 also illustrates aspects of the storage subsystem 808 in the form of a removable computer-readable storage medium 824, which removable computer-readable storage medium 824 may be used to store data and/or instructions executable to implement the methods and processes described herein. The removable computer-readable storage medium 824 may take the form of a CD, DVD, HD-DVD, blu-ray disc, EEPROM, and/or floppy disk, among others.
It should be understood that the storage subsystem 808 includes one or more physical, persistent devices. In contrast, in some implementations, aspects of the instructions described herein may be propagated in a transient manner by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a limited duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal via a computer readable communication medium.
When included, sensor subsystem 812 may include one or more sensors configured to sense different physical phenomena (e.g., visible light, infrared light, sound, acceleration, orientation, position, etc.) as described above. The sensor subsystem 812 may be configured to provide sensor data to the logic subsystem, for example. Such data may include gaze tracking information, image information, ambient light information, depth information, audio information, location information, motion information, user location information, and/or any other suitable sensor data that may be used to perform the methods and processes described above.
When included, display subsystem 816 may be used to present a visual representation of data held by storage subsystem 808. As the methods and processes described above change the data held by the storage subsystem 808, and thus transform the state of the storage subsystem, the state of display subsystem 816 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 816 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 804 and/or storage subsystem 808 in a shared enclosure, or such display devices may be peripheral display devices.
When included, communication subsystem 820 may be configured to communicatively couple computing system 800 with one or more networks and/or one or more other computing devices. Communication subsystem 820 may include wired and/or wireless communication devices compatible with one or more different communication protocols. By way of non-limiting example, the communication subsystem 820 may be configured to communicate via a wireless telephone network, a wireless local area network, a wired local area network, a wireless wide area network, a wired wide area network, or the like. In some embodiments, the communication subsystem may allow computing system 800 to send and/or receive messages to and/or from other devices via a network such as the internet.
When included, input subsystem 822 may include or interface with one or more sensors or user input devices, such as: a game controller, a gesture input detection device, a voice recognizer, an inertial measurement unit, a keyboard, a mouse, or a touch screen. In some embodiments, the input subsystem 822 may include or interface with selected Natural User Input (NUI) components. Such components may be integrated or peripheral, and the transduction and/or processing of input actions may be processed on-board and/or off-board. Example NUI components may include a microphone for voice and/or speech recognition; infrared, color, stereo and/or depth cameras for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; and an electric field sensing component for assessing brain activity.
The term "program" may be used to describe aspects of computing device 10 and computing device 12 that are implemented to perform one or more particular functions. In some cases, such a program may be instantiated via the logic subsystem 804 executing instructions held by the storage subsystem 808. It should be appreciated that different programs may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same program may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The term "program" means an individual or group of executable files, data files, libraries, drivers, scripts, database records, and the like.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. Thus, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or in some cases omitted. Also, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.