EYEWEAR DEVICE, SYSTEM, AND METHOD FOR OBTAINING EYE IMAGING DATA

TECHNICAL FIELD

[0001] The invention relates to a personal eyewear device, method and system for obtaining eye imaging data of a wearer of the personal eyewear device, and in particular for obtaining eye imaging data including the iris, sclera and cornea using an eye imaging sensor subsystem.
BACKGROUND OF INVENTION
[0002] Many conditions, including physiological conditions, psychological conditions, and diseases, exhibit symptoms or markers that can be detected in the human eye. For example, glaucoma is a common eye condition that can be diagnosed from identifiable markers, which can be confirmed by measuring intraocular pressure, conducting a dilated eye examination and imaging tests, measuring corneal thickness, and inspecting the drainage angle of the eye.
[0003] Conditions such as glaucoma are easier and cheaper to treat, and are more likely to be treated successfully, if they are diagnosed early. However, glaucoma tends to develop slowly over many years, meaning many people do not realise they have it. It is often only picked up during a routine eye test, yet people do not attend eye tests regularly. For these reasons, conditions such as glaucoma are often diagnosed at a much later stage of development, which is detrimental both to the cost of treatment and to its chances of success.
[0004] There is therefore a need to diagnose these types of conditions earlier, as doing so brings many benefits, both to the people afflicted by such conditions and to wider public health.
[0005] However, conventional diagnosis techniques for conditions that exhibit symptoms or markers in the human eye require the use of expensive clinical equipment, and often rely on self-identification by the sufferer, who generally only seeks clinical diagnosis once the condition has already impinged on their life. The clinical-grade equipment is large, fragile, and only used in a clinical setting and by professionals, such as during eye tests. Furthermore, the equipment is usually complex in terms of components and requires a high-power input to function. Eye tests and other consultations with healthcare professionals can be expensive and in high demand. It is thus both costly and impractical to suggest significantly increasing the frequency of healthcare consultations to increase the likelihood of early diagnosis of conditions.
[0006] There is therefore a further need to provide early diagnosis of conditions that is cost-effective and efficient in terms of components and power usage, and the time and expense of healthcare professionals and services.
[0007] It has been appreciated that the aforementioned problems can be solved by providing an inexpensive non-clinical solution that observes symptoms or markers in the human eye at more regular intervals.
SUMMARY OF INVENTION
[0008] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to determine the scope of the claimed subject matter; variants and alternative features which facilitate the working of the invention and/or serve to achieve a substantially similar technical effect should be considered as falling into the scope of the invention disclosed herein.
[0009] In a first aspect, the present disclosure provides a personal eyewear device for obtaining eye tracking data of a wearer of the personal eyewear device, the personal eyewear device comprising: a frame for supporting one or more lenses in front of a left eye and a right eye of a wearer when the personal eyewear device is worn by the wearer; and an electronics module. The electronics module includes: a processor; a memory; a power source for powering the electronics module; and an eye tracking subsystem, wherein the processor and the memory are communicatively coupled with the eye tracking subsystem and wherein the eye tracking subsystem comprises: a first pair of imaging sensors forming a first stereo imaging sensor configured to capture images of a left eye of the wearer; and a second pair of imaging sensors forming a second stereo imaging sensor configured to capture images of a right eye of the wearer. Using stereo imaging sensors in this manner for each eye improves the accuracy of the determination of the spatial location of features of the eyes, which in turn provides more accurate eye tracking data.
[0010] The first pair of imaging sensors may include a first imaging sensor arranged on the frame at a first position and a second imaging sensor arranged on the frame at a second position, wherein the first position and the second position are substantially opposed to each other such that the first imaging sensor and second imaging sensor are configured to face the left eye of the wearer from substantially opposite angles. The second pair of imaging sensors may include a third imaging sensor arranged on the frame at a third position and a fourth imaging sensor arranged on the frame at a fourth position, wherein the third position and the fourth position are substantially opposed to each other such that the third imaging sensor and fourth imaging sensor are configured to face the right eye of the wearer from substantially opposite angles. The first and second position may mirror the third and fourth position.
[0011] The first position and the second position may be in the vicinity of a first rim of the frame, wherein the first rim of the frame is adjacent to the left eye of the wearer when the personal eyewear device is worn. The third position and the fourth position may be in the vicinity of a second rim of the frame, wherein the second rim of the frame is adjacent to the right eye of the wearer when the personal eyewear device is worn. The first rim may be a left rim, and the second rim may be a right rim of the frame.
[0012] The first and second pairs of imaging sensors may be configured to operate synchronously according to a global shutter such that the first and second pairs of imaging sensors are configured to capture simultaneous images. The first and second pairs of imaging sensors may operate according to a global clock signal to capture images at the same moment in time, and the imaging sensors themselves may operate according to a global shutter. This provides the advantage of capturing images from all four imaging sensors at the exact same moment, which ensures that the determination of the spatial location of features of the eyes is accurate and may be performed using data captured at the same time.
[0013] The first and second pairs of imaging sensors may be infrared imaging sensors, each configured to be sensitive to light in the infrared spectral band. This aids determination of the location of the pupil due to less glare and fewer reflections when compared to visible light.
[0014] The eye tracking subsystem may further comprise: a first illumination source for illuminating the left eye of the wearer with light in the infrared spectral band; and a second illumination source for illuminating the right eye of the wearer with light in the infrared spectral band, wherein the first and second illumination sources are communicatively coupled with the processor. Illuminating the eyes using the first and second illumination sources ensures a consistent level of infrared light for detection by the imaging sensors at all times, irrespective of environmental conditions.
[0015] The eye tracking subsystem may further comprise: an accelerometer; and a gyroscope, wherein the accelerometer and the gyroscope are configured to detect an orientation of the personal eyewear device, wherein the accelerometer and gyroscope are communicatively coupled to the processor. The accelerometer and gyroscope may provide data that may be used to aid processing of the captured images, to ensure eye tracking data derived therefrom is adjusted or normalised according to the orientation of the wearer.
[0016] The personal eyewear device may further comprise a transceiver communicatively coupled to the processor, for transmitting eye tracking data to a user terminal device. The transceiver may be any suitable device or mechanism for transmitting and receiving data, and may be wired or wireless. The eye tracking data may be stored on the memory of the personal eyewear device until it is transmitted via the transceiver.
[0017] The personal eyewear device may further comprise a photovoltaic cell disposed on the frame for charging and/or powering the power source.
[0018] According to a second aspect, the present disclosure provides a system for obtaining eye tracking data of a wearer of a personal eyewear device, the system comprising: the personal eyewear device of the first aspect of this disclosure set out above; a user device configured to communicate with the personal eyewear device and to receive data including eye tracking data of a wearer from the personal eyewear device; and a data processing module, configured to further process the eye tracking data of the wearer to determine a status of the wearer. The status may be a condition, such as a health-related condition, a physiological condition, a neurological condition, or a psychological condition, for example. The status may also or alternatively be an activity the wearer is undertaking, such as watching a display or reading text, for example.
[0019] The data processing module may be external to the user device, wherein the user device is configured to communicate with the data processing module, and wherein the data processing module is configured to receive the eye tracking data of the wearer from the user device. The data processing module may be a computing device, a server, or a plurality of servers for example. The data processing module and the user device communicate to send and receive the eye tracking data, via any suitable communication module, which may be wired or wireless or over a network such as the internet.
[0020] According to a third aspect, the present disclosure provides a method of obtaining eye tracking data of a wearer of the personal eyewear device of the first aspect of this disclosure set out above, wherein the method comprises: simultaneously capturing, with the first pair of imaging sensors, a first pair of images of a left eye of the wearer, and with the second pair of imaging sensors, a second pair of images of a right eye of the wearer. The first and second pair of images may form part of eye tracking data, but may also be processed to form additional parts of eye tracking data. The processing may be performed by the processor on the personal eyewear device.
[0021] The method may further comprise a first processing step including: processing, using a stereo computer vision process, the first pair of images to obtain first three-dimensional information data of the left eye of the wearer; processing, using the stereo computer vision process, the second pair of images to obtain second three-dimensional information data of the right eye of the wearer; and recording the first and second three-dimensional information data as part of the eye tracking data. The three-dimensional information may refer to depth information combined with the image information from the captured images, which is generated as part of the stereo computer vision process.
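The stereo computer vision process of the first processing step can be illustrated with a standard triangulation step. The sketch below is a minimal illustration only, assuming a rectified image pair from one stereo imaging sensor, with an illustrative focal length (in pixels), baseline (in metres) and principal point; none of these values are specified by the disclosure.

    import numpy as np

    def triangulate_point(x_left, x_right, y, focal_px, baseline_m, cx, cy):
        """Recover a 3D point, in the stereo rig frame, from a feature matched
        between a rectified left/right image pair of one eye."""
        disparity = x_left - x_right            # horizontal pixel shift between views
        if disparity <= 0:
            raise ValueError("non-positive disparity: point cannot be triangulated")
        z = focal_px * baseline_m / disparity   # depth from disparity
        x = (x_left - cx) * z / focal_px        # back-project to metric coordinates
        y_m = (y - cy) * z / focal_px
        return np.array([x, y_m, z])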
[0022] The method may further comprise a second processing step including: identifying a first location of a first biological feature of the left eye of the wearer in the first three-dimensional information data; and identifying a second location of a second biological feature of the right eye of the wearer in the second three-dimensional information data. The first and second biological features may be the centre of the pupil of each eye, and the first and second location may be determined in terms of a pixel location from the combined captured images from each pair of imaging sensors. The determination of the pupil as a feature of the eye may include the use of image processing methods, such as feature recognition. The location of the biological feature in each eye may form part of the eye tracking data and may be recorded as such.
[0023] The method may further comprise a third processing step including: determining a left eye gaze vector based on the first location; determining a right eye gaze vector based on the second location; and recording the left and right eye gaze vectors as eye tracking data. The left and right eye gaze vectors indicate a direction of gaze of each eye, based on the identified location of the biological features of the eyes.
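As one illustration of the pupil-localisation of the second processing step, the following sketch assumes the dark-pupil effect in an infrared image, so the pupil appears as the largest dark blob; OpenCV is used as one possible image processing library, and the threshold value is an illustrative assumption.

    import cv2

    def pupil_centre(ir_image, dark_threshold=40):
        """Return the (x, y) pixel location of the pupil centre in a single-channel
        infrared image, or None if no candidate region is found."""
        _, mask = cv2.threshold(ir_image, dark_threshold, 255, cv2.THRESH_BINARY_INV)
        # OpenCV 4.x return convention: (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        pupil = max(contours, key=cv2.contourArea)   # largest dark region
        m = cv2.moments(pupil)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # centroid of the blob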
[0024] The method may further comprise a fourth processing step including: determining a gaze point, by determining a point of minimum distance or convergence between the left eye gaze vector and the right eye gaze vector; and recording the gaze point as eye tracking data. The gaze point indicates the point at which the left and right eye gaze vectors meet, or the point at which they cross at their closest. The gaze point is indicative of the focus of the wearer's gaze.
[0025] The method may further comprise a fifth processing step including: determining a distance from the wearer to the gaze point; and recording the distance from the wearer to the gaze point as eye tracking data. The distance may be determined from the left and right eye gaze vectors and knowledge of the baseline between the eyes of the wearer, for example.
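A worked sketch of the fourth and fifth processing steps is given below, treating the gaze point as the midpoint of the shortest segment between the two gaze rays; it assumes each eye's 3D position and a unit gaze direction vector expressed in a common device frame.

    import numpy as np

    def gaze_point_and_distance(p_left, d_left, p_right, d_right, eps=1e-9):
        """Point of minimum distance between the left and right gaze rays, and the
        viewing distance from the midpoint between the eyes to that gaze point."""
        w0 = p_left - p_right
        a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
        d, e = d_left @ w0, d_right @ w0
        denom = a * c - b * b
        if abs(denom) < eps:                        # (near-)parallel gaze vectors
            return None, None
        s = (b * e - c * d) / denom                 # parameter along the left ray
        t = (a * e - b * d) / denom                 # parameter along the right ray
        closest_left = p_left + s * d_left
        closest_right = p_right + t * d_right
        point = 0.5 * (closest_left + closest_right)          # gaze point
        distance = np.linalg.norm(point - 0.5 * (p_left + p_right))
        return point, distance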
[0026] The method may further comprise: capturing a plurality of consecutive first and second pairs of images; recording a plurality of eye tracking data corresponding to the plurality of consecutive first and second pairs of images; and associating each of the plurality of eye tracking data with timing data indicative of respective times of capture of the plurality of consecutive first and second pairs of images. In this manner the imaging sensors of the eye tracking subsystem may continuously capture images, each of which is assigned timing data, through timestamps or other metadata for example. This enables a continuous stream of captured images to be processed to provide a continuous stream of eye tracking data, which can be chronologically ordered based on the timing data.
[0027] The method may further comprise a sixth processing step including: retrieving gaze points and distances from the wearer to the gaze points from the plurality of eye tracking data; joining the gaze points in a chronological manner, according to their associated timing data, to form a gaze path, wherein the gaze path is associated with the distances from the wearer to the gaze points forming the gaze path; and recording the gaze path as eye tracking data. The gaze path is a three-dimensional gaze path and may be adjusted based on accelerometer and gyroscope data such that it is independent of the orientation of the wearer. In this case the gaze path may be referred to as a global gaze path. The gaze path represents a chronological record of how the wearer's gaze changes spatially over a period of time, which is useful for determining a number of possible statuses.
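A minimal sketch of the gaze path construction is shown below, assuming each eye tracking record is a dictionary with illustrative 'timestamp', 'gaze_point' and 'distance' fields (field names are not taken from the disclosure).

    def build_gaze_path(records):
        """Join gaze points chronologically, keeping the associated viewing distance,
        to form a gaze path as a list of (timestamp, gaze_point, distance) tuples."""
        valid = [r for r in records if r.get("gaze_point") is not None]
        valid.sort(key=lambda r: r["timestamp"])     # chronological order from timing data
        return [(r["timestamp"], r["gaze_point"], r["distance"]) for r in valid]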
[0028] The method may further comprise calibrating the personal eyewear device, wherein calibrating comprises: capturing, in a controlled environment, a first calibration pair of images of a left eye of the wearer, and a second calibration pair of images of a right eye of the wearer, wherein the controlled environment includes a viewable target point at known approximate first and second angles from the left and the right eyes of the wearer respectively, with the wearer looking at the target point; processing the first calibration pair of images to identify a first calibration location of a first calibration biological feature of the left eye of the wearer; processing the second calibration pair of images to identify a second calibration location of a second calibration biological feature of the right eye of the wearer; assigning a first calibration eye gaze vector to the first calibration location based on the known approximate first angle between the target point and the left eye of the wearer; and assigning a second calibration eye gaze vector to the second calibration location based on the known approximate second angle between the target point and the right eye of the wearer, wherein: determining the left eye gaze vector based on the first location comprises determining the left eye gaze vector from the first calibration eye gaze vector and a first difference between the first location and the first calibration location; and determining the right eye gaze vector based on the second location comprises determining the right eye gaze vector from the second calibration eye gaze vector and a second difference between the second location and the second calibration location.
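One possible form of the calibration mapping is sketched below, assuming a simple linear pixels-to-degrees model in which a pixel offset from the calibration location is converted to an angular offset from the calibration gaze angles; the gain terms and the angle convention are illustrative assumptions rather than the specific calibration of the disclosure.

    import numpy as np

    def angles_to_vector(azimuth_deg, elevation_deg):
        """Unit gaze vector from horizontal (azimuth) and vertical (elevation) angles."""
        az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
        return np.array([np.cos(el) * np.sin(az), np.sin(el), np.cos(el) * np.cos(az)])

    def gaze_vector(location, calib_location, calib_angles_deg, gain_x, gain_y):
        """Runtime gaze vector from the difference between the current pupil location
        and the calibration location, relative to the calibration gaze angles."""
        dx = location[0] - calib_location[0]
        dy = location[1] - calib_location[1]
        azimuth = calib_angles_deg[0] + gain_x * dx      # pixel offset -> angular offset
        elevation = calib_angles_deg[1] + gain_y * dy
        return angles_to_vector(azimuth, elevation)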
[0029] The first, second, third, fourth, fifth and/or sixth processing steps may be performed by the processor of the electronics module of the personal eyewear device. In particular, the first processing step may be performed, or the first and second, or the first to third, or the first to fourth, or the first to fifth, or the first to sixth processing steps may be performed on the processor. Performing these steps on the processor allows the personal eyewear device and the processor thereof to use the eye tracking data, which may include any of the: captured images; three dimensional information; location of biological features of the eye; eye gaze vectors; gaze point; and gaze path, for additional purposes, such as to determine when to activate additional subsystems of the personal eyewear device such as a fundus imaging subsystem or an eye or iris imaging subsystem.
[0030] The method may further comprise: using a scheduling module that is coupled to, or forms part of the processor of the electronics module of the personal eyewear device: activating or deactivating one or more additional subsystems of the personal eyewear device by comparing the eye tracking data to respective activation conditions for the one or more additional subsystems. The scheduling module may be an independent device or component to the processor or may be comprised of software executable by the processor itself, for example.
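A minimal sketch of such a scheduling module is shown below, assuming a hypothetical subsystem interface in which each additional subsystem exposes its activation conditions as callables and simple activate/deactivate methods; this interface is illustrative and not defined by the disclosure.

    def schedule(subsystems, eye_tracking_data):
        """Activate or deactivate each additional subsystem by comparing the latest
        eye tracking data against that subsystem's activation conditions."""
        for subsystem in subsystems:
            if all(cond(eye_tracking_data) for cond in subsystem.activation_conditions):
                subsystem.activate()        # e.g. supply power to the imaging sensors
            else:
                subsystem.deactivate()      # e.g. cut power to conserve the battery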
[0031] The method may further comprise performing further processing of the eye tracking data to determine a status of the wearer. The further processing may be performed on a data processing module, such as a server or combination of servers in communication with a user device.
[0032] Performing the further processing of the eye tracking data may comprise: training a machine learning model with a training dataset, the training dataset including a plurality of data entries, the plurality of data entries including training eye tracking data correlated with training status data; inputting, to the trained machine learning model, the eye tracking data; receiving, from the trained machine learning model, an output including status data; and determining the status of the wearer from the output including status data. The machine learning model may comprise one or more machine learning algorithms, and may take inputs from eye tracking data and data from other sensor subsystems.
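A minimal sketch of the training and inference steps is given below using a generic classifier; scikit-learn is one possible choice, and the flattened feature layout, model type and hyperparameters are illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_status_model(training_features, training_status):
        """training_features: (n_samples, n_features) array of eye tracking features,
        e.g. gaze-path statistics; training_status: one status label per sample."""
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(np.asarray(training_features), np.asarray(training_status))
        return model

    def infer_status(model, eye_tracking_features):
        """Return the status predicted for a single eye tracking feature vector."""
        return model.predict(np.asarray(eye_tracking_features).reshape(1, -1))[0]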
[0033] According to a fourth aspect, the present disclosure provides a computer program, which, when executed by a processor, is configured to perform the method of the third aspect of this disclosure as set out above.
[0034] According to a fifth aspect, the present disclosure provides a personal eyewear device for obtaining fundus imaging data of a wearer of the personal eyewear device, the personal eyewear device comprising: a frame for supporting one or more lenses in front of eyes of the wearer when the personal eyewear device is worn by the wearer; an electronics module including: a processor; a memory; a power source for powering the electronics module; an eye tracking subsystem, wherein the eye tracking subsystem is configured to obtain eye tracking data with respect to at least an eye of the wearer of the personal eyewear device; a fundus imaging subsystem comprising: a first imaging sensor configured to obtain first fundus imaging data with respect to a left eye of the wearer; and a second imaging sensor configured to obtain second fundus imaging data with respect to a right eye of the wearer; and a scheduling module, wherein the scheduling module is configured to dynamically activate and deactivate the first imaging sensor and the second imaging sensor based on the eye tracking data. The personal eyewear device of the fifth aspect may include the features of the first aspect as set out above. The capability of activating and deactivating the first and second imaging sensors of the fundus imaging subsystem based on eye tracking data provides power efficiency and improved data quality, which are both very beneficial in the context of the personal eyewear device, in which memory capacity and power are limited by the constraints of size, weight and complexity of the personal eyewear device. Activating and deactivating the first and second imaging sensors of the fundus imaging subsystem may refer to individual components or the entire fundus imaging subsystem, and may refer to providing or stopping power supply to the components or fundus imaging subsystem.
[0035] The first imaging sensor may be arranged on the frame at a first position and is configured to capture first images of a left eye fundus of the wearer when the first imaging sensor is activated; and wherein the second imaging sensor may be arranged on the frame at a second position and is configured to capture second images of a right eye fundus of the wearer when the second imaging sensor is activated.
[0036] The scheduling module may be configured to activate the first and second imaging sensors upon a determination that the eye tracking data meets a first activation condition related to an eye alignment with respect to the first and/or second imaging sensor. The capability of determining an eye alignment as a trigger for activating the first and second imaging sensors ensures that when the first and/or second imaging sensor is activated, there is a greater likelihood that it will obtain good-quality fundus imaging data. In some examples the first and second imaging sensors may be triggered in this manner based on one set of eye tracking data, from one eye tracking sensor, or alternatively may each be triggered by a respective eye tracking sensor for each eye. Using only one eye tracking sensor for triggering the first and second imaging sensors to activate may be power-efficient.
[0037] The scheduling module may be configured to activate the first and second imaging sensors upon the determination that the eye tracking data meets the first activation condition in combination with a second activation condition, wherein the second activation condition is a time condition, wherein the scheduling module is configured to compare timing data against the second activation condition to determine whether the second activation condition is met.
[0038] The time condition of the second activation condition may be a time of day or a time elapsed since previous fundus imaging data was obtained by the fundus imaging subsystem.
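A minimal sketch of the time condition is given below, combining both described forms (an elapsed-time requirement and a time-of-day window); the interval and window values are illustrative assumptions.

    from datetime import datetime, timedelta

    def time_condition_met(last_capture, now=None,
                           min_interval=timedelta(hours=6), window_hours=(8, 20)):
        """Met when enough time has elapsed since the previous fundus capture and the
        current time of day falls inside an allowed window."""
        now = now or datetime.now()
        elapsed_ok = last_capture is None or (now - last_capture) >= min_interval
        time_of_day_ok = window_hours[0] <= now.hour < window_hours[1]
        return elapsed_ok and time_of_day_ok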
[0039] The eye-tracking data may include first eye tracking data corresponding to the left eye of the wearer and second eye-tracking data corresponding to the right eye of the wearer, wherein the scheduling module is configured to activate the first imaging sensor based on the first eye tracking data; and activate the second imaging sensor based on the second eye-tracking data, such that the scheduling module is configured to activate the first imaging sensor independently of the second imaging sensor. Comparing the first activation condition against separate eye tracking data, corresponding to each eye, for each of the first and second imaging sensors ensures that each eye is in the optimal position for fundus imaging before fundus imaging data is obtained through captured images.
[0040] The scheduling module may be configured to dynamically activate and deactivate the fundus imaging subsystem based on the eye tracking data, such that the first imaging sensor and the second imaging sensor are activated and deactivated together. This example is also compatible with independently assessing whether first and second eye tracking data meets the first activation condition for the first and second imaging sensors respectively, as in this case the first and second imaging sensors are activated together once both the first and second eye tracking data satisfy the first activation condition.
[0041] The first imaging sensor and the second imaging sensor may be configured to detect light in the infrared or near-infrared spectral bands.
[0042] The fundus imaging subsystem may further comprise: a first fundus illumination source configured to illuminate the fundus of the left eye of the wearer with infrared or near-infrared light; and a second fundus illumination source configured to illuminate the fundus of the right eye of the wearer with infrared or near-infrared light.
[0043] The first fundus illumination source may be arranged on the frame at a first adjacent position to the first imaging sensor, and wherein the second fundus illumination source is arranged on the frame at a second adjacent position to the second imaging sensor. This arrangement ensures that illumination of the fundus is maximized from the perspective of the first and second imaging sensors.
[0044] The scheduling module may be configured to dynamically activate and deactivate the first fundus illumination source with the first imaging sensor and the second fundus illumination source with the second imaging sensor based on the eye tracking data. Only turning the illumination sources on when the first and second imaging sensors are active is power-efficient.
[0045] The first and second imaging sensors may include a monochromatic camera, a multispectral camera or a hyperspectral camera.
[0046] The eye-tracking subsystem may comprise at least an eye tracking imaging sensor configured to obtain the eye tracking data, wherein the eye tracking imaging sensor has a lower resolution and/or a lower power consumption than each of the first and second imaging sensors when the first and second imaging sensors are activated. The lower-resolution and/or lower power consumption of the eye tracking imaging sensor or sensors allows it to operate continuously or more frequently than the fundus imaging subsystem.
[0047] According to a sixth aspect, the present disclosure provides a system for processing fundus imaging data, the system comprising: the personal eyewear device of the fifth aspect as set out above; a user device configured to communicate with the personal eyewear device and to receive the first and second fundus imaging data from the personal eyewear device; and a data processing module, configured to process the first and second fundus imaging data to determine a status of the wearer. Each of the personal eyewear device and the user device include a transceiver or other communication module for sending and receiving data. This may include the use of a wired or wireless transmitter, using any suitable communication protocol such as Bluetooth, Wi-Fi or the like. The system of the sixth aspect is compatible with the system of the second aspect as set out above.
[0048] The data processing module may be external to the user device, wherein the user device is configured to communicate with the data processing module, and wherein the data processing module is configured to receive the first and second fundus imaging data from the user device. The data processing module may be a server or group of servers, and may also include a communication module for communicating with the user device, for example over a network such as the internet.
[0049] According to a seventh aspect, the present disclosure provides a method of obtaining fundus imaging data of a wearer of a personal eyewear device, the personal eyewear device having: a frame for supporting one or more lenses in front of eyes of the wearer when the personal eyewear device is worn by the wearer; an electronics module including: an eye tracking subsystem; and a fundus imaging subsystem including a first imaging sensor and a second imaging sensor, the method comprising: obtaining, with the eye tracking subsystem, eye tracking data with respect to at least an eye of the wearer of the personal eyewear device; dynamically activating the first imaging sensor based on the eye tracking data such that the first imaging sensor is in an active mode; obtaining, with the first imaging sensor in the active mode, first fundus imaging data with respect to a left eye of the wearer; dynamically activating the second imaging sensor based on the eye tracking data such that the second imaging sensor is in an active mode; and obtaining, with the second imaging sensor in the active mode, second fundus imaging data with respect to a right eye of the wearer.
[0050] Dynamically activating the first and second imaging sensors based on the eye tracking data may further comprise: determining that the eye tracking data meets a first activation condition related to an eye alignment with respect to the first and/or second imaging sensor. The first activation condition may thus be met when eye alignment is achieved with one eye and the first or second imaging sensor or alternatively when both or each eye align with the first and second imaging sensors. Activation may occur for each imaging sensor independently.
[0051] The eye tracking data may include an eye-gaze parameter indicative of an orientation of an eye of the wearer, wherein the determining that the eye tracking data meets the first activation condition includes: determining that the eye-gaze parameter is within a threshold difference of a predicted eye-gaze parameter, wherein the predicted eye-gaze parameter is indicative of the eye alignment with respect to the first and/or second imaging sensor. The eye-gaze parameter may be the location of a biological feature of the eye or eyes such as a pixel location corresponding to the centre of the pupil, or an eye-gaze vector indicative of a direction of gaze of the wearer, for example.
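A minimal sketch of this comparison is given below, assuming the eye-gaze parameter and the predicted eye-gaze parameter are both unit gaze vectors and using an angular threshold that is purely illustrative.

    import numpy as np

    def alignment_condition_met(gaze_vector, predicted_vector, threshold_deg=5.0):
        """True when the gaze direction is within an angular threshold of the direction
        that aligns the eye with the fundus imaging sensor."""
        cos_angle = np.clip(np.dot(gaze_vector, predicted_vector), -1.0, 1.0)
        return np.degrees(np.arccos(cos_angle)) <= threshold_deg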
[0052] Dynamically activating the first and second imaging sensors may further comprise: determining that a second activation condition is met, wherein the second activation condition is a time condition, wherein determining that the second activation condition is met comprises: comparing timing data against the second activation condition to determine whether the second activation condition is met.
[0053] The method may further comprise deactivating the first and second imaging sensors after obtaining the first and second fundus imaging data, such that the first and second imaging sensors are in an inactive mode. In the inactive mode, the first and second imaging sensors may be unpowered or turned off. The first and second imaging sensors may transition to the inactive mode independently of each other, dependent on when they obtain fundus imaging data.
[0054] Deactivating the first and second imaging sensors may comprise: determining that the first and/or second fundus imaging data meets a quality condition related to a quality of the first and/or second fundus imaging data; and if the quality condition is not met: maintaining the first and/or second imaging sensor in the active mode. The quality condition may be any measure of image quality, and may be evaluated against a sample of the fundus imaging data. The quality condition may be compared against independently for the first and second fundus imaging data, such that the first and second imaging sensors are deactivated or maintained independently from each other. Alternatively, the quality condition may be compared against using both the first and second fundus imaging data, such that the first and second imaging sensors are deactivated or maintained together. The timing data, such as an elapsed time, for the second activation condition may be reset after both the first and second imaging sensors are deactivated.
[0055] Dynamically activating the first imaging sensor based on the eye tracking data may be performed together with dynamically activating the second imaging sensor based on the eye tracking data, such that the method comprises dynamically activating the fundus imaging subsystem based on the eye tracking data. Activation may be performed based on eye tracking data from an eye tracking sensor for one eye (thus assuming the eyes are looking in the same direction), which is power-efficient, or activation may be performed based on eye tracking data for both eyes, using two or more eye tracking sensors, which is more accurate.
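A minimal sketch of the quality condition described in the deactivation step above is given below, using image sharpness (variance of the Laplacian) as one illustrative quality measure; the threshold value is an assumption, and OpenCV is one possible library.

    import cv2

    def quality_condition_met(fundus_image, sharpness_threshold=100.0):
        """True when a captured fundus image is sharp enough to keep; if not, the
        corresponding imaging sensor is kept in the active mode to recapture."""
        sharpness = cv2.Laplacian(fundus_image, cv2.CV_64F).var()
        return sharpness >= sharpness_threshold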
[0056] Dynamically activating the first imaging sensor based on the eye tracking data may be performed independently from activating the second imaging sensor based on the eye tracking data. In this case, the timing data for the second activation condition may be recorded for each of the first and second imaging sensor, such that the timing data for the first imaging sensor may be independent of that of the second imaging sensor.
[0057] The method may further comprise performing further processing of the first and/or second fundus imaging data to determine a status of the wearer, by: training a machine learning model with a training dataset, the training dataset including a plurality of data entries, the plurality of data entries including training fundus imaging data correlated with training status data; inputting, to the trained machine learning model, the first and/or second fundus imaging data; receiving, from the trained machine learning model, an output including status data; and determining the status of the wearer from the output including status data.
[0058] According to an eighth aspect, the present disclosure provides a computer program, which, when executed by a processor, is configured to perform the method of the seventh aspect as set out above.
[0059] According to a ninth aspect, the present disclosure provides a personal eyewear device for obtaining eye imaging data of a wearer of the personal eyewear device, the personal eyewear device comprising: a frame for supporting one or more lenses in front of eyes of the wearer when the personal eyewear device is worn by the wearer; an electronics module including: a processor; a memory; a power source for powering the electronics module; an eye tracking subsystem, wherein the eye tracking subsystem is configured to obtain eye tracking data with respect to at least an eye of the wearer of the personal eyewear device; an eye imaging subsystem comprising: a first imaging sensor configured to obtain first eye imaging data with respect to a left eye of the wearer; and a second imaging sensor configured to obtain second eye imaging data with respect to a right eye of the wearer; the personal eyewear device further comprising: a scheduling module, wherein the scheduling module is configured to dynamically activate and deactivate the first imaging sensor and the second imaging sensor based on one or more activation conditions for the eye imaging subsystem, wherein a first activation condition of the one or more activation conditions is based on the eye tracking data. The eye imaging subsystem may be referred to as an iris imaging subsystem and the first and second eye imaging data may include any one or more of the iris, sclera and the cornea of the eyes of the wearer.
[0060] The first imaging sensor may be arranged on the frame and is configured to capture first images of a left eye of the wearer when the first imaging sensor is activated, the first images forming at least a part of the first eye imaging data; and the second imaging sensor may be arranged on the frame and is configured to capture second images of a right eye of the wearer when the second imaging sensor is activated, the second images forming at least a part of the second eye imaging data.
[0061] The scheduling module may be configured to activate the first and second imaging sensors upon a determination that the first activation condition is met, wherein the first activation condition is related to an eye alignment with a first direction, wherein the eye alignment with the first direction is determined from the eye tracking data. The processor may be configured to: record a set of previous eye tracking data, wherein the previous eye-tracking data was captured over a previous period of time; determine a set of second directions, each corresponding to one or more of the set of previous eye tracking data; wherein the first direction is substantially different from the set of second directions. The eye tracking data may include an eye-gaze parameter indicative of an orientation of an eye of the wearer, wherein the eye-gaze parameter may be an eye-gaze vector or a pixel location of a biological feature of the eye, such as the centre of the pupil, for example.
[0062] The eye-tracking subsystem may comprise at least an eye tracking imaging sensor configured to obtain the eye tracking data, wherein the eye tracking imaging sensor has a lower resolution and/or a lower power consumption than each of the first and second imaging sensors when the first and second imaging sensors are activated.
[0063] The eye tracking subsystem may be configured to obtain eye tracking data with respect to a left and a right eye of the wearer, wherein the eye-tracking data includes first eye tracking data corresponding to the left eye of the wearer and second eye-tracking data corresponding to the right eye of the wearer, wherein the scheduling module is configured to: activate the first imaging sensor based on the first eye tracking data; and activate the second imaging sensor based on the second eye-tracking data, such that the scheduling module is configured to determine whether to activate the first imaging sensor independently of the second imaging sensor.
[0064] The scheduling module may be configured to activate the first and second imaging sensors upon a determination that a second activation condition of the one or more activation conditions is met in combination with the first activation condition, wherein the second activation condition is a time condition, wherein the scheduling module is configured to compare timing data against the second activation condition to determine whether the second activation condition is met.
[0065] The time condition of the second activation condition is a time of day or a time elapsed since previous first and/or second eye imaging data was obtained by the eye imaging subsystem.
[0066] The first imaging sensor and the second imaging sensor may be configured to detect light in the visible spectrum, wherein the eye imaging subsystem further comprises: a first ambient light sensor configured to detect an ambient light level in the vicinity of the personal eyewear device, wherein the scheduling module is further configured to activate the first and second imaging sensors based on a determination that a third activation condition of the one or more activation conditions is met in combination with the first activation condition, wherein the third activation condition is an ambient light condition, wherein the scheduling module is configured to compare the ambient light level against the third activation condition to determine whether the third activation condition is met.
[0067] The eye imaging subsystem may further comprise a second ambient light sensor, wherein: the first ambient light sensor is configured to detect a first ambient light level in the vicinity of a left eye of the wearer; the second ambient light sensor is configured to detect a second ambient light level in the vicinity of a right eye of the wearer; the scheduling module is configured to dynamically: activate and deactivate the first imaging sensor based at least in part on the first ambient light level; and activate and deactivate the second imaging sensor based at least in part on the second ambient light level. Alternatively, the first and second imaging sensors may be activated based on a combination of the first and second ambient light levels, such as an average ambient light level.
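A minimal sketch of the ambient light condition is shown below, applied independently per eye; the lux thresholds are illustrative assumptions.

    def ambient_light_condition_met(ambient_lux, minimum_lux=200.0, maximum_lux=10000.0):
        """True when the ambient light level near the corresponding eye falls within a
        range assumed suitable for visible-spectrum eye imaging."""
        return minimum_lux <= ambient_lux <= maximum_lux

    # Applied independently: the first imaging sensor is gated by the first ambient
    # light level and the second imaging sensor by the second ambient light level.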
[0068] The scheduling module may be configured to activate the first and second imaging sensors only when all of the one or more activation conditions are met.
[0069] The scheduling module may be configured to dynamically activate and deactivate the eye imaging subsystem based on the one or more activation conditions, such that the first imaging sensor and the second imaging sensor are activated and deactivated together.
[0070] The first imaging sensor may comprise a first pair of eye imaging cameras arranged at separate locations around a first rim of the personal eyewear device, and wherein the second imaging sensor may comprise a second pair of eye imaging cameras arranged at separate locations around a second rim of the personal eyewear device.
[0071] According to a tenth aspect, the present disclosure provides a system for processing eye imaging data, the system comprising: the personal eyewear device of the ninth aspect as set out above, a user device configured to communicate with the personal eyewear device and to receive the first and second imaging data from the personal eyewear device; and a data processing module, configured to process the first and second imaging data to determine a status of the wearer.
[0072] The data processing module may be external to the user device, wherein the user device is configured to communicate with the data processing module, and wherein the data processing module is configured to receive the first and second imaging data from the user device. Each of the personal eyewear device, the user device, and the data processing module are configured to receive and transmit data via a communication module, which may be any suitable transceiver or the like.
[0073] According to an eleventh aspect, the present disclosure provides a method of obtaining eye imaging data of a wearer of a personal eyewear device, the personal eyewear device having: a frame for supporting one or more lenses in front of eyes of the wearer when the personal eyewear device is worn by the wearer; an electronics module including: an eye imaging subsystem including a first imaging sensor and a second imaging sensor; and an eye tracking subsystem; the method comprising: obtaining, by the eye tracking subsystem, eye tracking data with respect to at least an eye of the wearer of the personal eyewear device; dynamically activating the first imaging sensor and the second imaging sensor based on one or more activation conditions such that the first imaging sensor and the second imaging sensor are in an active mode, wherein a first activation condition of the one or more activation conditions is based on the eye tracking data; obtaining, with the first imaging sensor in the active mode, first eye imaging data with respect to a left eye of the wearer; and obtaining, with the second imaging sensor in the active mode, second eye imaging data with respect to a right eye of the wearer.
[0074] Dynamically activating the first and second imaging sensors based on the one or more activation conditions may further comprise at least one of: determining that a second activation condition of the one or more activation conditions is met, wherein the second activation condition is a time condition; and determining that a third activation condition of the one or more activation conditions is met, wherein the third activation condition is an ambient light condition.
[0075] The method may further comprise deactivating the first and second imaging sensors after obtaining the first and second eye imaging data, such that the first and second imaging sensors are in an inactive mode. In the inactive mode, the imaging sensors may be in a power-off state.
[0076] Deactivating the first and second imaging sensors may comprise: determining that the first and/or second eye imaging data meets a quality condition related to a quality of the first and/or second eye imaging data; and if the quality condition is not met: maintaining the first and/or second imaging sensor in the active mode.
[0077] Dynamically activating the first imaging sensor based on the one or more activation conditions may be performed independently from activating the second imaging sensor based on the one or more activation conditions.
[0078] Obtaining the first eye imaging data may comprise: capturing a first plurality of images of respective portions of the left eye of the wearer including one or more of a left iris, a left cornea, and/or a left sclera; wherein obtaining the second eye imaging data may comprise: capturing a second plurality of images of respective portions of the right eye of the wearer including one or more of a right iris, a right cornea, and/or a right sclera; wherein the method may further comprise: using a photogrammetry process with respect to the first and second plurality of images to obtain first and second three-dimensional models of the left eye and the right eye respectively, wherein the first and second three-dimensional models form at least a part of the first and second imaging data respectively.
[0079] The first imaging sensor may comprise a first pair of eye imaging cameras arranged at separate locations around a first rim of the personal eyewear device, and wherein the second imaging sensor may comprise a second pair of eye imaging cameras arranged at separate locations around a second rim of the personal eyewear device, wherein: capturing the first plurality of images of respective portions of the left eye of the wearer may comprise synchronously capturing the first plurality of images with the first pair of eye imaging cameras; and capturing the second plurality of images of respective portions of the right eye of the wearer may comprise synchronously capturing the second plurality of images with the second pair of eye imaging cameras.
[0080] The method may further comprise performing further processing of the first and/or second eye imaging data to determine a status of the wearer.
[0081] The further processing of the first and/or second eye imaging data to determine the status of the wearer may comprise: training a machine learning model with a training dataset, the training dataset including a plurality of data entries, the plurality of data entries including training eye imaging data correlated with training status data; inputting, to the trained machine learning model, the first and/or second eye imaging data or a part thereof; receiving, from the trained machine learning model, an output including status data; and determining the status of the wearer from the output including status data.
[0082] According to a twelfth aspect, the present disclosure provides a computer program, which, when executed by a processor, is configured to perform the method of the eleventh aspect as set out above.
[0083] According to a thirteenth aspect, the present disclosure provides a method of controlling a sensor subsystem of a personal eyewear device, the method comprising: obtaining, with an eye tracking sensor of the personal eyewear device, eye tracking data corresponding to at least an eye of a wearer of the personal eyewear device; processing, on the personal eyewear device, the eye tracking data to determine an eye-gaze parameter, indicative of a direction of gaze of the at least an eye of the wearer; and activating the sensor subsystem of the personal eyewear device based, at least in part, on the eye-gaze parameter. Activating the sensor subsystem based on the eye-gaze parameter improves the likelihood of obtaining good quality data, since the method can selectively activate or trigger the sensor subsystem when the eye is in the optimal orientation for obtaining sensor data.
[0084] Activating the sensor subsystem of the personal eyewear device based, at least in part, on the eye-gaze parameter may comprise: comparing the eye-gaze parameter to an activation condition for the sensor subsystem; and determining that the eye-gaze parameter meets the activation condition, and upon determining that the eye-gaze parameter meets the activation condition, activating the sensor subsystem accordingly.
[0085] Obtaining the eye tracking data corresponding to the at least an eye of the wearer, processing the eye tracking data to determine the eye-gaze parameter, and activating the sensor subsystem based, at least in part, on the eye-gaze parameter, may occur in real or near-real time on the personal eyewear device. Performing the processing of the eye tracking data in real-time allows the sensor subsystem to be activated and deactivated in real-time, which is beneficial for obtaining data as soon as the state of the eye meets the activation condition.
[0086] Activating the sensor subsystem may comprise providing, from a power supply on the personal eyewear device, power to the sensor subsystem. This may be controlled by the processor.
[0087] The sensor subsystem may have a greater required power than the eye tracking sensor, wherein providing power to the sensor subsystem further comprises: providing the greater required power to the sensor subsystem relative to the eye tracking sensor.
[0088] The method may further include obtaining, with an environmental sensor of the personal eyewear device, environmental data corresponding to at least an eye of a wearer of the personal eyewear device; and activating the sensor subsystem of the personal eyewear device based on both the eye-gaze parameter and the environmental data.
[0089] Activating the sensor subsystem of the personal eyewear device based on both the eye-gaze parameter and the environmental data may comprise: comparing the environmental data to an environmental activation condition for the sensor subsystem; and determining that the environmental data meets the environmental activation condition; and upon determining that the eye-gaze parameter meets the activation condition and that the environmental data meets the environmental activation condition, activating the sensor subsystem accordingly.
[0090] The environmental sensor includes one or more of: an ambient light sensor, wherein the environmental data includes ambient light data; a timing device, wherein the environmental data includes timing data; a gyroscope sensor, wherein the environmental data includes gyroscope data; and/or an accelerometer, wherein the environmental data includes accelerometer data.
[0091] The sensor subsystem may include a first sensor for obtaining sensor data with respect to a first eye of the wearer and a second sensor for obtaining sensor data with respect to a second eye of the wearer, wherein obtaining, with the eye tracking sensor of the personal eyewear device, eye tracking data corresponding to the at least an eye of a wearer of the personal eyewear device comprises: obtaining, with a first eye tracking sensor of the personal eyewear device, first eye tracking data corresponding to the first eye of the wearer; and obtaining, with a second eye tracking sensor of the personal eyewear device, second eye tracking data corresponding to the second eye of the wearer; wherein the method further comprises: processing, on the personal eyewear device, the first eye tracking data to determine a first eye-gaze parameter, indicative of a first direction of gaze of the first eye of the wearer; processing, on the personal eyewear device, the second eye tracking data to determine a second eye-gaze parameter, indicative of a second direction of gaze of the second eye of the wearer; and activating the first sensor of the sensor subsystem of the personal eyewear device based, at least in part, on the first eye-gaze parameter; and/or activating the second sensor of the sensor subsystem of the personal eyewear device based, at least in part, on the second eye-gaze parameter.
[0092] The method may further include controlling a plurality of sensor subsystems of the personal eyewear device, by activating the sensor subsystems of the personal eyewear device based, at least in part, on the eye-gaze parameter. The plurality of sensor subsystems may include any of the sensor subsystems set out above, for example the eye imaging subsystem and/or the fundus imaging subsystem.
[0093] The method may further include obtaining, with the eye tracking sensor of the personal eyewear device, updated eye tracking data corresponding to the at least an eye of the wearer of the personal eyewear device; and processing, on the personal eyewear device, the updated eye tracking data to determine an updated eye-gaze parameter, indicative of an updated direction of gaze of the at least an eye of the wearer. The updated eye tracking data may include eye tracking data obtained next in a sequence of eye tracking data. In this manner, obtaining the updated eye tracking data is performed continuously.
[0094] The method may further include deactivating the sensor subsystem of the personal eyewear device based, at least in part, on the updated eye-gaze parameter.
[0095] Deactivating the sensor subsystem of the personal eyewear device based, at least in part, on the updated eye-gaze parameter may comprise: comparing the updated eye-gaze parameter to a deactivation condition for the sensor subsystem; determining that the updated eye-gaze parameter meets the deactivation condition; and upon determining that the updated eye-gaze parameter meets the deactivation condition, deactivating the sensor subsystem accordingly. The deactivation condition may simply be not meeting the activation condition, such that not meeting or failing the activation condition is equivalent to meeting the deactivation condition. In practice this allows for one activation condition to be used rather than a separate deactivation condition.
[0096] The method may further include, upon activating the sensor subsystem of the personal eyewear device, obtaining sensor data using the sensor subsystem; and subsequent to obtaining the sensor data, deactivating the sensor subsystem.
[0097] The method may further include, prior to deactivating the sensor subsystem, comparing the sensor data obtained using the sensor subsystem to a sensor data quality condition; and if the sensor data does not meet the sensor data quality condition: repeating the obtaining sensor data using the sensor subsystem. The sensor data quality condition may be referred to as a quality condition and may include any measure of data quality to be compared against.
[0098] According to a fourteenth aspect, the present disclosure provides a personal eyewear device, the personal eyewear device comprising: an electronics module including: a processor; a memory; a power source for powering the electronics module; an eye tracking sensor; and one or more sensor subsystems, wherein the processor is configured to perform the method of the thirteenth aspect as set out above.
[0099] The one or more sensor subsystems include one or more of: a fundus imaging subsystem comprising: a first imaging sensor configured to obtain first fundus imaging data with respect to a left eye of the wearer; and a second imaging sensor configured to obtain second fundus imaging data with respect to a right eye of the wearer; an eye imaging subsystem comprising: a first imaging sensor configured to obtain first eye imaging data with respect to a left eye of the wearer, wherein the first eye imaging data includes data with respect to one or more of a left iris, a left cornea, and/or a left sclera; and a second imaging sensor configured to obtain second eye imaging data with respect to a right eye of the wearer, wherein the second eye imaging data includes data with respect to one or more of a right iris, a right cornea, and/or a right sclera.
[0100] According to a fifteenth aspect, the present disclosure provides a system comprising: the personal eyewear device of the fourteenth aspect as set out above; a user device configured to communicate with the personal eyewear device and to receive sensor data of the one or more sensor subsystems; and a data processing module, configured to further process the sensor data.
[0101] According to a sixteenth aspect, the present disclosure provides a computer program, which, when executed by a processor, is configured to perform the method of the thirteenth aspect as set out above.
[0102] According to a seventeenth aspect, the present disclosure provides a personal eyewear device for obtaining eye-related data of a wearer of the personal eyewear device, the personal eyewear device comprising: a frame for supporting one or more lenses in front of a left eye and a right eye of a wearer when the personal eyewear device is worn by the wearer; and an electronics module. The electronics module includes: a processor; a memory; a power source for powering the electronics module; an eye tracking subsystem; a fundus imaging subsystem; and an eye imaging subsystem. The processor and the memory are communicatively coupled with each of the subsystems, which may include any of the features described with respect to the aspects set out above.
[0103] The methods described herein may be performed by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. Examples of tangible (or non-transitory) storage media include disks, thumb drives, memory cards etc. and do not include propagated signals. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously. [0104] This application acknowledges that firmware and software can be valuable, separately tradable commodities. It is intended to encompass software, which runs on or controls "dumb" or standard hardware, to carry out the desired functions. It is also intended to encompass software which "describes" or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
[0105] The preferred features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0106] Examples of the invention will be described, by way of example, with reference to the following drawings, in which:
[0107] Figure 1 is a schematic diagram of a personal eyewear device;
[0108] Figure 2 is a schematic diagram of a system including the personal eyewear device;
[0109] Figure 3 is a schematic diagram of an eye-tracking subsystem of the personal eyewear device;
[0110] Figure 4 is a schematic diagram of the eye-tracking subsystem of the personal eyewear device;
[0111] Figure 5 is a schematic diagram of the eye-tracking subsystem of the personal eyewear device;
[0112] Figure 6 is a schematic diagram of the eye-tracking subsystem of the personal eyewear device;
[0113] Figure 7 is a schematic diagram of the eye-tracking subsystem of the personal eyewear device;
[0114] Figure 8 is a schematic diagram of a fundus imaging subsystem of the personal eyewear device;
[0115] Figure 9 is a schematic diagram of a fundus imaging subsystem of the personal eyewear device;
[0116] Figure 10 is a schematic diagram of an eye/iris imaging subsystem of the personal eyewear device;
[0117] Figure 11 is a schematic diagram showing an overhead view of the eye-tracking subsystem of the personal eyewear device;
[0118] Figure 12 is a flow diagram illustrating a method of determining eye tracking data from the eye tracking subsystem of the personal eyewear device;
[0119] Figure 13a is a schematic diagram of a first visualisation of eye tracking data;
[0120] Figure 13b is a schematic diagram of a second visualisation of eye tracking data;
[0121] Figure 14 is a schematic diagram of a controlled environment for calibrating the eye tracking subsystem of the personal eyewear device;
[0122] Figure 15 is a diagram showing the relationship between various parameters in the calibration process of the eye tracking subsystem of the personal eyewear device; and
[0123] Figure 16 is a schematic diagram of an artificial neural network for further processing eye-related data obtained from one or more of the subsystems of the personal eyewear device.
[0124] Common reference numerals are used throughout the figures to indicate similar features.
DETAILED DESCRIPTION
[0125] This application relates to a wearable device, system and method for obtaining eye-related data from a wearer of the wearable device. The wearable device comprises a personal eyewear device, such as eyeglasses.
[0126] Figure 1 shows such an eyewear device 100. The eyewear device 100 includes a lens or pair of lenses 102 including a right lens and a left lens, and a frame 104 for fixing to the lenses 102. The frame 104 is configured to support the eyewear device 100 on the head of a wearer, such that the lenses 102 are positioned in front of the wearer's eyes when the eyewear device 100 is worn. The frame 104 has an inner surface (not shown in Figure 1) and an outer surface. These surfaces meet to form the surface area of the frame 104. The outer surface of the frame 104 is visible in Figure 1, from point of view A. The inner surface of the frame 104 would be visible from point of view B. The eyewear device 100 is configured to be worn and function, for the purpose of aiding vision of the wearer, in the same manner as conventional eyeglasses as is well understood. The eyewear device 100 may include clear or tinted lenses 102, and may function as one or more of corrective eyeglasses, safety eyeglasses, sunglasses, magnification glasses, sports glasses or the like. The eyewear device 100 is a form of personal eyewear, meaning it is not a clinical piece of equipment that usually belongs in a clinical setting or to a healthcare provider. Rather, 'personal' eyewear, in this context, means a piece of eyewear generally purchasable and useable by the wider public in any setting.
[0127] The eyewear device 100 further comprises an electronics module 106 integrated with or attached to the frame 104 and/or lenses 102. For illustration purposes, Figure 1 shows a schematic of the electronics module 106. The electronics module 106 includes a plurality of sensors 108, a processor 110, and a power source 112. The electronics module 106 also includes a memory (not shown). These components may be arranged in a suitable electronic circuit. The plurality of sensors 108 are positioned on, or integrated within, an inner surface of the frame 104, whereby the inner surface of the frame 104 is the surface of the frame 104 that faces the wearer when the eyewear device 100 is worn. The processor 110 is communicatively connected to the plurality of sensors 108. 'Communicatively connected' includes any suitable connection over which signals can be communicated and received. In the context of the processor 110 and the one or more sensors 108, this connection may be established via an electronic circuit as will be well-understood. The power source 112 provides power to the processor 110 and the plurality of sensors 108. The processor 110 and the power source 112 may be positioned or attached anywhere on or in the eyewear device 100.
[0128] The electronics module 106 may be referred to as an eyewear sensor apparatus which is configured to be integrated into a personal eyewear device such as the eyewear device 100. In some examples, the eyewear sensor apparatus may be retrofitted to an existing personal eyewear device; in other examples, the eyewear sensor apparatus may be manufactured together with the personal eyewear device. The eyewear sensor apparatus may be considered to comprise the personal eyewear device, or vice versa. For consistency, the following description refers to an eyewear device 100 comprising the electronics module 106. However, it is to be understood that the reference to the eyewear device 100 is equally applicable to an eyewear sensor apparatus, including the components of the electronics module 106, configured to be integrated with, or attached to, existing personal eyewear.
[0129] The eyewear device 100 is powered by the power source 112 as shown in figure 1. The power source 112 is a rechargeable battery or other suitable source of power, capable of providing power to the components of the eyewear device 100 over a period of time such as a plurality of hours, a day, or a week, for example. The power source 112 may be recharged through a removable wired connection or via wireless charging. The power source 112 may be augmented by one or more photovoltaic cells connected to the power source 112. The photovoltaic cells provide solar power to the eyewear device 100. The photovoltaic cells are located on the outer surface of the frame 104 such that they are exposed to light such as sunlight when the eyewear device 100 is worn by the wearer. For example, the one or more photovoltaic cells are positioned on arms of the frame 104 of the eyewear device 100, between the eye and ear of the wearer when the eyewear device 100 is worn. The power source 112 and the electronics module 106 may be turned completely off and on via a manually operable switch located on the eyewear device 100.
[0130] The plurality of sensors 108 are configured to detect one or more eye-related parameters of the wearer of the eyewear device 100, by observing characteristics of the wearer's eyes. The characteristics of the wearer's eyes may be observed by observing the whole of each of the wearer's eyes or a portion thereof. The one or more eye-related parameters detectable by the sensors 108 include one or more of: an eye tracking parameter, an eye or outer eye (e.g. iris/sclera/cornea) imaging parameter, and a fundus imaging parameter.
[0131] Each of the eye-related parameters are observed with respective groups of sensors of the plurality of sensors 108 included in the electronics module 106 of the eyewear device 100 to obtain data corresponding to each eye-related parameter. Each of these respective groups form part of a collection of components hereinafter described as 'subsystems', such that the electronics module 106 includes one or more of: an eye tracking subsystem; an eye imaging subsystem; and a fundus imaging subsystem, configured to detect and obtain eye tracking data, eye imaging data (e.g. iris/sclera/cornea data), and fundus imaging data respectively. It is to be understood that an individual sensor 108 or other component may be used in more than one subsystem, or in other words, there may be an overlap of components between the subsystems. These subsystems are explained in more detail later.
[0132] The eyewear device 100 may include any one or more of the subsystems, each of which may operate continuously, periodically, or intermittently. Activation and deactivation of the subsystems or powering-on and powering-off of the subsystems is controlled by a scheduling module, which is in communication with, or forms part of, the processor 110. The scheduling module operates according to a set of rules or triggers that determine which of the subsystems should be actively capturing and recording eye-related data corresponding to the eye-related parameters, at any given time. Each subsystem has an active or 'power-on' mode, in which the subsystem is active and recording eye-related data, and an inactive or 'power-off' mode, in which the subsystem is inactive and not capturing eye-related data. The scheduling module is configured to transition the subsystems between these modes according to the set of rules. Multiple subsystems can operate and be in the 'power-on' mode simultaneously. The scheduling module is implemented in software or in hardware, such as the processor 110, as will be understood. The scheduling module is implemented to address the power and memory constraints of the eyewear device 100, by managing the operation of the eyewear device to improve the use of limited data storage capacity and battery or power source capacity. The scheduling module and its set of rules effectively implement dynamic power and memory consumption algorithms that may be used to collect eye-related data and other sensor data in a manner that balances computational load, energy use, and the collection of high-quality data.
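A simplified, purely illustrative sketch of such a scheduling module is shown below. The rule set is represented as per-subsystem predicates evaluated over the latest sensor readings; the class name, rule signatures, and context dictionary are assumptions made for illustration rather than a definitive implementation of the scheduling module described above.

    class SchedulingModule:
        """Illustrative scheduler: transitions subsystems between 'power-on' and
        'power-off' modes according to a per-subsystem set of rules (predicates)."""

        def __init__(self, subsystems, rules):
            # subsystems: dict mapping a name to an object with activate()/deactivate()
            # rules: dict mapping the same name to a predicate over a context dict
            self.subsystems = subsystems
            self.rules = rules
            self.active = {name: False for name in subsystems}

        def update(self, context):
            """Evaluate every rule against the current context (e.g. time of day,
            ambient light level, eye tracking data) and switch modes as needed."""
            for name, subsystem in self.subsystems.items():
                should_be_active = self.rules[name](context)
                if should_be_active and not self.active[name]:
                    subsystem.activate()        # transition to 'power-on' mode
                elif not should_be_active and self.active[name]:
                    subsystem.deactivate()      # transition to 'power-off' mode
                self.active[name] = should_be_active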
[0133] The processor 110, powered by the power source 112, is connected to, or otherwise communicates with, the plurality of sensors 108 to record eye-related data corresponding to the eye-related parameter or parameters being observed. The processor 110 is configured to cause the recorded eye-related data to be sent to a data processing module configured to further process the eye-related data to determine a status of the wearer. The 'status' of the wearer includes any one or more of: a health-related condition of the wearer, an activity status of the wearer based on an activity performed by the wearer; a physiological state of the wearer; a neurological state of the wearer, and a psychological state of the wearer. The processor 110 itself may also or alternatively perform processing of some or all of the data. The data processing module may be situated on the eyewear device 100, or may be external to the eyewear device 100, situated on any one or more separate computing devices. The eyewear device 100 further includes a transceiver 114 for transmitting the data to the external data processing module, wherein the transceiver 114 is communicatively coupled to the processor 110.
[0134] Figure 2 shows a system including the eyewear device 100 of Figure 1 with a separate data processing module 202 and a user device 204. Although the data processing module 202 and the user device 204 are illustrated as separate entities in Figure 2, it is to be understood that the data processing module 202 could reside on the user device 204. In the system of Figure 2, the eyewear device 100 is configured to transmit recorded data of at least some of the eye-related parameters to the data processing module 202 for processing and/or further processing. To do this the eyewear device 100 is firstly configured to communicate with the user device 204, to transmit the recorded eye-related data to the user device 204. Once received at the user device 204, the recorded data is sent to the data processing module 202 for further processing. The further processing includes any suitable data processing and/or data analysis for determining the status of the wearer of the eyewear device 100 and is described in detail later. Once a status of the wearer is determined, it is recorded as status data and optionally sent to the user device 204 for display or retrieval by a user such as the wearer. The status data may additionally or alternatively be sent to an authorized third party such as a healthcare professional for analysis or use. [0135] An application is installed on the user device 204 to perform the functionality set out above with respect to the user device 204. In particular, the wearer or user may download, install, and use an application on the user device 204 to communicate with the eyewear device 100, to receive data from the eyewear device 100, to communicate with the data processing module 202, and to display status data to the wearer. The application is configured to accept user input to control the eyewear device 100 or the functionality thereof, and to receive, from the wearer, input data and information as required. Software updates for the personal eyewear device 100 may be downloaded on the user device 204 and uploaded to the personal eyewear device 100 via the application.
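One possible way of packaging recorded eye-related data for transmission from the eyewear device 100 to the user device 204, and onwards to the data processing module 202, is sketched below. The JSON payload layout and the build_payload function are hypothetical illustrations only; in practice the transceiver 114 may use any suitable wireless protocol and data format.

    import json
    import time

    def build_payload(device_id, subsystem_name, samples):
        """Hypothetical packaging of recorded eye-related data before transmission.
        'samples' is a list of (timestamp, data) tuples recorded by a subsystem."""
        return json.dumps({
            "device_id": device_id,
            "subsystem": subsystem_name,         # e.g. "eye_tracking"
            "sent_at": time.time(),              # transmission timestamp
            "samples": [{"t": t, "data": d} for t, d in samples],
        })

    # Example (illustrative only): the user device forwards the payload unchanged
    # to the data processing module for further processing.
    payload = build_payload("eyewear-001", "eye_tracking",
                            [(0.000, [0.1, -0.2, 0.97]), (0.005, [0.1, -0.2, 0.96])])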
[0136] The eyewear device 100 and the system as illustrated in figures 1 and 2 are configured to provide status data regarding the wearer simply from the wearer wearing the eyewear device 100. No other device is required to obtain the status data, and if the wearer already wears eyeglasses or similar personal eyewear, this means that no additional wearable device is required for the wearer. The status data may be used to infer a diagnosis or prognosis of a physiological, neurological, psychological, psychiatric or disease condition. The status data may also be used to infer an activity being performed by the wearer. [0137] The electronics module 106 and its components are configured to be implemented in new or existing conventional eyeglasses, without substantially adding to the size, weight or any other dimension of the eyeglasses. In this respect, the comfort, appearance and structure of the eyeglasses are not compromised by the inclusion of the components of the eyewear device 100 and system described here.
[0138] Each of the one or more subsystems included in the eyewear device 100 will now be described in further detail. Firstly, the eye tracking subsystem 300 is described here, with reference to figure 3. [0139] Figure 3 shows a schematic diagram of the eyewear device 100 including the eye tracking subsystem 300. Figure 3 is shown from point of view B as shown in figure 1, and as such, shows the inner surface of the frame 104. The eye tracking subsystem 300 includes a first group of sensors of the plurality of sensors 108, whereby the first group of sensors are used to observe the eyes to obtain eye tracking data. The eye tracking data may be referred to as 'eye-gaze tracking data'. The first group of sensors include eye tracking cameras 308, configured to capture images of the eyes of the wearer. The eye tracking cameras 308 are attached or integrated on or in the inner surface of the frame 104, and are directed inwards, such that, when the eyewear device 100 is worn, the eye tracking cameras 308 are directed towards an eye or eyes of the wearer. As such, each eye of the wearer is configured to be within the field of view of at least one of the eye tracking cameras 308. The first group of sensors include at least one camera for each eye of the wearer, totaling at least two eye tracking cameras 308.
[0140] Figure 4 shows a schematic diagram of the eyewear device 100 including the eye tracking subsystem 300, wherein the first group of sensors include a left pair of eye tracking cameras 308 including a first left eye tracking camera 408a and a second left eye tracking camera 408b, each configured to capture images of the wearer's left eye, and a right pair of eye tracking cameras 308 including a first right eye tracking camera 408c and a second right eye tracking camera 408d, each configured to capture images of the wearer's right eye. Pairs of cameras arranged in this manner allow for the eye tracking subsystem 300 to use stereo computer vision or similar processes to obtain spatial information regarding the location of components of the eye, such as the pupil and centre of the pupil, at a higher accuracy when compared to using only one eye tracking camera per eye.
[0141] Whilst each pair of eye tracking cameras 308 is capable of stereo computer vision as long as the cameras of each pair are positionally separated from each other, the accuracy of stereo computer vision is improved when the distance between the eye tracking cameras 308 of each pair of cameras is optimized according to the dimensions of the frame 104 and the distance from the frame 104 to the eye of the wearer.
[0142] In an example, the left pair of eye tracking cameras 408a, 408b are positioned on opposite sides of a left rim 104a of the frame 104, and the right pair of eye tracking cameras 408c, 408d are positioned on opposite sides of a right rim 104b of the frame 104. In this arrangement, the fields of view of the eye tracking cameras 308 of each pair overlap, but the eye tracking cameras 308 of each pair have different points of view to each other with respect to the eye of the wearer (since the cameras of each pair are directed towards the eye at effectively opposite angles from opposite sides of a rim of the frame 104). This allows the components of an eye of the wearer, including for example the centre of the pupil, to be easily determinable by at least one eye tracking camera 308 of a particular pair of eye tracking cameras 308 regardless of a direction of gaze of the eye of the wearer, and this determination may be confirmed using stereo computer vision using the other eye tracking camera of the particular pair to ensure that the determination is accurate.
[0143] In this arrangement, the left pair of eye tracking cameras 408a, 408b are positioned on or in the inner surface of the frame 104, particularly around the left rim 104a of the left lens of the eyewear device 100. This allows the first left eye tracking camera 408a to capture images of the left eye of the wearer from a first angle that is opposite to a second angle from which the second left eye tracking camera 408b captures images of the left eye. Similarly, the right pair of eye tracking cameras 308 are positioned on or in the inner surface of the frame 104, particularly around the right rim 104b of the right lens of the eyewear device 100. This allows the first right eye tracking camera 408c to capture images of the right eye of the wearer from a third angle that is opposite to a fourth angle from which the second right eye tracking camera 408d captures images of the right eye. The magnitude of the first, second, third and fourth angles may be equal, although this is not required.
[0144] Figure 5 shows a schematic diagram of an alternative example to that of Figure 4, wherein each pair of eye tracking cameras 508a, 508b, 508c, 508d are positioned in a substantially linear manner, on or in the inner surface of the frame 104, substantially along an uppermost supporting bar 104c of the frame 104 when worn. Each of the left pair of eye tracking cameras 508a, 508b are still physically separate from each other but are positioned above or near the top of the left rim 104a of the left lens. Similarly, each of the right pair of eye tracking cameras 508c, 508d are still physically separate from each other but are positioned above or near the top of the right rim of the right lens. A benefit of arranging the pairs of eye tracking cameras 508a, 508b, 508c, 508d in a substantially linear manner is that the design and manufacturing complexity of the eyewear device 100 is reduced, since the electronics module 106 can be arranged within a continuous strip along the uppermost supporting bar 104c of the frame 104, which can also save materials in terms of electronics, support, and electrical wiring.
[0145] It is to be understood that the position of the cameras of each pair of cameras may vary according to the design of the eyeglasses and the shape of the frame 104. Although Figures 3 to 5 show the first group of sensors, including the eye tracking cameras 408a-d, 508a-d, as circular and of a size that overlaps the frame 104, it is to be understood that this is for illustrative purposes only and the first group of sensors do not overlap the edges of the frame 104.
[0146] In an example, the eye tracking cameras 308 of the eye tracking subsystem 300 are configured to detect light in the infrared (IR) or near infrared (NIR) spectrum. The left and right pairs of eye tracking cameras 308 are therefore infrared or near-infrared cameras. The cameras are small enough to fit on or in the frame 104 of the eyeglasses without: compromising the structural integrity of the frame 104; obstructing a line of sight of the wearer; and touching the wearer or otherwise reducing the comfort of the wearer. The cameras are not visible from the perspective of a third party observing the outer surface of the eyeglasses opposite the inner surface. The eye tracking cameras 308 are equal to or smaller than 2mm wide by 2mm long by 2.5mm deep. The eye tracking cameras 308 are configured to operate in global shutter mode, wherein all pixels corresponding to the field of view of the camera are captured simultaneously for each captured frame. The eye tracking cameras 308 preferably have a frame rate of at least 200 frames per second (fps), but may have any suitable frame rate, for example in the range of 30 fps to 2000 fps. The eye tracking cameras 308 have a low power consumption, for example under 60mW each when active, which is achieved by using a small size and relatively low resolution, for example a pixel resolution in the range of 200 x 200 pixels to 500 x 500 pixels. Higher resolution cameras may be used, but this is to be balanced with the power demands and size of the cameras. The eye tracking cameras 308 are configured to image components of the eye such as the pupil. Specific locations of these components should be identifiable from the images. Therefore, the resolution of the eye tracking cameras 308 is only required to be high enough to determine the location of the major components of the eye such as the pupil, and additional resolution to provide higher detail of the eye is not required. Keeping the resolution of the eye tracking cameras 308 relatively low, as well as the size of the eye tracking cameras 308, improves the power and space efficiency of the electronics module 106 on the eyewear device 100, which is advantageous given the power and space constraints of the eyewear device 100.
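As a rough, purely illustrative calculation of why the low per-camera power consumption matters, the sketch below estimates the battery runtime consumed by four continuously active eye tracking cameras; the battery capacity figure is an assumption made for illustration, not a specification of the device.

    # Illustrative power-budget arithmetic (all figures are assumptions except the
    # <60 mW per-camera figure quoted above).
    CAMERA_POWER_MW = 60          # upper bound per eye tracking camera when active
    NUM_CAMERAS = 4               # two stereo pairs, one pair per eye
    BATTERY_CAPACITY_MWH = 400    # assumed small rechargeable battery (mWh)

    total_power_mw = CAMERA_POWER_MW * NUM_CAMERAS            # 240 mW
    hours_of_continuous_tracking = BATTERY_CAPACITY_MWH / total_power_mw
    print(f"Cameras draw {total_power_mw} mW; "
          f"~{hours_of_continuous_tracking:.1f} h of continuous capture "
          f"from a {BATTERY_CAPACITY_MWH} mWh battery (cameras only).")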
[0147] The eye tracking cameras 308 are configured to capture images of the eyes of the wearer in the IR or NIR spectrum. Whilst the eye tracking subsystem 300 has been described as including 'cameras', it is to be understood that the eye tracking cameras 308 may be any imaging sensor capable of detecting IR or NIR light. For example, the cameras may be imaging sensors such as opto-electrical transducers, photodetectors, photodiodes, active pixel sensors such as a complementary metal-oxide semiconductor (CMOS), a charge-coupled device (CCD) and/or electromagnetic trackers.
[0148] In some examples of the eye tracking subsystem 300, the eye tracking subsystem 300 further includes an illumination source for illuminating the eye of the wearer. This feature is explained with reference to Figure 6.
[0149] Figure 6 shows a schematic diagram of the eyewear device 100 including the eye tracking subsystem 300, further including a light illumination source for illuminating the eyes of the wearer. Figure 6 is shown from point of view B as shown in figure 1, and as such, shows the inner surface of the frame 104. The illumination source includes at least two illumination sources 602a, 602b. As shown in Figure 6, these include a left illumination source 602a and a right illumination source 602b. The left illumination source 602a is configured to illuminate a left eye of the wearer. The left illumination source 602a is positioned on or integrated in the inner surface of the frame 104, in the vicinity of the left rim 104a, and is orientated such that, when in use, the left illumination source 602a is configured to illuminate the left eye of the wearer. The left illumination source 602a is preferably positioned substantially along a vertical central axis of the left rim 104a, in order to be aligned with the vertical central axis of the left eye of the wearer to maximise illumination of the left eye of the wearer. Although illustrated as being positioned at the top of the left rim 104a in Figure 6, it is to be understood that the left illumination source 602a could equally be positioned at the bottom of the left rim 104a or anywhere around the edge of the left rim 104a. Furthermore, although only one illumination source is illustrated in Figure 6 as the left illumination source 602a, it is to be understood that the left illumination source 602a may include multiple left illumination sources arranged around the left rim 104a. The right illumination source 602b is configured to illuminate the right eye of the wearer. The right illumination source 602b is positioned on or integrated in the inner surface of the frame 104, in the vicinity of the right rim 104b, and is orientated such that, when in use, the right illumination source 602b is configured to illuminate the right eye of the wearer. The right illumination source 602b is preferably positioned substantially along a vertical central axis of the right rim 104b, in order to be aligned with the vertical central axis of the right eye of the wearer to maximise illumination of the right eye of the wearer. Although illustrated as being positioned at the top of the right rim 104b in Figure 6, it is to be understood that the right illumination source 602b could equally be positioned at the bottom of the right rim 104b or anywhere around the edge of the right rim 104b. Furthermore, although only one illumination source is illustrated in Figure 6 as the right illumination source 602b, it is to be understood that the right illumination source 602b may include multiple right illumination sources arranged around the right rim 104b. When the left and right illumination sources are positioned as shown in figure 6, such that they substantially bisect the space between the eye tracking cameras 308 on each side of each rim of the eyewear device 100, the illumination sources are exactly between the eye tracking cameras 308 on each rim and are aligned with the centre of the eyes of the wearer.
This has two benefits: firstly, the eyes of the wearer are each fully illuminated (minimising any shadow or dark regions on either side); and secondly, the light conditions, reflections, and scattering are substantially equal in terms of incident light to the eye tracking cameras 308 reflected from the illumination sources, such that each of the eye tracking cameras 308 is subject to substantially equal intensity of incident light.
[0150] The left illumination source 602a and the right illumination source 602b are of the same type and are each configured to emit electromagnetic radiation in the IR or NIR spectrum. The emission of light in the IR or NIR from the illumination sources illuminates the eyes of the wearer to improve the process of capturing images of the eyes using the eye tracking cameras 308. In particular, the eye tracking cameras 408a, 408b, 408c, 408d capture images in the IR or NIR spectrum and as such, illuminating the eyes with IR or NIR at the corresponding wavebands provides more reflections and a higher intensity thereof from the structures in the eye, such as the pupil, leading to more detailed and/or clearer images, regardless of environmental or other external lighting conditions. IR or NIR light is not detectable by the human eye, and as such, the illumination of the eyes with the illumination sources does not affect the sight of the wearer, whilst enabling the eye tracking subsystem 300 to obtain improved images of the eyes. The illumination sources 602a and 602b are any suitable light emitter, such as a light-emitting diode (LED) configured to emit light in the IR or NIR spectral band. In an example, the illumination sources 602a and 602b include quantum dot IR or NIR light emitters. The illumination sources 602a and 602b may include respective light collimators. In particular, the illumination sources 602a and 602b may include a polarizer at a distal end (nearest to the eyes when in use) and another respective polarizer may be provided over the eye tracking cameras 408a, 408b, 408c, 408d in a perpendicular configuration.
[0151] The illumination sources 602a and 602b are connected to the processor 110 and obtain power from the power source 112. The illumination sources 602a and 602b are powered and may be activated by the processor 110 to illuminate the eyes of the wearer when the eye tracking cameras 308 are active and may be turned off or deactivated when the eye tracking cameras 308 are deactivated.
[0152] In some examples, the eye tracking subsystem 300 includes a respective illumination source for each of the eye tracking cameras 308, whereby the respective illumination sources are located with the eye tracking cameras 308, such that each eye tracking camera is positioned next to an illumination source. A benefit of this is that the light emitted from the illumination sources corresponds substantially to the point of view of the eye tracking cameras 308, reducing the likelihood of shadows and a lack of illumination in the eye from the perspective of the eye tracking cameras 308.
[0153] In some examples, the eye tracking subsystem 300 further includes an accelerometer and a gyroscope or gyroscopic sensor. These sensors may form part of an inertial measurement unit (IMU) or the like, which may include a combination of accelerometers, gyroscopes and magnetometers. Figure 7 shows a schematic diagram of the eyewear device 100 including the eye tracking subsystem 300, further including an accelerometer 702 and a gyroscope 704. Figure 7 is shown from point of view B as shown in figure 1 and thus shows the inner surface of the frame 104.
[0154] The accelerometer 702 and the gyroscope 704 are connected to the processor 110 and obtain power from the power source 112. Although illustrated as being positioned centrally on the inner surface of the frame 104, the accelerometer 702 and gyroscope 704 may be positioned anywhere on or integrated within the frame 104 of the eyewear device 100. The gyroscope 704 and accelerometer 702 are used to determine an orientation of the wearer and particularly the head of the wearer when the eyewear device 100 is worn by the wearer. The accelerometer 702 and gyroscope 704 are configured to be used by the eye tracking subsystem 300 to determine pitch (movement of the head in an up and down direction when viewed from the front), yaw (movement of the head in a side-to-side direction when viewed from the front) and roll (rotation of the head around a central axis colinear with the axis of the front view of the head), with respect to a calibrated initial orientation and position of the head of the wearer.
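Pitch and roll can, for example, be estimated by fusing accelerometer and gyroscope readings with a standard complementary filter, as sketched below. This is a generic, well-known technique given for illustration only and is not necessarily the exact method used by the eye tracking subsystem 300; the axis conventions and the blending factor alpha are assumptions. Yaw is shown integrated from the gyroscope alone, since it cannot be observed from gravity without a magnetometer.

    import math

    def complementary_filter(angles, accel, gyro, dt, alpha=0.98):
        """Generic complementary filter (illustrative). 'angles' is (pitch, roll, yaw)
        in radians, 'accel' is (ax, ay, az) in g, 'gyro' is angular rate in rad/s."""
        pitch, roll, yaw = angles
        ax, ay, az = accel
        gx, gy, gz = gyro

        # Orientation implied by gravity (valid when the head is not accelerating hard).
        accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        accel_roll = math.atan2(ay, az)

        # Blend integrated gyroscope rates (short-term) with accelerometer angles
        # (long-term reference) to limit gyroscope drift.
        pitch = alpha * (pitch + gy * dt) + (1.0 - alpha) * accel_pitch
        roll = alpha * (roll + gx * dt) + (1.0 - alpha) * accel_roll
        yaw = yaw + gz * dt   # no absolute reference without a magnetometer
        return pitch, roll, yaw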
[0155] The fundus imaging subsystem 800 is now described here, with reference to figure 8. Figure 8 shows a schematic diagram of the eyewear device 100 including the fundus imaging subsystem 800. Figure 8 is shown from point of view B as shown in figure 1, and as such, shows the inner surface of the frame 104. The fundus imaging subsystem 800 includes a second group of sensors of the plurality of sensors 108, whereby the second group of sensors are used to observe the fundus imaging parameter. The second group of sensors include fundus imaging cameras 808, configured to capture images of specific components of the eyes of the wearer, including the fundus. The fundus imaging cameras 808 are attached or integrated on or in the inner surface of the frame 104, and are directed away from the inner surface of the frame, such that, when the eyewear device 100 is worn, the fundus imaging cameras 808 are directed towards an eye or eyes of the wearer, and particularly the fundus of the eyes of the wearer. As such, each of the fundus imaging cameras 808 are configured to have at least an eye of the wearer be within their field of view when the eyewear device 100 is worn by the wearer. In a further example, the fundus imaging cameras 808 may include multiple cameras per eye, such as a pair of cameras per eye.
The fundus imaging cameras 808 may be positioned anywhere on the frame 104 of the eyewear device 100. The second group of sensors include at least one fundus imaging camera for each eye of the wearer, totalling at least two fundus imaging cameras 808.
[0156] The fundus imaging cameras 808 are sensitive to IR (infrared) or NIR (near-infrared) light and are therefore configured to capture images in the IR or NIR spectral bands. The fundus imaging cameras 808 may be monochromatic in the IR spectrum and sensitive to wavelengths above 700nm, or may be multispectral or hyperspectral to allow the capturing of images at various wavelengths, for example in the range of 700nm to 1050nm. A benefit of using multispectral cameras is that capturing images at multiple wavelengths allows multiple corresponding depths of tissue in the retina to be detected and imaged by the fundus imaging cameras 808. A single capture provides one monochromatic image at a given wavelength when using a monochromatic fundus imaging camera 808, or several images at different wavelengths simultaneously when using a multispectral or hyperspectral fundus imaging camera 808. The fundus imaging subsystem 800 is configured to save captured images to the local memory on the eyewear device 100. From there, the transceiver of the electronics module 106 is configured to transmit the captured images to the user device for further processing. The images captured by the fundus imaging cameras 808 may be referred to as fundus images and form part of the fundus imaging data.
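The sketch below illustrates, in purely hypothetical terms, how a multispectral capture might be organised so that each wavelength band (and hence each approximate retinal tissue depth) is stored against its own image. The wavelength bands listed and the capture_at_wavelength call are placeholders for whatever interface a multispectral fundus imaging camera actually exposes.

    # Illustrative only: the wavelength bands and the capture interface are assumptions.
    FUNDUS_BANDS_NM = [760, 810, 850, 910, 980, 1050]   # within the 700-1050 nm range

    def capture_multispectral(camera, bands_nm=FUNDUS_BANDS_NM):
        """Capture one fundus image per wavelength band and key it by wavelength,
        so that deeper/shallower retinal layers can be compared during processing."""
        images = {}
        for wavelength in bands_nm:
            images[wavelength] = camera.capture_at_wavelength(wavelength)  # hypothetical API
        return images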
[0157] The fundus imaging cameras 808 of the fundus imaging subsystem 800 have different properties when compared to the eye tracking cameras 308, 408a-d, 508a-d of the eye tracking subsystem 300. The fundus imaging cameras 808 have a higher resolution, when compared to the eye tracking cameras. For example, the fundus imaging cameras 808 may have a resolution greater than 1 megapixel, and may further still have a resolution greater than 2, 2.5 or 3 megapixels. The relatively higher resolution of the fundus imaging cameras 808, when compared to the eye tracking cameras, is balanced with the fact that the fundus imaging cameras 808 require more power, since generally, the greater the number of pixels, the higher the resolution, but the more power required. The fundus imaging cameras 808 may be any suitable imaging sensor that provides high resolution images, and may have a frame rate of 30, 60, 120 or 200 fps, for example. The size of the fundus imaging cameras 808 is similar to the size of the eye tracking cameras, to avoid interference with the vision and comfort of the wearer and the weight and dimensions of the eyewear device 100. The size of the fundus imaging cameras may be in the range of approximately 1mm wide x 1mm long to 2mm wide x 2mm long, although other sizes are also possible. The fundus imaging cameras 808 may be any suitable imaging sensors such as opto-electrical transducers, photodetectors, photodiodes, active pixel sensors such as a complementary metal-oxide semiconductor (CMOS), a charge-coupled device (CCD), quantum dot-based technologies and/or electromagnetic trackers.
[0158] Figure 9 shows a schematic diagram of the eyewear device 100 including the fundus imaging subsystem 800. In this arrangement, the left fundus imaging camera 808a and the right fundus imaging camera 808b are configured to capture images of the left eye fundus and the right eye fundus respectively. Although illustrated as being on opposing sides of the left rim 104a and the right rim 104b, the left fundus imaging camera 808a may be positioned on the same side of the left rim 104a as the right fundus imaging camera 808b is with respect to the right rim 104b. For example, the left fundus imaging camera 808a and the right fundus imaging camera 808b may be positioned on the left sides of the left rim 104a and right rim 104b respectively. In this arrangement, it is possible to capture fundus imaging data from the left fundus camera 808a and the right fundus camera 808b simultaneously, since the wearer can look at both fundus imaging cameras simultaneously. Figure 9 further shows the fundus imaging subsystem 800 having a fundus light illumination source for illuminating the fundus of the eyes of the wearer. Figure 9 shows two such fundus illumination sources 902a, 902b. As shown in Figure 9, these include a left fundus illumination source 902a and a right fundus illumination source 902b. The left fundus illumination source 902a is positioned on or integrated in the inner surface of the frame 104, in the vicinity of the left rim 104a, and is orientated such that, when in use, the left fundus illumination source 902a is configured to illuminate the fundus of the left eye of the wearer. The left fundus illumination source 902a is preferably positioned substantially adjacent the left fundus imaging camera 808a, and directed in the same direction the left fundus imaging camera 808a is directed in, to maximise illumination of the fundus of the left eye of the wearer from the perspective of the left fundus imaging camera 808a. Although illustrated as being positioned at the top left of the left rim 104a in Figure 9, it is to be understood that the left fundus illumination source 902a could equally be anywhere around the edge of the left rim 104a adjacent to the left fundus imaging camera 808a. Furthermore, although only one illumination source is illustrated in Figure 9 as the left fundus illumination source 902a, it is to be understood that the left fundus illumination source 902a may include multiple left fundus illumination sources arranged around the left rim 104a. The right fundus illumination source 902b is positioned on or integrated in the inner surface of the frame 104, in the vicinity of the right rim 104b, and is orientated such that, when in use, the right fundus illumination source 902b is configured to illuminate the fundus of the right eye of the wearer. The right fundus illumination source 902b is preferably positioned substantially adjacent the right fundus imaging camera 808b, and directed in the same direction the right fundus imaging camera 808b is directed in, to maximise illumination of the fundus of the right eye of the wearer from the perspective of the right fundus imaging camera 808b. Although illustrated as being positioned at the top right of the right rim 104b in Figure 9, it is to be understood that the right fundus illumination source 902b could equally be anywhere around the edge of the right rim 104b adjacent to the right fundus imaging camera 808b. 
Furthermore, although only one illumination source is illustrated in Figure 9 as the right fundus illumination source 902b, it is to be understood that the right fundus illumination source 902b may include multiple right fundus illumination sources arranged around the right rim 104b.
[0159] The left fundus illumination source 902a and the right fundus illumination source 902b are of the same type and are each configured to emit light in the IR or NIR spectrum. The spectral bands of emission correspond to the same spectral bands that the fundus imaging cameras 808a, 808b are sensitive to. The emission of light in the IR or NIR from the illumination sources illuminates the fundus of the eyes of the wearer to improve the process of capturing images of the eyes using the fundus imaging cameras. Illuminating the eyes with IR or NIR allows for obtaining fundus imaging data with respect to inner layers of the retina, which can be useful for detecting retinopathies, for example, that may affect different layers of the retina. Using IR and NIR light to illuminate the eyes rather than visible light also has the advantages of little to no annoyance or discomfort to the user, and IR and NIR light does not trigger the closing of the pupil, making imaging of the fundus easier. Illuminating the eye with IR and NIR light also provides more reflections and a higher intensity thereof from the structures and features of the retina in the eye, leading to more detailed and/or clearer images, regardless of environmental or other external lighting conditions. IR or NIR light is not detectable by the human eye, and as such, the illumination of the eyes with the fundus illumination sources does not affect the sight of the wearer, whilst enabling the fundus imaging subsystem 800 to obtain improved images of the fundus of the eyes. The fundus illumination sources 902a and 902b are any suitable light emitter configured to emit light in the same IR or NIR spectral band to which the fundus imaging cameras 808a, 808b are sensitive. For example, the fundus illumination sources 902a and 902b may be LED IR illuminators or laser IR illuminators.
[0160] The fundus illumination sources 902a and 902b are connected to the processor 110 and obtain power from the power source 112. The fundus illumination sources 902a and 902b are powered and illuminate the eyes of the wearer when the fundus imaging cameras are active, and may be turned off or deactivated when the fundus imaging cameras are deactivated, for example when the fundus imaging cameras or the whole fundus imaging subsystem 800 are deactivated.
[0161] The eye imaging subsystem 1000 is described here, with reference to figure 10. It is to be understood that, although the eye imaging subsystem 1000 is described below primarily with respect to 'iris imaging data', and being configured to obtain iris imaging data, the eye imaging subsystem 1000 is actually configured to obtain imaging data with respect to any outer portion of the human eye, including the sclera, iris, cornea and conjunctiva. Figure 10 shows a schematic diagram of the eyewear device 100 including the eye imaging subsystem 1000. Figure 10 is shown from point of view B as shown in figure 1, and as such, shows the inner surface of the frame 104 (e.g. the back-side of the frame when worn by the wearer). The eye imaging subsystem 1000 includes a third group of sensors of the plurality of sensors 108, whereby the third group of sensors are used to obtain iris imaging data. The third group of sensors include eye imaging cameras 1008, configured to capture images of specific components of the eyes of the wearer, including the iris, the pupil, the cornea (even if transparent, any aberration or alteration of the cornea is detected through imaging) and/or the sclera. The eye imaging cameras 1008 are attached or integrated on or in the inner surface of the frame 104, and are directed away from the inner surface of the frame, such that, when the eyewear device 100 is worn, the eye imaging cameras 1008 are directed towards an eye or eyes of the wearer, and particularly the iris of the eyes of the wearer. Each eye imaging camera 1008 is configured to point at least towards one eye of the wearer, such that each eye of the wearer is within the field of view of at least one eye imaging camera 1008. The eye imaging cameras 1008 may be positioned anywhere on the frame 104 of the eyewear device 100. The third group of sensors include at least one eye imaging camera for each eye of the wearer, totalling at least two eye imaging cameras 1008. In the example illustrated in figure 10, the eye imaging subsystem 1000 comprises two eye imaging cameras per eye, including a first eye imaging camera 1008a and a second eye imaging camera 1008b arranged on or proximate the left rim 104a of the eyewear device 100, and a third eye imaging camera 1008c and a fourth eye imaging camera 1008d arranged on or proximate the right rim 104b of the eyewear device 100. These may be referred to as a first pair and second pair of eye imaging cameras 1008 respectively.
[0162] The eye imaging cameras 1008 are sensitive to visible light and are therefore configured at a minimum to capture images in the visible spectrum. The eye imaging cameras 1008 may alternatively be sensitive to infrared light. The eye imaging cameras are configured to capture images of the left and right eyes of the wearer. The local memory on the eyewear device 100 is configured to save and record these images. From there, the transceiver of the electronics module 106 is configured to transmit the captured images to the user device for further processing. The images captured by the eye imaging cameras 1008 may be referred to as iris images and form part of the iris imaging data.
[0163] The eye imaging cameras 1008 of the eye imaging subsystem 1000 have different properties when compared to the eye tracking cameras 308, 408a-d, 508a-d of the eye tracking subsystem 300. The eye imaging cameras 1008 have a higher resolution, when compared to the eye tracking cameras. For example, the eye imaging cameras 1008 may have a resolution greater than 1 megapixel each, and may further still have a resolution greater than 2, 2.5 or 3 megapixels. The relatively higher resolution of the eye imaging cameras 1008, when compared to the eye tracking cameras, is balanced with the fact that the eye imaging cameras 1008 require more power, since generally, the greater the number of pixels, the higher the resolution, but the more power required. The eye imaging cameras 1008 may be any suitable imaging sensor that provides high resolution images, and may have a frame rate of 30, 60, 120 or 200 fps, for example. The eye imaging cameras 1008 are synchronised and are capable of recording timing data with respect to the time of capture of the images, so that all captured images have a time-stamp or the like, ensuring that, when the images are processed, images captured at the same time by each of the eye imaging cameras 1008 can be identified. It is to be understood that this synchronisation may also apply to any one or more of the other subsystems, including the eye tracking subsystem 300 and/or the fundus imaging subsystem 800. This allows the eye tracking subsystem to better inform the process for forming 3D models of each eye (which is explained in more detail later).
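A simple, illustrative way of pairing frames captured by different synchronised cameras is to match their time-stamps within a small tolerance, as sketched below; the tolerance value and the (timestamp, image) record layout are assumptions made for illustration.

    def match_frames(left_frames, right_frames, tolerance_s=0.002):
        """Pair frames from two synchronised cameras by nearest time-stamp.
        Each input is a list of (timestamp, image) tuples sorted by timestamp."""
        pairs = []
        j = 0
        for t_left, img_left in left_frames:
            # Advance through the right-hand list to the closest time-stamp.
            while j + 1 < len(right_frames) and \
                    abs(right_frames[j + 1][0] - t_left) < abs(right_frames[j][0] - t_left):
                j += 1
            t_right, img_right = right_frames[j]
            if abs(t_right - t_left) <= tolerance_s:
                pairs.append((t_left, img_left, img_right))
        return pairs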
[0164] The size of the eye imaging cameras 1008 is similar to the size of the eye tracking cameras, to avoid interference with the vision and comfort of the wearer and the weight and dimensions of the eyewear device 100. The size of the eye imaging cameras 1008 may be in the range of approximately 1mm wide x 1mm long to 2mm wide x 2mm long, although other sizes are also possible.
[0165] The eye imaging cameras 1008 may be any suitable imaging sensors such as opto-electrical transducers, photodetectors, photodiodes, active pixel sensors such as a complementary metal-oxide semiconductor (CMOS), a charge-coupled device (CCD), quantum dot-based technologies and/or electromagnetic trackers. The eye imaging cameras 1008 may further comprise one or more polarizers provided at a distal end of the eye imaging cameras 1008 to remove glare or reflections from incoming light.
[0166] The eye imaging subsystem 1000 may further include one or more ambient light sensors 1020a, 1020b. The ambient light sensor may be any suitable sensor capable of detecting light in the visible spectrum, such as a phototransistor, a photodiode or other photodetector. The ambient light sensor is positioned on or integrated in the frame 104 of the eyewear device 100. The ambient light sensor may be located anywhere on the frame. In the example illustrated in figure 10, the eye imaging subsystem 1000 includes two ambient light sensors: a first ambient light sensor 1020a and a second ambient light sensor 1020b. There may be more or fewer ambient light sensors. The ambient light sensors 1020a, 1020b are positioned in close proximity or adjacent to at least one of the eye imaging cameras, to ensure that the intensity of ambient light detected is an accurate determination of the ambient light at the eye imaging cameras. In figure 10, the first ambient light sensor 1020a is adjacent to the first eye imaging camera 1008a, and the second ambient light sensor 1020b is adjacent to the fourth eye imaging camera 1008d. [0167] The eye imaging subsystem 1000 forms part of the electronics module 106. This means that the first and second ambient light sensors 1020a, 1020b, and the eye imaging cameras 1008a, 1008b, 1008c, 1008d are connected to the processor 110 and obtain power from the power source 112. The eye imaging cameras are powered and are configured to capture the iris images when the eye imaging cameras are active, and may be turned off or deactivated.
[0168] The above description refers to the eyewear device 100 and the various subsystems that it includes, which are used to obtain eye-related data. This eye-related data, which may refer to several eye-related parameters, can be used to determine the status of the wearer. The functionality of the eyewear device 100, methods of using the eyewear device 100, and methods of processing the eye-related data are now described here.
[0169] Firstly, a functionality of the eye tracking subsystem 300 will now be explained with reference to the above-described eye tracking subsystem 300 components and Figure 11.
[0170] The eye tracking subsystem 300 is configured to function and operate in an active mode, and is powered-off in an inactive mode. The scheduling module is configured to control power supply to the eye tracking subsystem 300, to transition the eye tracking subsystem 300 between the active mode and the inactive mode. In the active mode, the eye tracking subsystem 300 is configured to be supplied with power and record eye tracking data. In the inactive mode, power supply to the cameras and the illumination sources is stopped, but power may continue to be supplied to the accelerometer 702 and gyroscope 704 to continue to monitor accelerometer and gyroscope data. Usually, the eye tracking subsystem 300 operates in the active mode on a continuous basis, such that it is 'always on' when the manual switch of the eyewear device 100 is turned to the on position.
[0171] The scheduling module may however be configured to transition the eye tracking subsystem 300 to the inactive mode in response to gyroscope and accelerometer data. If the accelerometer and gyroscope data from the accelerometer 702 and gyroscope 704 indicate substantially no movement over a period of time such as a minute, 10 minutes, or an hour, the scheduling module is configured to power down components of the eye tracking subsystem 300, such that the eye tracking subsystem 300 enters the inactive or power-off mode to conserve power. Substantially no movement may be indicated by the accelerometer and gyroscope data providing measurements of movement and/or orientation changes below a minimum threshold level continuously over the period of time. Conversely, the eye tracking subsystem 300 is turned on and transitioned to the active mode by the scheduling module when the accelerometer and gyroscope data shows measurements of movement and/or orientation changes above the minimum threshold level. Using the accelerometer and gyroscope data in this manner allows the scheduling module to automatically turn components of the eye tracking subsystem 300 on or off according to movement of the eyewear device 100. For example, when the wearer picks the eyewear device 100 up from a state of rest to wear it, the minimum threshold level will be exceeded and the eye tracking subsystem 300 will 'wake up' by transitioning from the inactive mode to the active mode. This achieves increased power-efficiency of the eye tracking subsystem 300 without the need for human intervention.
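An illustrative wake-on-motion rule of this kind is sketched below: the eye tracking subsystem is treated as active while motion exceeds a threshold, and only becomes inactive once the combined accelerometer and gyroscope readings have stayed below that threshold for a configurable idle period. The threshold, idle period, and units are assumptions, not device specifications.

    import time

    class MotionWakeRule:
        """Illustrative rule: active while motion exceeds a threshold, inactive once
        motion has stayed below the threshold for 'idle_period_s' seconds."""

        def __init__(self, motion_threshold=0.05, idle_period_s=600.0):
            self.motion_threshold = motion_threshold   # assumed units: g / (rad/s)
            self.idle_period_s = idle_period_s         # e.g. 10 minutes of stillness
            self.last_motion_time = time.monotonic()

        def should_be_active(self, accel_magnitude_delta, gyro_magnitude):
            now = time.monotonic()
            if accel_magnitude_delta > self.motion_threshold or \
                    gyro_magnitude > self.motion_threshold:
                self.last_motion_time = now            # movement detected: 'wake up'
            return (now - self.last_motion_time) < self.idle_period_s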
[0172] When the eyewear device 100 is being worn by the wearer, the accelerometer 702 and gyroscope 704 data should not show measurements of movement and/or orientation changes below the minimum threshold level continuously over the period of time, which means that the eye tracking subsystem 300 should be continuously active when the eyewear device 100 is worn. As described above with reference to figure 1, the eyewear device 100 may include further subsystems other than the eye tracking subsystem 300, and activation or deactivation of these subsystems is controlled by the scheduling module according to a set of rules. The set of rules may relate to conditions such as time of day, and environmental conditions including ambient light conditions, but may also include conditions based on eye-related data itself. In particular, the rules effectively take the form of one or more activation conditions or thresholds, specific to each subsystem, which must be met in order to activate or power-on that subsystem. The subsystems are configured to remain in an inactive or power-off mode until one or more or all of the activation conditions for that subsystem are satisfied. This means that the other subsystems may be configured not to record data continuously, to increase power and memory efficiency. Since the eye tracking subsystem 300 operates continuously whilst the eyewear device 100 is worn by the wearer, eye tracking data may be used as a source of data for comparing to certain activation conditions. The activation conditions for particular subsystems are explained in more detail later with reference to the description of the particular subsystems. The activation conditions are set to reflect good conditions for obtaining eye-related data from the subsystems.
[0173] The eye tracking subsystem 300 may therefore be used by the scheduling module to determine which of the other subsystems should be in the active mode or inactive mode. The scheduling module transitions the subsystems between active modes and inactive modes in real or near-real time, since the good conditions for obtaining eye-related data from the subsystems can quickly change. For this reason, the eye tracking data is initially processed on the processor 110 of the eyewear device 100, such that the scheduling module of the eyewear device 100 can locally use the eye tracking data in real or near real-time, to determine whether to activate or deactivate other subsystems of the eyewear device 100. [0174] Figure 11 shows a top view of a schematic diagram of the eye tracking subsystem 300 of the eyewear device 100 when worn by a wearer having a left eye L and a right eye R. The subsystem 300 includes the eye tracking cameras 408a, 408b, 408c, 408d as illustrated in Figure 4. However, it is to be understood that any configuration of the eye tracking cameras 308, such as that shown in Figure 5 or another configuration, such as a non-linear configuration, is equally applicable to the following description of Figure 11. The first left 408a, second left 408b, first right 408c and second right 408d eye tracking cameras 308 each have a field of view (FOV): a first left FOV 1102a, a second left FOV 1102b, a first right FOV 1102c and a second right FOV 1102d respectively. These FOVs preferably include a whole target eye of the user, whereby the target eye is the left eye 1150 for the left pair of cameras 408a, 408b, and the target eye is the right eye 1160 for the right pair of cameras 408c, 408d. However, the FOVs can be larger than the target eye or smaller. The eye tracking cameras 308 function in pairs, (the left pair and right pair), to capture stereo images by utilizing the overlap in the FOVs. In particular, the first left FOV 1102a overlaps with the second left FOV 1102b, and similarly the first right FOV 1102c overlaps with the second right FOV 1102d. The eye tracking cameras 308 of each pair (e.g. the first left camera and the second left camera for the left pair) capture respective images which are then processed to form a single image using stereo computer vision, with associated 3D information of each eye. In particular, the baseline, or distance between each eye tracking camera within the pair of cameras, is predetermined having been measured precisely and accurately. The two eye tracking cameras 308 of each pair simultaneously capture two images using the global shutter to ensure that the stereo images are captured at the same moment in time. Using two eye tracking cameras 308 per eye, to perform stereo vision, is advantageous over using one camera as it negates viewpoint error associated with determining a position of a feature when imaging a 3D object (an eye) from one viewpoint (one camera). In other words, using stereo computer vision increases the accuracy of the determination of locations of components of the eye, such as the pupil. [0175] The images captured by the eye tracking cameras 308 may be subject to a variety of pre or post processing steps. Generally, techniques for stereovision include some pre-processing, such as undistorting the captured images, such that barrel distortion and tangential distortion are removed. This ensures that the observed image matches the projection of an ideal pinhole camera.
[0176] Generally, techniques for stereovision include projecting the captured images back to a common plane to allow comparison of the image pairs in an image rectification step. An information measure which compares the two images is then minimized to obtain correspondence between the two images. This gives the best estimate of the position of common features in the two images, such as the pupil and the centre of the pupil, and creates a disparity map. The resulting disparity map includes depth information, which may then be used to project the data from the two images into a 3D point cloud. By utilizing the cameras' projective parameters, and the baseline, the point cloud can be computed such that it provides measurements at a known scale. In the examples described here, these techniques provide point cloud data showing the location of components of the eye such as the pupil.
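By way of illustration only, one conventional way of realising this rectification, disparity and reprojection chain is with the OpenCV library. The sketch below assumes that per-camera intrinsics (K1, K2), distortion coefficients (d1, d2) and the stereo extrinsics (R, T, with the translation magnitude being the measured baseline) are already available; it is not the specific implementation used by the eye tracking subsystem 300.

```python
# Illustrative stereo pipeline using OpenCV (a sketch, not the device firmware).
import cv2
import numpy as np


def stereo_to_point_cloud(img_left, img_right, K1, d1, K2, d2, R, T):
    h, w = img_left.shape[:2]
    # Rectification projects both images back onto a common plane.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, (w, h), R, T)
    map_l = cv2.initUndistortRectifyMap(K1, d1, R1, P1, (w, h), cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(K2, d2, R2, P2, (w, h), cv2.CV_32FC1)
    rect_l = cv2.remap(img_left, *map_l, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_right, *map_r, cv2.INTER_LINEAR)
    # Block matching minimises a dissimilarity measure to build the disparity map.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(rect_l, rect_r).astype(np.float32) / 16.0
    # Reproject the disparity to 3D; scale is metric because the baseline is known.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    return points_3d
```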
[0177] The information measure to be minimized may take on various forms. Features in the two images, such as corners, can be readily found in one image and then searched for in the other image. Other image processing techniques such as filtering, edge-detection, feature detection and the like may also be used to find correspondence between the two captured images for each pair of eye tracking cameras 308.
[0178] The images captured by the eye tracking cameras 308 correspond to light captured in the IR or NIR wavebands. Capturing images in these wavebands reduces the presence of glare and reflections that are found in the visible spectrum, meaning at least the sclera, iris and pupil of the eyes are clearer in the images. As such, each of the left pair of eye tracking cameras 408a, 408b, and the right pair of eye tracking cameras 408c, 408d capture, using stereovision, images of the left eye and the right eye of the wearer respectively, which are combined to determine 3D information regarding the location of components of the eyes such as the pupils. Since the cameras have a relatively high frame rate and a global shutter, the process of obtaining the images and the corresponding 3D information (e.g. point cloud data) for each of the left eye and the right eye of the wearer occurs rapidly, such that the images and associated 3D information from stereo vision are continuously recorded by the eye tracking subsystem 300. This data is stored to memory and associated with timing information such as time of capture. This may be done via any suitable method. For example, the data may be stored to memory and associated with a time-stamp or other timing metadata, in a data-structure such as a database or table. Movements or alterations of components of the eye can be tracked in real-time or near real-time across updates of the point cloud data. This enables the eye tracking subsystem 300 to compute and track the gaze of each eye of the wearer in a continuous manner, to obtain further eye tracking data.
[0179] Initial processing of the captured images is performed on the eyewear device 100, by the processor 110. The initial processing includes the following steps. Firstly, from the 3D information of each of the left and right eye, a location of a biological feature in each eye is determined, indicative of a direction of gaze of each eye. From this location data, an eye gaze vector for each eye is determined. The 3D information of the left eye is processed to determine a left eye gaze vector, and the 3D information of the right eye is processed to determine a right eye gaze vector. These eye gaze vectors indicate the direction of the gaze of each eye of the wearer. The left and right eye gaze vectors may converge at a point in 3D space. This convergence point is referred to as a 'gaze point' and indicates the point in 3D space the wearer is looking at. Each of the captured images, the locations of biological features of the eyes, the eye gaze vectors, and the gaze point may be recorded as eye tracking data on the memory of the eyewear device 100. This eye tracking data may also be further processed by the data processing module 202. The process of determining the eye gaze vectors and the gaze point of the eye gaze vectors is explained in more detail with reference to Figure 12.
[0180] Figure 12 shows a diagram of a method 1200 of determination of the eye-gaze vectors and the gaze point of a wearer of the eyewear device 100 having the eye tracking subsystem 300. This determination includes the following steps, which may collectively be referred to as a 'measurement' by the eye tracking subsystem 300.
[0181] At a first step 1202, the method 1200 includes capturing the images using each eye tracking camera of each pair of cameras, as explained above.
[0182] At a second step 1204, the image captured by a first eye tracking camera of a pair is combined with the image captured by a second eye tracking camera of the pair according to stereo computer vision techniques. This occurs for both the left and right pairs of cameras, to obtain stereo 3D information for each eye of the wearer.
[0183] At a third step 1206, a location of a biological feature of the eye is identified using the 3D information of each eye. The biological feature may be any feature or features of the eye, the location of which can provide information regarding the direction of gaze of the eye. The biological feature may be identified or detected in the captured images by any suitable means or technique, and may include the application of computer vision processes such as feature recognition, filtering, edge detection and the like. A pixel or group of pixels is determined to most likely correspond to the biological feature, in order to identify the biological feature in the 3D information. In an example, the biological feature is the centre of the pupil, and a determination of a pixel most likely to correspond to the centre of the pupil is made.
[0184] At a fourth step 1208, an eye gaze vector is determined from the identified biological feature of the eye. In particular, the position/orientation of the identified biological feature of the eye is indicative of the direction of the eye gaze vector of the eye. Taking the example of the centre of the pupil, the pixel location of the centre of the pupil is determined from the 3D information and the captured images. A pixel distance is then calculated between the pixel location of the centre of the pupil and a calibration pixel location from a calibration measurement. The calibration process is explained in more detail later, but in summary, the calibration process provides correspondence between the calibration pixel location and a known 3D angle of gaze. Thus, by calculating the pixel distance between the calibration pixel location and the current pixel location of the centre of the pupil, it is possible to correspondingly translate the known 3D angle of gaze from the calibration measurement to a current 3D angle (in x,y,z directions) according to the pixel distance. In a primitive example of this, a pixel distance of 1 in a particular direction (e.g. horizontal or vertical) may equate to a 1° angular change in that direction. Once the known angle of gaze of the calibration measurement is translated according to the pixel distance, and the current 3D angle determined, the eye gaze vector is formed by extrapolating the direction of the current 3D angle. By using the centre of the pupil as the biological feature, the eye-gaze vector is effectively a normal to the surface of the eye at the centre of the pupil.
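The primitive pixel-distance-to-angle translation of the fourth step can be sketched as follows. The constants and function names are illustrative assumptions (the real scale and axes come from the calibration process described later), not the device's actual implementation.

```python
# Sketch of the pixel-distance to gaze-angle translation described above.
import numpy as np

DEG_PER_PIXEL_X = 1.0   # primitive example: 1 pixel of horizontal offset ~ 1 degree
DEG_PER_PIXEL_Y = 1.0


def gaze_angles(pupil_px, calib_px, calib_angles_deg):
    """Translate the calibrated gaze angle by the measured pixel offset."""
    dx = pupil_px[0] - calib_px[0]
    dy = pupil_px[1] - calib_px[1]
    yaw = calib_angles_deg[0] + dx * DEG_PER_PIXEL_X
    pitch = calib_angles_deg[1] + dy * DEG_PER_PIXEL_Y
    return yaw, pitch


def gaze_vector(yaw_deg, pitch_deg):
    """Unit eye-gaze vector from yaw/pitch angles (x right, y forward, z up)."""
    yaw, pitch = np.radians([yaw_deg, pitch_deg])
    v = np.array([np.sin(yaw) * np.cos(pitch),
                  np.cos(yaw) * np.cos(pitch),
                  np.sin(pitch)])
    return v / np.linalg.norm(v)
```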
[0185] At a fifth step 1210, once the first to fourth steps above are performed with respect to each eye, and the left and right eye-gaze vectors are determined, the gaze point of the eye-gaze vectors is determined by extrapolating the eye-gaze vectors until they meet at a point in 3D space. If the eye-gaze vectors do not meet, the point of minimum distance between the eye-gaze vectors is calculated and identified as the gaze point.
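The gaze point determination of the fifth step is, geometrically, the closest-approach problem for two rays. A sketch of this computation is given below; the inputs (eye centres as ray origins and unit eye-gaze vectors) are assumed to come from the previous steps, and the function name is illustrative.

```python
# Sketch of the gaze-point computation: find where the left and right eye-gaze
# rays meet, or the midpoint of their shortest connecting segment if they do not.
import numpy as np


def gaze_point(o_left, d_left, o_right, d_right):
    """Return the midpoint of the shortest segment between the two gaze rays."""
    o_l, d_l = np.asarray(o_left, float), np.asarray(d_left, float)
    o_r, d_r = np.asarray(o_right, float), np.asarray(d_right, float)
    w0 = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if np.isclose(denom, 0.0):        # near-parallel gaze vectors: no convergence
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    p_l = o_l + s * d_l               # closest point on the left gaze ray
    p_r = o_r + t * d_r               # closest point on the right gaze ray
    return (p_l + p_r) / 2.0          # gaze point (exact intersection if rays meet)
```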
[0186] At a sixth step 1212, the gaze point in 3D space is recorded, as a distance and direction from the wearer, and the left and right eye-gaze vectors are also recorded. The gaze point corresponds to the captured image data, recorded at an instant or moment in time when the wearer was looking at a position in 3D space corresponding to the location of the gaze point. The gaze point is thus time dependent and is recorded according to timing information such as the time of capture t of the images by the pairs of eye tracking cameras 308, a frame number, or the like. As explained above, this may be recorded according to any suitable method, such as by using timestamps and a global or local clock signal, for example.
[0187] At a seventh step 1214, the above first to sixth steps are repeated for new image data, recorded at a time t+1, where t is the time at which the most recently previous images were captured, and +1 indicates the next iteration of image capture, as determined by the frame rate of the cameras and/or computational speed. This provides a second gaze point, recorded against the time t+1.
[0188] This process repeats as new image data is obtained from the eye tracking cameras 308, resulting in the recording of a continuous sequence of images, eye-gaze vectors, and gaze points. Since the gaze points are recorded according to the time at which the images are captured by the cameras, the continuous sequence of gaze points is chronological. This sequence of points is connected to form a 3D gaze path in 3D space, which illustrates how the direction of gaze of the wearer changes over a period of time. For example, data obtained over a period of time such as a minute results in a continuous sequence of gaze points, which forms a 3D gaze path indicative of where the wearer was looking over the minute period. The eye tracking subsystem 300 is therefore configured to determine, from images obtained from its eye tracking cameras 308, a 3D eye gaze path indicative of a wearer's vision behaviour.
[0189] The 3D gaze path recorded according to the above method may be adjusted or modified according to sensor data from the gyroscope 704 and accelerometer 702. Gyroscope and accelerometer readings may be used as image metadata attached to the originally captured eye tracking image data, to aid in orientation normalization and image registration. In turn, this can be used to adjust the 3D gaze path to account for changes in orientation or movement of the wearer's head, relative to a calibrated orientation. Usually, the calibrated orientation is determined with the wearer's head positioned upright and looking horizontally whilst at rest. The 3D gaze path can be calculated and continuously updated as data is received from the accelerometer 702 and gyroscope 704 (relating to changes in orientation of the head of the wearer). The adjusted 3D gaze path may be referred to as a global 3D gaze path, since the global 3D gaze path is consistent on a global level, irrespective of the orientation of the wearer's head. It is to be understood that reference to the 3D gaze path in the following description also applies to the global 3D gaze path.
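One way such an orientation normalisation could be realised is by rotating the head-frame gaze points into a global frame using the current head orientation. The sketch below uses SciPy's Rotation helper and assumes, purely for illustration, that the head orientation is available as a quaternion relative to the calibrated orientation; it is not the specific method used by the eye tracking subsystem 300.

```python
# Sketch of normalising gaze points to a global (head-independent) frame using
# an orientation estimate derived from the gyroscope, via scipy's Rotation helper.
# The calibrated orientation (head upright, looking horizontally) is the identity.
import numpy as np
from scipy.spatial.transform import Rotation as R


def to_global_frame(gaze_points_head, head_orientation_quat):
    """Rotate head-frame gaze points into the global frame.

    gaze_points_head: (N, 3) gaze points expressed relative to the head.
    head_orientation_quat: quaternion (x, y, z, w) of the current head pose
    relative to the calibrated orientation, e.g. integrated from gyroscope data.
    """
    rot = R.from_quat(head_orientation_quat)
    return rot.apply(np.asarray(gaze_points_head, dtype=float))


# Example: head yawed 30 degrees; a point straight ahead in the head frame maps
# to a point 30 degrees off the global forward axis.
pts = to_global_frame([[0.0, 2.0, 0.0]], R.from_euler("z", 30, degrees=True).as_quat())
```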
[0190] Since the 3D gaze path is computed from the gaze point of the two eye gaze vectors, the system and method do not require any reference plane during operation, or a screen or display. This means that the eye tracking subsystem 300 is capable of performing eye tracking in 3D without the need for a target or display.
[0191] The 3D gaze path is recorded as a chronological sequence of gaze points, wherein each gaze point of the path is associated with timing data indicating a time of capture, and a distance indicating a distance of the gaze point from the wearer. The distance may be measured from a point on the wearer such as the centre point between the eyes of the wearer. Each of the captured images, the eye-gaze vectors, the gaze point, the 3D gaze path and the accelerometer 702 and gyroscope 704 data may form the eye tracking data. This represents a continuous stream of data, the generation of which is performed by the processor 110 working through the method 1200 on the eyewear device 100, such that the eye tracking data may be used by the scheduling module to determine whether to activate other subsystems. All of the eye tracking data is stored in memory on the eyewear device 100 for transmission to the user device 204. Further processing, to determine a status of the wearer, is then performed on the data processing module 202, which may be on the user device 204 or on a separate computing device.
[0192] Figures 13a and 13b show a schematic diagram of the eyewear device 100 with a visualisation of the eye tracking data. In particular, figure 13a shows a visualisation of a left eye-gaze vector 1302L, a right eye-gaze vector 1302R, and a gaze point 1304, with respect to a wearer wearing the eyewear device 100. The images of the eyes are not shown. A midpoint 1306 of the left eye 1350 and the right eye 1360 of the wearer provides a point on the wearer from which to calculate distance to the gaze point 1304. Figure 13b shows a visualisation of the 3D global gaze path 1308, formed from connecting a path between chronological gaze points 1304a to 1304n. The distance to the gaze points from the midpoint 1306 is recorded as part of the 3D global gaze path, such that each gaze point 1304a to 1304n is associated with a distance of gaze 1310a to 1310n. In the example of figure 13b, the 3D global gaze path 1308 may correspond to a recorded 'smooth pursuit' ocular event.
[0193] As explained above, in the method of determining the eye gaze vectors, the gaze point, and the 3D gaze path, a calibration measurement is used as a reference (in terms of a calibration pixel location and a calibrated orientation). The calibration measurement is recorded in a separate calibration process or method. This process is explained with reference to Figures 14 and 15, which show geometric diagrams of the calibration process. The calibration process is performed to provide the reference data required for determining the eye-gaze vectors and the global 3D gaze path, and to reduce the presence of errors in the eye tracking data. The calibration process effectively involves the wearer testing the eye tracking subsystem 300 in a controlled environment of known geometries. Thus, in preparation for calibration, the controlled environment of known geometries is firstly set up.
[0194] Figure 14 shows a schematic example of a controlled environment 1400 for performing calibration of the eye tracking subsystem 300. The controlled environment 1400 includes a set of points 1402 arranged on a surface 1404 a predetermined distance D away from the wearer. The set of points 1402 are separated by predetermined distances and directions with respect to each other, such that the geometry of the set of points 1402 is known. The set of points may include any number of points. An increased number of points, for example 9 as shown in Figure 14, provides a more accurate calibration than fewer points, for example with 3 points. The set of points 1402 may be provided as an optometry tool, or the wearer may manually add the set of points 1402 to the surface 1404 at measured intervals. The surface 1404 may be any surface of a planar object, such as a wall.
[0195] To prepare for calibration, the wearer stands at the distance D away from the surface 1404, such that the wearer's eyes are also substantially at the distance D from the surface 1404. The wearer aligns themselves with a central point, referred to as a base gaze point 1402a, such that the base gaze point 1402a is directly ahead of the wearer along a horizontal line 1406 (perpendicular bisector) from the centre of the wearer's head. The base gaze point may however be any point whereby the distance from the point to the wearer's eyes is known or can be determined with substantial accuracy. In an example, the surface 1404 is a wall and the set of points 1402 are affixed to or overlaid on the wall. The distance D is arbitrary, and could be any suitable distance, for example 2 metres. The distance D to the wall is measured via any manual or automated process, and the wearer positions themselves such that they are substantially at the distance D from the wall. This may be done by marking the distance D from the wall, and the wearer standing on the resultant mark.
[0196] Figure 15 shows the geometry of the controlled environment 1400 when the wearer looks directly at the base gaze point 1402a from a central position, from the predetermined distance D away. The base gaze point 1402a is the point on the surface 1404 that is the distance D along a perpendicular bisector 1502 of a line 1504 that bisects the centre of the eyes 1550, 1560 of the wearer. Taking the centre-point of the line 1504 that bisects the centre of the eyes of the wearer as the origin, (0,0,0), this means that the base gaze point 1402a is at coordinates (0,D,0), in cartesian coordinates (x,y,z) where x is the axis colinear with the line 1504 that bisects the centre of the eyes of the wearer, y is perpendicular to x in the horizontal plane and z is the vertical axis. This arrangement is illustrated in Figure 15.
[0197] The distance between the centres of the eyes of the wearer, L, is also known. The distance L may be estimated from the dimensions of the eyeglasses of the eyewear device 100, for example as being the distance between the centre points of the left and right lenses 102. Alternatively, the distance L is measured using a manual or automated process. The centres of the eyes of the wearer are thus at (L/2,0,0) for the left eye 1550, and (-L/2,0,0) for the right eye 1560.
[0198] Therefore, in the controlled environment 1400, both D and L are known to the eye tracking subsystem 300, as well as the geometry of the set of points 1402. These parameters are recorded to memory. Given D and L, it is possible to calculate a first expected angle α between a line from the centre of each eye to the base gaze point and the normal line (perpendicular bisector 1502) from the base gaze point to the origin (0,0,0), whereby α = arctan(L/(2D)). Similarly, it is possible to calculate a second expected angle β, shown in figure 15 as the expected angle of gaze from each eye of the wearer, using β = arctan(2D/L), or 90° - α. The first expected angle α and/or the second expected angle β are thus also known and the controlled environment 1400 is set up and ready for calibration. It is to be understood that a similar process is undertaken for each point of the set of points 1402 in addition to the base gaze point 1402a, since the geometry of these points is also known, including their position relative to the base gaze point 1402a. As such, for each point of the set of points 1402, an expected distance and angle from the wearer to the point (with the wearer staying still) is known and can be compared against a series of calibration measurements for the purposes of performing the calibration. It is to be understood that the controlled environment 1400 is 'controlled' in that the geometries of the set of points 1402 and the distance of the set of points 1402 to the wearer are known or at least estimated to a reasonable degree of accuracy.
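For illustration, the expected angles follow directly from D and L, as in the short sketch below (function name and example values are illustrative only).

```python
# Sketch of the expected calibration angles from the known geometry
# (D: distance to the surface, L: distance between the eye centres).
import math


def expected_angles(D, L):
    alpha = math.degrees(math.atan2(L / 2.0, D))   # alpha = arctan(L / (2D))
    beta = 90.0 - alpha                            # beta  = arctan(2D / L)
    return alpha, beta


# Example with D = 2 m and L = 0.065 m: alpha ~ 0.93 degrees, beta ~ 89.07 degrees.
print(expected_angles(2.0, 0.065))
```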
[0199] The calibration process using the controlled environment 1400 is now described. The calibration process may be initiated manually by a user such as the wearer, or automatically, for example on power-on of the eyewear device 100. In either case, the calibration process involves the eyewear device 100 and the user device 204 communicating with each other to instruct the wearer to perform certain actions, and to record calibration measurements using the eyewear device 100. In the case where the calibration process is initiated automatically by action of the eyewear device 100, the eyewear device 100 enters a calibration mode and is configured to communicate with the user device 204 to trigger the user device 204 to launch a calibration feature on the user device 204. In the case where the calibration process is initiated manually, the user launches the calibration feature on the user device 204, and the user device 204 is configured to communicate with the eyewear device 100 to trigger the eyewear device 100 to enter the calibration mode. The calibration feature of the user device 204 may be a part of the application installed on the user device 204, configured to display or otherwise convey instructions to the wearer according to the calibration process and provide feedback when the instructions are completed. Geometric data of the controlled environment 1400 is input and stored in the calibration feature on the user device 204, including the distance D, the distance between the eyes of the wearer L and the angle α. Distances and angles to all of the set of points 1402 are also stored. This geometric data may be predetermined and prepopulated on the calibration feature on the user device 204 or may be manually input by a user such as the wearer.
[0200] With the eyewear device 100 in the calibration mode, the wearer is instructed, whilst wearing the eyewear device 100, to stand the predetermined distance D away from a surface having the set of points 1402 and to look straight ahead at the set of points on the surface such that the base gaze point 1402a is directly ahead. The wearer is optionally instructed, via the calibration feature of the user device 204, to keep the head position upright and looking ahead as a precondition of the calibration process. This ensures that the eyewear device 100 is horizontal. The orientation of the eyewear device 100 and thus the head of the wearer in adopting this position is measured and verified by the gyroscope 704. The calibration feature may provide feedback to the wearer via the user device 204, such as a sound, vibration or haptic feedback response when this orientation condition is met and confirmed by the gyroscope 704. The position of the points on the surface may be moved up or down to ensure that the wearer does not have to move their head in a tilted upwards or downwards position to view the points 1402.
[0201] The wearer is also optionally instructed, via the calibration feature of the user device 204, not to move their head as a further precondition of the calibration process. Movement of the eyewear device 100 and thus movement of the head of the wearer is measured by the accelerometer 702. The calibration feature may provide further feedback to the wearer via the user device 204, such as a sound, vibration or haptic feedback response when the movement of the head is below an acceptable threshold level or substantially zero, as confirmed by the accelerometer 702.
[0202] Once the position of the head is horizontal and steady as confirmed by the gyroscope 704 and the accelerometer 702, the orientation of the head of the wearer is saved as the calibration orientation, and the calibration process continues by taking calibration measurements for each of the set of points. The process of obtaining a calibration measurement is described here with respect to the base gaze point 1402a. The calibration feature may instruct the wearer, via any suitable signal, to look at the base gaze point 1402a, such that a calibration measurement can be recorded. For the calibration measurement, the left and right pair of eye tracking cameras 308 are controlled to capture images of each eye of the wearer to obtain calibration 3D information data of each eye. From the calibration 3D information data, a pixel corresponding to the centre of the pupil is determined, and its coordinates recorded. This pixel should equate to the centre of the pupil when the eye is looking at the base gaze point 1402a, with the angle of incidence of the gaze to the base gaze point 1402a equating to the first expected angle α with respect to the normal line 1502 as seen in figure 15. The recorded coordinates of the pixel at the centre of the pupil are associated in the calibration measurement with the second expected angle β (the angle of gaze). This process is performed for both the left and right eye calibration 3D information data, such that the pixel corresponding to the centre of the pupil for each eye is determined. The location of these pixels in the calibration 3D information data is recorded as calibration pixel locations. These calibration pixel locations are then associated with one or both of the first or second expected angles. For example, the calibration pixel locations are associated with β, the gaze angle of the eyes. It is to be understood that, due to unique differences between the left and right eyes, the calibration pixel location of the left eye may not directly mirror the calibration pixel location of the right eye.
[0203] Calibration eye gaze vectors are then calculated for the calibration measurement, and a calibration gaze point of the calibration eye gaze vectors is subsequently determined. Since the calibration pixel locations are assigned the known angle β for the base gaze point 1402a, the calibration eye gaze vectors and the calibration gaze point for the base gaze point 1402a represent the ideal scenario, whereby the gaze vectors follow the angle β and meet directly at the base gaze point 1402a.
[0204] The wearer is then instructed to look at a first additional point of the set of points 1402 without moving their head, to repeat this process for the first additional point. Since the first additional point of the set of points 1402 is at a different, known angle θ from the base gaze point 1402a, once the calibration pixel locations are determined for the first additional point, and assigned to the angle θ, the calibration process effectively has two sets of calibration pixel locations with associated angles, one set for the base gaze point 1402a and one for the first additional point. From this information, a pixel-to-angle ratio can be determined by effectively dividing the angular differential between β and θ by the distance between the calibration pixel locations for the base gaze point 1402a and the first additional point. The pixel-to-angle ratio forms a scale which allows the determination of any gaze angle from pixel information, and particularly a pixel distance from a reference point pixel of known gaze angle.
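The derivation of this scale can be sketched as below. The function name and example values are illustrative assumptions, not the device's actual calibration routine.

```python
# Sketch of deriving the pixel-to-angle scale from two calibration measurements
# (the base gaze point and one additional point), as described above.
import math


def pixel_to_angle_ratio(calib_px_base, angle_base_deg, calib_px_extra, angle_extra_deg):
    """Degrees of gaze change per pixel of pupil-centre displacement."""
    pixel_distance = math.dist(calib_px_base, calib_px_extra)
    angular_difference = abs(angle_extra_deg - angle_base_deg)
    return angular_difference / pixel_distance


# Example: the pupil centre moves 12 pixels between two points 5 degrees apart.
ratio = pixel_to_angle_ratio((320, 240), 89.1, (332, 240), 84.1)   # ~0.42 deg/pixel
```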
[0205] For further additional points of the set of points 1402, the wearer is instructed to look at each and every point of the set of points 1402 without moving their head. The calibration feature of the user device 204 may provide feedback once each calibration measurement, corresponding to each one of the set of points, is successfully recorded. For each of the further additional points, a calibration measurement is recorded, to determine calibration pixel locations for each of the left and right eye at each additional point. From these calibration pixel locations, calibration eye-gaze vectors and calibration gaze points are determined for the further additional points using the scale determined as explained above. Since the geometry of the set of points 1402 is known (the known dimensions of the controlled environment 1400, including the distance D, the angles to the set of points, and the distance L between the eyes of the wearer), an ideal measurement, including ideal gaze angles, eye-gaze vectors, and gaze points for each of the set of points 1402 is known. By comparing the calibration eye gaze vectors and gaze points, calculated from the calibration pixel locations and the scale, to their ideal counterparts, it is possible to determine errors for each further additional point, based on the difference between the calibration measurement and the ideal measurement.
[0206] In reality, the calibration eye gaze vectors may be independently erroneous in terms of the left and right eye gaze vectors, which may result in an erroneous gaze point found at a different position in the x, y and/or z dimensions compared to the location of the ideal gaze point (i.e. the point of the set of points that the wearer is looking at). In some cases, there may be no calibration gaze point at all if the calibration eye gaze vectors do not meet. These errors originate from the scale not being entirely accurate. To rectify this, the scale is iteratively adjusted for each eye based on the determined errors from the comparison of the ideal measurement and the calibration measurement, for each further additional point. In the case where there is no calibration gaze point due to the calibration eye gaze vectors not meeting, the global minimum distance between the lines of the calibration eye gaze vectors may be determined, and this global minimum taken as the calibration gaze point. The calculated error in the eye-gaze vectors and gaze points is stored as a 'calibration error' as part of the calibration measurement for each of the further additional points. The calculated error may then be used in a corrective function to compensate for the error when real measurements are acquired outside the controlled setting of the calibration process. In particular, the scale or ratio used for pixel location to gaze angle determinations is adjusted according to the corrective function. This function may be non-linear and/or vary in different dimensions.
[0207] When a real measurement is made after the calibration process, and the pixel corresponding to the centre of the pupil is found, the angle of gaze is calculated relative to a calibration pixel location, and its associated gaze angle, which form a reference point. The scale, corrected according to the corrective function, is then used to translate the distance between the pixel of the real measurement and the calibration pixel location to a gaze angle for the real measurement.
[0208] Using multiple points 1402 to calibrate the eye tracking subsystem 300 in this way improves the determination of the angle of the gaze of the wearer and the accuracy of the real measurements, and also provides more data regarding the errors found for each calibration measurement. The errors may vary across the set of points 1402, such that some points exhibit a large error (for example where calibration eye gaze vectors never converge), whilst some points may exhibit a low error (for example when the gaze point is near the point of the set of points 1402 that the wearer is looking at). The errors are recorded and associated with the point to which they correspond. Using this data, it is possible to map the calibration errors spatially, to determine how the error varies according to angles of gaze, specific calibration pixel locations, and distances with respect to the wearer. The spatial mapping of errors may be used to determine more accurate and/or smoother corrective functions, which may include higher order mathematical functions, for example.
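One possible way to obtain such a smoother corrective function from the spatially mapped errors is a least-squares polynomial surface fit, sketched below. The inputs (per-point gaze angles and measured angular errors) and the function names are assumptions for illustration; any other higher-order or non-linear fit could equally be used.

```python
# Sketch of fitting a smooth corrective function to the spatially mapped
# calibration errors as a second-order polynomial surface (numpy only).
import numpy as np


def fit_corrective_surface(yaw_deg, pitch_deg, error_deg):
    """Least-squares fit of error ~ f(yaw, pitch) over the calibration points."""
    x = np.asarray(yaw_deg, float)
    y = np.asarray(pitch_deg, float)
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(error_deg, float), rcond=None)
    return coeffs


def predicted_error(yaw, pitch, coeffs):
    """Evaluate the fitted corrective surface at a new (real) measurement."""
    return np.array([1.0, yaw, pitch, yaw * pitch, yaw**2, pitch**2]) @ coeffs


# Usage: subtract predicted_error(...) from a real gaze-angle measurement to
# compensate for the calibration error at that part of the gaze range.
```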
[0209] In some examples, the process of determining the corrective function or functions to be applied to real measurements may be performed using a machine-learning algorithm. In particular, a machine-learning algorithm may take as input, the determined calibration errors and/or the spatial mapping of the calibration errors, and output a corrective function for mitigating the calibration errors. The machine learning algorithm may be trained using training data linking calibration errors and/or spatial mapping of calibration errors to corrective functions. The machine learning algorithm may take the form of any suitable machine learning algorithm such as those explained later.
[0210] As a result of the calibration process for the set of points 1402, data recorded for each point is stored, including the calibration pixel location corresponding to the centre of the pupil, the angle assigned to it, and the error for the corrective function. By storing this calibration data, as well as the sensor data for the accelerometer 702 and gyroscope 704, the eye tracking subsystem 300 is calibrated, and any new, real measurements are made with reference to the calibration measurements, and gyroscope and accelerometer data. This allows new real measurements to be recorded according to the method 1200 as set out above with reference to figure 12. The data from these real measurements is corrected according to the corrective function or functions and the accelerometer and gyroscope data.
[0211] Once a real measurement is made according to this method 1200 of making measurements, data from the measurement is recorded and stored for further processing. In particular, any or all of the images, the eye gaze vectors, the gaze point, and/or the 3D gaze path may form the eye tracking data to be further processed. In an example, all of this data is stored as part of the eye tracking data. Each of these pieces of data is time-dependent, and is thus stored in association with the time of capturing the images. The gaze point, and thus the 3D gaze path and global 3D gaze path, are also recorded with respect to a distance from the wearer, whereby the distance may be determined as the average distance of the eye-gaze vectors, from the centre of the pupils to the gaze point. In other examples, the distance may be determined from a point on the wearer to the gaze point. The point on the wearer may be the midpoint of the line 1504 that bisects the centre of the eyes 1550, 1560, of the wearer as shown in figure 15.
[0212] In an example, the eyewear device 100 is configured to iteratively or periodically communicate with the user device 204 to send a payload of data corresponding to eye tracking data (or raw data for building the eye tracking data) recorded over a period of time. For example, eye tracking data may be stored on the memory of the eyewear device 100 over a predetermined period of time of 1 minute, 10 minutes, 30 minutes, an hour, 12 hours, 24 hours, or longer. The eye tracking data may then be sent, after the period of time has elapsed, to the user device 204. Alternatively, the eye tracking data may be 'pushed' to the user device 204 in response to a request received at the eyewear device 100 from the user device 204. The request may be automatically generated after the period of time or manually requested by the user of the user device 204 via an application installed on the user device 204. The application may be the same application that hosts the calibration feature of the user device 204 as set out above. Alternatively, the eye tracking data may be sent to the user device 204 according to the storage availability of the memory on the eyewear device 100, such that eye tracking data is sent before the memory on the eyewear device 100 becomes full. Upon sending the eye tracking data for a particular period of time to the user device 204, the eyewear device 100 is configured to erase the eye tracking data from its memory and/or overwrite it with new eye tracking data.
[0213] The eye tracking data is initially processed as explained above, from the raw images obtained by the cameras of the eye tracking subsystem 300 and the sensor data of gyroscope 704 and the accelerometer 702, to obtain the images, the two eye gaze vectors, the gaze point of the eye gaze vectors, and the 3D gaze path and 3D global gaze path. One or more parts of the eye tracking data is further processed to determine the status of the wearer.
[0214] The further processing of the eye tracking data takes, as an input, eye tracking data corresponding to one or more of the images, the eye gaze vectors, the 3D gaze path and the 3D global gaze path. This data is associated with timing information and is recorded over a period, such as a minute, an hour, a day, a week, a month or longer, which allows trends or patterns to be identified in the data. The status of the wearer is the output of the further processing. The further processing may be performed by any suitable method, examples of which are provided later.
[0215] The functionality of the fundus imaging subsystem 800 will now be described. The fundus imaging cameras 808 of the fundus imaging subsystem 800 have properties such as a much higher resolution, and the ability to capture images at different wavelengths, which require more power when compared to the eye tracking cameras. Furthermore, unlike the eye tracking subsystem 300, wherein eye tracking data is recorded continuously as it may change continuously, it is not necessary to record fundus imaging data continuously because changes in the fundus of the eyes of the wearer are generally more gradual. For these reasons, the fundus imaging subsystem 800 is configured not to operate continuously, to save power onboard the eyewear device 100.
[0216] The fundus imaging subsystem 800 operates periodically or intermittently, such that the fundus imaging subsystem 800 is activated for a period of time, in which it records fundus imaging data including images of the fundus of the eyes of the wearer, and is deactivated for the remaining time, such that no fundus imaging data is recorded. These two modes of operation are referred to as an active mode of the fundus imaging subsystem 800 when fundus imaging data is being captured and recorded, and an inactive mode of the fundus imaging subsystem 800 when fundus imaging data is not being captured and recorded. In the inactive mode, power supply to the fundus imaging cameras 808 and the fundus illumination sources may be suspended or stopped to save energy.
[0217] A determination of when to switch the fundus imaging subsystem 800 from the inactive mode to the active mode, or vice versa, is made by the scheduling module. As described above, the scheduling module operates according to a set of rules and particularly activation conditions, to determine whether to activate and thus power-on a subsystem. With respect to the fundus imaging subsystem 800, a first activation condition includes eye alignment. Each eye of the wearer should be in a position such that it is substantially aligned with (looking towards) the fundus imaging cameras 808. Determination of eye alignment is made by the scheduling module, from eye tracking data. In an example, the location of the fundus imaging cameras 808 on the frame 104 of the eyewear device 100 is known, and from this location information, an estimated gaze angle (e.g. estimated gaze vector or other eye-gaze parameter) corresponding to alignment between the eyes of the wearer and the fundus imaging cameras 808 can be determined. The eye tracking data is processed locally on the processor of the eyewear device 100, and is analysed by the scheduling module to determine whether a current eye-gaze vector from eye tracking data matches the estimated eye-gaze vector corresponding to alignment. The first activation condition may be met when the current eye-gaze vector is within a threshold difference from the estimated eye-gaze vector corresponding to alignment. For example, the threshold difference may be between 0.1 and 5 degrees in each dimension of the 3D angle corresponding to the current eye-gaze vector. Any measure of vector similarity may also be used to determine the similarity between the estimated eye-gaze vector and the current eye-gaze vector, such as the dot product or cosine similarity. In a further alternative example, the threshold difference may be a maximum number of pixels (i.e. a pixel distance) between a pixel corresponding to the estimated eye-gaze vector and the pixel relating to the same feature of the eye (e.g. the centre of the pupil) in the current eye tracking data.
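As an illustration, the cosine-similarity variant of this alignment check could look like the following sketch. The function name and the default tolerance are assumptions, not values taken from the specification.

```python
# Sketch of the eye-alignment activation condition: compare the current eye-gaze
# vector against the estimated "aligned" vector using cosine similarity, with a
# small angular tolerance.
import numpy as np


def alignment_condition_met(current_gaze_vec, aligned_gaze_vec, threshold_deg=2.0):
    """True when the current gaze is within threshold_deg of camera alignment."""
    a = np.asarray(current_gaze_vec, float)
    b = np.asarray(aligned_gaze_vec, float)
    cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    angle_deg = np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))
    return angle_deg <= threshold_deg
```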
[0218] A second activation condition of the fundus imaging subsystem 800 includes a time condition, such as a time of day, week, month, or a time elapsed since previous fundus imaging data was obtained. The second activation condition including the time condition may be met when a current time of day exceeds the time condition, or when a time elapsed since previous fundus imaging data was obtained exceeds the time condition. Time data used to compare against the time condition may be obtained from a global clock (typically from the user device, since the eyeglasses may communicate with the user device to synchronise with the global clock to ensure eye-related data is recorded against correct timing information) or any other source of timing information data. For example, the time condition may be to capture new fundus imaging data once a week. In this example, a time elapsed is recorded and incremented from timing information data until the time elapsed exceeds a one-week threshold. Upon exceeding this threshold, the second activation condition (the time condition) is met and data indicating that the second activation condition has been met is provided to the scheduling module.
[0219] Once each of the first and second activation conditions of the fundus imaging subsystem 800 are met, the scheduling module is configured to transition the fundus imaging subsystem 800 to the active mode. Once fundus imaging data has been subsequently captured and recorded in the active mode, the fundus imaging subsystem 800 is configured to transition back to the inactive mode. At this point, the time condition of the second activation condition for the fundus imaging subsystem 800 resets (for example by resetting the elapsed time to zero), meaning the second activation condition is no longer met. The process then repeats, and the fundus imaging subsystem 800 transitions to the active mode to capture and record fundus imaging data once the activation conditions are met again.
[0220] In this manner, the fundus imaging subsystem 800 switches between an inactive mode and an active mode to intermittently capture and record the fundus imaging data. Using both the first activation condition, relating to eye alignment, and the second activation condition, relating to the time condition, ensures that the scheduling module turns the fundus imaging subsystem 800 on to record fundus imaging data at the optimal moment, when the eyes of the wearer are aligned with the fundus imaging cameras 808. This in turn ensures that the fundus imaging subsystem 800 operates efficiently in terms of power, memory, and computations, since it does not operate continuously, and also ensures that, when the fundus imaging subsystem 800 is in the active mode, it is more likely to capture useful fundus imaging data due to the eye alignment condition.
[0221] The activation conditions for switching the fundus imaging subsystem 800 between the active and inactive modes may be set, and compared against relevant data (such as timing information data for the second activation condition), by the scheduling module. Further activation conditions for the fundus imaging subsystem 800 may also be used to determine whether to activate the fundus imaging subsystem 800. For example, a quality measure for previously acquired fundus imaging data may be used to determine whether recently acquired fundus imaging data is of adequate quality before transitioning the fundus imaging subsystem 800 to the inactive mode. If the recently acquired fundus imaging data is deemed not adequate, the fundus imaging subsystem 800 may not transition to the inactive mode and may instead repeat a capture process to obtain adequate fundus imaging data.
[0222] It is to be understood that the scheduling module may activate and deactivate the fundus imaging cameras independently. In particular, the first activation condition, relating to eye alignment, may be considered separately by the scheduling module for each of the left fundus imaging camera 808a and the right fundus imaging camera 808b, by comparing left and right eye-gaze parameters (such as eye-gaze vectors, pixel locations etc.) against predicted left and right eye-gaze parameters corresponding to the fundus imaging cameras. The scheduling module may determine that the first activation condition is met for the left fundus imaging camera 808a but not for the right fundus imaging cameras 808b, for example. In this case, the left fundus imaging camera 808a is activated but the right fundus imaging camera 808b is not. In this manner the 'active mode' of the fundus imaging subsystem 800 may be an active mode for each independent fundus imaging camera, whereby each fundus imaging camera has an active and inactive mode, the transition to and from which is controlled by the scheduling module.
[0223] Alternatively, the fundus imaging subsystem 800 is activated and deactivated as a whole, for example once both left and right eye-gaze parameters meet the first activation condition. In a further example, the scheduling module may only consider the first activation condition with respect to one eye and its corresponding eye tracking data to reduce computational burden.
[0224] In the active mode, the fundus imaging cameras 808 are active and capturing, and the fundus illumination sources are powered and illuminate the fundus of the eyes of the wearer. Depending on the configuration of the fundus imaging cameras 808, the image capture process occurs in one of three ways. In a first example, the fundus imaging cameras 808 capture images at a single wavelength or waveband at the same time, in a global shutter mode. The captured images of this first example correspond to images of a single layer of the fundus such as a single layer of the retina. In a second example, the fundus imaging cameras 808 capture images in a sequence of varying wavelengths, wherein each capture in the sequence is sensitive to a different wavelength. This provides a plurality of captured images corresponding to different layers of the fundus based on the wavelengths the captured images were captured at. In this second example, the fundus imaging cameras 808 operate in a global shutter mode. In a third example, the fundus imaging cameras 808 are multispectral or hyperspectral sensors, which capture images at various wavelengths in a single capture. In this example the fundus imaging cameras capture the images corresponding to various wavelengths at the same time in a global shutter mode. The captured images according to any of the first, second or third examples described here form fundus imaging data. After capture and recording of the fundus imaging data, the scheduling module transitions the fundus imaging subsystem 800 to the inactive mode, in which the fundus imaging cameras 808 and the fundus illumination sources are turned off. The fundus imaging data is then transmitted to the user device, and then to the data processing module, for further processing to determine the status of the wearer. The fundus imaging data may include data regarding various structures of the fundus including the retina, optic disc, and macula.
[0225] In an additional example, a plurality of captured images of the fundus within a time band may be combined using a photogrammetry process. In particular, a set of captured fundus images for each eye, wherein each image of the set may show the eye at a different orientation (from a different relative viewpoint), is combined using photogrammetry. To ensure that the captured images of the set used in the photogrammetry or stereo computer vision process relate to the eye at substantially the same moment in time, the set of fundus images is firstly selected from the raw captured images according to a time band, which may be a second, a minute, or another suitable time band over which substantial structural changes in the fundus are unlikely to have taken place. The captured images are associated, via metadata or otherwise, with timing information such as time of capture, which is used to determine whether a recorded fundus image falls within a particular time band. The respective sets of captured images include at least a minimum number of images, whereby the minimum number of images is set such that the likelihood of good coverage of the fundus is high. Once at least the minimum number of captured images within a time band has been recorded, the set of fundus images is created and processed. The photogrammetry process may be used to form an extended image from the set of fundus images, by effectively stitching the set of fundus images together, to improve the resolution over a wider area of the fundus. Furthermore, the photogrammetry process may be used to form point cloud data for each eye. Once the point cloud data is obtained, a meshing process fills the point cloud data with texture data, relating to the light detected in the captured images associated with the locations in the point cloud data.
[0226] The photogrammetry process used to obtain the point cloud data from the set of captured images of the fundus imaging data may be any suitable method, examples of which are described later with respect to the functionality of the eye imaging subsystem 1000. Furthermore, if there are two or more fundus imaging cameras per eye, a stereo computer vision process or stereophotogrammetry may be used. The photogrammetry or stereo computer vision process may further include conventional image processing tasks such as feature detection, edge detection, kernel application and masking to identify the common points between two or more captured images of a set, as will be understood. In this example, the first activation condition, relating to eye alignment with the fundus imaging cameras, may not be required. Additional image processing tasks may be used when forming the extended image or point cloud data of each eye.
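The time-band selection that precedes the photogrammetry step could be sketched as below. The constants (minimum set size, time band) and function name are illustrative assumptions only; the actual values depend on the fundus imaging configuration.

```python
# Sketch of selecting a set of fundus images within a time band before the
# photogrammetry step (timestamps are assumed to be attached as capture metadata).
from datetime import timedelta

MIN_IMAGES = 5                    # minimum set size for adequate fundus coverage
TIME_BAND = timedelta(minutes=1)  # window over which the fundus is assumed static


def select_time_band(captures):
    """captures: list of (timestamp, image) tuples sorted by timestamp.

    Returns the first set of at least MIN_IMAGES captures whose timestamps all
    fall within TIME_BAND of the earliest capture in the set, or None.
    """
    for i, (t0, _) in enumerate(captures):
        window = [img for (t, img) in captures[i:] if t - t0 <= TIME_BAND]
        if len(window) >= MIN_IMAGES:
            return window
    return None
```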
[0227] The further processing of the fundus imaging data takes, as an input, the fundus imaging data including one or more of the captured images of the fundus of the eyes of the wearer, or the extended image or point cloud data if a photogrammetry process is used. This data is associated with timing information such as time of capture, which allows trends or patterns to be identified in repeat readings from the fundus imaging subsystem 800 over a period of time. The status of the wearer is the output of the further processing. The further processing may be performed by any suitable method, examples of which are provided later.
[0228] The functionality of the eye imaging subsystem 1000 will now be described. The eye imaging cameras 1008 of the eye imaging subsystem 1000 have properties such as a much higher resolution when compared to the eye tracking cameras, which means they require more power to operate. Furthermore, unlike the eye tracking subsystem 300, wherein eye tracking data is recorded continuously as it may change continuously, it is not necessary to record iris imaging data continuously because changes in the iris, pupil, or sclera of the eyes of the wearer are generally more gradual. For these reasons, the eye imaging subsystem 1000 is configured not to operate continuously, to save power onboard the eyewear device 100.
[0229] The eye imaging subsystem 1000 operates periodically or intermittently, such that the eye imaging subsystem 1000 is activated for a period of time, in which it records iris imaging data including images of the iris, pupil, cornea and sclera of the eyes of the wearer, or a portion thereof, and is deactivated for the remaining time, such that no iris imaging data is recorded. These two modes of operation are referred to as an active mode of the eye imaging subsystem 1000 when iris imaging data is being captured and recorded, and an inactive mode of the eye imaging subsystem 1000 when iris imaging data is not being captured and recorded. In the inactive mode, power supply to the eye imaging cameras 1008 may be suspended or stopped to save energy.
[0230] A determination of when to switch the eye imaging subsystem 1000 from the inactive mode to the active mode, or vice versa, is made by the scheduling module. As described above, the scheduling module operates according to a set of rules and particularly activation conditions or thresholds, to determine whether to activate and thus power-on a subsystem. With respect to the eye imaging subsystem 1000, a first activation condition includes eye alignment. This eye alignment condition is similar to the corresponding eye alignment condition of the fundus imaging subsystem 800 as discussed above, whereby each eye of the wearer should be in a certain position such that it is aligned with (looking towards) a specific direction. However, whereas the eye alignment for the fundus imaging subsystem 800 refers to an eye alignment with a direction substantially towards the fundus imaging cameras, the alignment for the eye imaging subsystem 1000 is alignment with a variable direction that is not necessarily in the direction of the eye imaging cameras 1008. Due to the focal length of the eye imaging cameras and the three-dimensional, round nature of the surface and near-surface of the human eye, it is possible that sometimes portions of the iris, cornea, and/or sclera (for example features at the edges of captured images) may be out of focus when imaged by the eye imaging cameras. To address this potential issue, the first activation condition may be triggered when the eye alignment is a different alignment compared to alignments of a number of previously captured iris imaging data. In other words, the first activation condition of the eye imaging subsystem 1000 may be met when the eyes of the wearer are looking in a direction that is different from directions associated with a number of previously recorded iris imaging data. Passing this condition to activate the eye imaging subsystem 1000 ensures that a different portion of the iris, pupil and/or sclera is the focus of the eye imaging cameras at each capture, compared to a number of previous captures. These different portions may then be combined together (across multiple captures) to form the 3D model of the eye. The number of previous captures to compare the eye alignment against (to determine whether the alignment is different) may be an arbitrary number, but should be selected such that captures of the same portions of the iris, sclera, and/or cornea, or portions thereof, are not prohibited over extended periods of time.
[0231] Determination of eye alignment for the first activation condition is made by the scheduling module, from eye tracking data. In an example, the eye tracking data is processed locally on the processor of the eyewear device 100.
[0232] The first activation condition may be triggered when a current eye-gaze parameter (such as a current eye-gaze vector) is substantially different from previous eye-gaze parameters (such as previous eye-gaze vectors) for which respective previous iris imaging data was captured. To determine this, the scheduling module may compare the current eye-gaze parameter, such as the eye-gaze vector or a pixel location, to a set of previous eye-gaze parameters corresponding to previously captured iris imaging data. To trigger the first activation condition, the current eye-gaze parameter should be 'substantially different' from the set of recent eye-gaze parameters in that the new eye-gaze parameter data differs from the recent eye-gaze parameters by at least a minimum threshold level of difference. The comparison between the current eye-gaze parameter and the set of recent eye-gaze parameters may be performed using any suitable similarity measure, for example the dot product or cosine similarity for comparing eye-gaze vectors. In a further alternative example, the minimum threshold level of difference may be a minimum number of pixels (i.e. a pixel distance) between a pixel corresponding to the current eye-gaze vector and the pixels relating to the same feature of the eye (e.g. the centre of the pupil) in the set of previous eye tracking data.
[0233] The minimum threshold level of difference may be modified and set according to the capabilities of the eye imaging cameras and according to power and computational efficiency considerations. On the one hand, the lower the minimum threshold level of difference, the more often the first activation condition is triggered, meaning the more iris imaging data is captured, potentially increasing the amount of useful iris imaging data that can be combined to form the 3D model of each eye. On the other hand, activating the eye imaging cameras more often can increase computational burden and deplete power of the eyewear device 100 more rapidly. In an example, the minimum threshold level is set based on the characteristics of the eye imaging cameras. In particular, the minimum threshold level may be based on the capability of the eye imaging cameras to capture a focused image of the eye, and the effective boundaries of this focused image before image quality decreases at the edges of the image. An area of focused image or associated angular span may be translated to a corresponding change in eye-gaze parameter, which may then be assigned as the minimum threshold level. In this manner, the minimum threshold level may be set to ensure that the eye imaging cameras capture current iris imaging data that is rich and at a wide variety of angles with respect to the eye to enable formation of the 3D model of the eye, but without unnecessary overlap between portions of the current iris imaging data and previously captured iris imaging data. This maximises the efficiency of the eye imaging subsystem 1000.
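A sketch of the 'substantially different alignment' check for eye-gaze vectors is given below. The function name and the default minimum angular difference are illustrative assumptions; the same idea applies to pixel-distance comparisons.

```python
# Sketch of the "substantially different alignment" check for the eye imaging
# subsystem: the current eye-gaze vector must differ from every recently used
# gaze vector by at least a minimum angular threshold before a new capture.
import numpy as np


def is_new_viewpoint(current_vec, previous_vecs, min_diff_deg=10.0):
    """True when the current gaze differs enough from all previous capture gazes."""
    c = np.asarray(current_vec, float)
    c = c / np.linalg.norm(c)
    for prev in previous_vecs:
        p = np.asarray(prev, float)
        p = p / np.linalg.norm(p)
        angle = np.degrees(np.arccos(np.clip(np.dot(c, p), -1.0, 1.0)))
        if angle < min_diff_deg:
            return False
    return True
```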
[0234] A second possible activation condition of the eye imaging subsystem 1000 includes a time condition, such as a time of day, week, month, or a time elapsed since previous iris imaging data was obtained (a time period). The second activation condition including the time condition may be met when a current time of day exceeds the time condition, or when a time elapsed since previous iris imaging data was obtained exceeds the time condition. Time data used to compare against the time condition may be obtained from a global clock (typically from the user device, since the eyeglasses may communicate with the user device to synchronise with the global clock and ensure eye-related data is recorded against correct timing information) or any other source of timing information data. For example, the time condition may be to capture new iris imaging data once a week. In this example, a time elapsed is recorded and incremented from timing information data until the time elapsed exceeds a one-week threshold. Upon exceeding this threshold, the second activation condition (the time condition) is passed and data indicating that the second activation condition has been met is provided to the scheduling module.
[0235] A third possible activation condition of the eye imaging subsystem 1000 includes an environmental condition such as an ambient light level condition. The ambient light condition may include minimum and/or maximum light intensity thresholds which must be met to activate the eye imaging subsystem 1000. The ambient light is measured using the first ambient light sensor 1020a and the second ambient light sensor 1020b to provide an ambient light level for each eye of the wearer. It is to be understood that, alternatively, a single ambient light sensor may be used for ambient light levels at both eyes. The minimum light intensity threshold may be set based on a minimum visible light level required to obtain good quality iris imaging data. Similarly, a maximum light intensity threshold may be set based on the maximum light level before the quality of iris imaging data degrades. The first and second ambient light sensors may function continuously such that they can be used by the scheduling module regardless of the state (active or inactive mode) of the eye imaging cameras. Alternatively, the ambient light sensors may themselves only be activated once a prior activation condition, such as the second activation condition (the time condition), is met, in order to save power. In this way, ambient light may only be considered once the time condition is met.
[0236] Any combination of the first, second and third activation conditions may be used to determine when to activate or deactivate the eye imaging subsystem 1000. Once the particular combination of activation conditions of the eye imaging subsystem 1000 is met, the scheduling module is configured to transition the eye imaging subsystem 1000 to the active mode. Once iris imaging data has been subsequently captured and recorded in the active mode, the eye imaging subsystem 1000 is configured to transition back to the inactive mode. At this point, the time condition of the second activation condition for the eye imaging subsystem 1000 resets (for example by resetting the elapsed time to zero), meaning the second activation condition is no longer met. The process then repeats, and the eye imaging subsystem 1000 transitions to an active mode to capture and record iris imaging data once the activation conditions are met again.
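As a minimal, non-limiting sketch of how a scheduling module might combine the first (gaze novelty), second (time) and third (ambient light) activation conditions and reset the time condition after a capture, the following Python fragment is offered; the class name, the light thresholds, and the one-week interval are assumptions for illustration only.

```python
# Minimal sketch of combining the time, ambient light and gaze novelty
# activation conditions. All names, thresholds and the data layout are
# assumptions; they do not reflect required values of the eyewear device.
from dataclasses import dataclass

ONE_WEEK_S = 7 * 24 * 3600

@dataclass
class EyeImagingScheduler:
    min_lux: float = 100.0
    max_lux: float = 10_000.0
    capture_interval_s: float = ONE_WEEK_S
    last_capture_time_s: float = 0.0
    active: bool = False

    def should_activate(self, now_s, ambient_lux, gaze_is_novel):
        time_ok = (now_s - self.last_capture_time_s) >= self.capture_interval_s
        light_ok = self.min_lux <= ambient_lux <= self.max_lux
        return time_ok and light_ok and gaze_is_novel

    def on_capture_recorded(self, now_s):
        # Reset the time condition and return to the inactive mode.
        self.last_capture_time_s = now_s
        self.active = False

scheduler = EyeImagingScheduler()
if scheduler.should_activate(now_s=ONE_WEEK_S + 60, ambient_lux=500.0, gaze_is_novel=True):
    scheduler.active = True          # transition to active mode and capture images
    scheduler.on_capture_recorded(now_s=ONE_WEEK_S + 65)
print(scheduler.last_capture_time_s)
```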
[0237] In this manner, the eye imaging subsystem 1000 switches between an inactive mode and an active mode to periodically or intermittently capture and record the iris imaging data. Using any of the first, second and/or third activation conditions set out above ensures that the scheduling module turns the eye imaging subsystem 1000 on to record iris imaging data at the optimal moment for capturing the iris imaging data. This in turn ensures that the eye imaging subsystem 1000 operates efficiently in terms of power, memory, and computations, since it does not operate continuously, and also ensures that, when the eye imaging subsystem 1000 is in the active mode, it is more likely to capture useful iris imaging data due to the eye alignment and lighting conditions.
[0238] The activation conditions for switching the eye imaging subsystem 1000 between the active and inactive modes are set by the scheduling module and compared against relevant data (such as timing information data for the second activation condition). Further activation conditions for the eye imaging subsystem 1000 may also be used to determine whether to activate the eye imaging subsystem 1000. For example, a quality measure for previously acquired iris imaging data may be used to determine whether recently acquired iris imaging data is of adequate quality before transitioning the eye imaging subsystem 1000 to the inactive mode. If the recently acquired iris imaging data is deemed not adequate, the eye imaging subsystem may not transition to the inactive mode and may instead repeat a capture process to obtain adequate iris imaging data.
[0239] It is to be understood that the scheduling module may activate and deactivate the eye imaging cameras independently of the rest of the eye imaging subsystem 1000 and may further activate and deactivate the eye imaging cameras independently of each other. In particular, the first activation condition, relating to eye alignment, may be considered separately by the scheduling module for each of the eye imaging cameras 1008a, 1008b, 1008c, 1008d, or for each pair of eye imaging cameras, by comparing left and right eye-gaze parameters (such as eye-gaze vectors, pixel locations etc.) against predicted eye-gaze parameters corresponding to the eye imaging cameras. The scheduling module may determine that the first activation condition is met for the first eye imaging camera 1008a but not for the third eye imaging camera 1008c, for example. In this case, the first eye imaging camera 1008a is activated but the third eye imaging camera 1008c is not. The same logic may apply to any of the eye imaging cameras or to the first and second pairs of eye imaging cameras. In this manner, the 'active mode' of the eye imaging subsystem 1000 may be an active mode for each independent eye imaging camera, or pairs thereof, whereby each eye imaging camera has an active and inactive mode, the transition to and from which is controlled by the scheduling module.
[0240] Alternatively, the eye imaging subsystem 1000 is activated and deactivated as a whole, for example once both left and right eye-gaze parameters meet the first activation condition at a global level, for each of the first and second pairs of eye imaging cameras. In a further example, the scheduling module may only consider the first activation condition with respect to one eye and its corresponding eye tracking data to reduce computational burden.
[0241] In the active mode, the eye imaging cameras 1008 are active and capture images of the iris, cornea and/or sclera in the visible spectrum, which form the iris imaging data. The eye imaging cameras 1008 capture a plurality of such images from different viewpoints, with respect to both the left and the right eyes of the wearer. After capture and recording of the iris imaging data, the scheduling module transitions the eye imaging subsystem 1000 to the inactive mode, in which the eye imaging cameras 1008 are turned off. The iris imaging data is transmitted to the user device, and then to the data processing module, for processing of the iris imaging data and then further processing to determine the status of the wearer.
[0242] The post-processing of the raw captured images of the iris imaging data is performed at the data processing module. This processing of the raw data involves the application of a photogrammetry or stereo computer vision process in order to combine a set of the raw captured images to form a 3D model of each eye of the wearer. In particular, a respective set of captured images of each eye, wherein the images of the set capture the eye at different eye-gaze parameters and thus at different orientations, is combined using photogrammetry, such as stereophotogrammetry, in order to determine point-cloud data that is used to form a 3D model of each eye. To ensure that the captured images of the set used in the photogrammetry or stereo computer vision process relate to the eye at substantially the same moment in time, the set of images is first selected from the raw captured images according to a time band, which may be a second, a minute, or another suitable time band over which substantial structural changes in the iris and sclera are unlikely to have taken place. The captured images are associated, via metadata or otherwise, with timing information such as time of capture, which is used to determine whether a recorded iris image falls within a particular time band. The respective sets of captured images include at least a minimum number of images, whereby the minimum number of images is set such that the likelihood of a good coverage of the pupil, iris and sclera in the point cloud data is high. Once at least the minimum number of captured images within a time band have been recorded, the set of images is created and processed to form the point cloud data for each eye. Once the point cloud data is obtained, a meshing process fills the point cloud data with texture data, relating to the light detected in the captured images associated with the locations in the point cloud data. For example, the texture data may be RGB data from RGB images captured by the eye imaging cameras.
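A minimal sketch of the time-band selection step described above is given below, assuming a one-minute band and a minimum of eight images per set; the data layout and the thresholds are illustrative assumptions rather than required values.

```python
# Illustrative sketch: grouping captured iris images into time bands and only
# forming a reconstruction set once a minimum number of images falls within
# the same band. Field names and thresholds are assumptions.
from collections import defaultdict

def select_image_sets(captures, band_s=60.0, min_images=8):
    """captures: iterable of (timestamp_s, image) pairs.
    Returns a list of image sets, one per time band holding enough images."""
    bands = defaultdict(list)
    for timestamp_s, image in captures:
        bands[int(timestamp_s // band_s)].append(image)
    return [images for images in bands.values() if len(images) >= min_images]

# Example: ten images captured within the same minute form one set.
captures = [(t, f"img_{t}") for t in range(0, 50, 5)]
print(len(select_image_sets(captures, band_s=60.0, min_images=8)))  # 1
```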
[0243] The photogrammetry or stereo computer vision process used to obtain the point cloud data from the set of captured images of the iris imaging data may be any suitable method. For example, a photogrammetry algorithm may be used to attempt to minimize the sum of the squares of errors over coordinates and relative displacements of identified common reference points within at least two of the set of images. This minimization is known as bundle adjustment and may be performed using the Levenberg-Marquardt algorithm. In stereophotogrammetry, the process calculates 3D coordinates of points on the eye by using measurements made in two or more of the captured images, either taken from different positions (e.g. cameras of the same pair such as the first and second eye imaging cameras) or taken when the eye is at different orientations (e.g. a different eye-gaze vector). Common points are identified in each of these captured images. A line of sight can be constructed from the camera location to the point on the eye. The intersection of these rays (i.e. triangulation) is then used to determine the three-dimensional location of the point on the eye. Using a pair of eye imaging cameras per eye is beneficial as it increases the number of images acquired from different viewpoints of the eye, which means that fewer different orientations of the eye are required. It also increases the accuracy of the photogrammetry process, as there are two fixed viewpoints, relative to each other, per eye. Ultimately these benefits lead to a decrease in the required capture time for obtaining the iris imaging data, which improves the efficiency of the eye imaging subsystem 1000.
[0244] The photogrammetry or stereo computer vision process may further include conventional image processing tasks such as feature detection, edge detection, kernel application and masking to identify the common points between two or more captured images of a set, as will be understood. Additional image processing tasks may be used when using the point cloud data of each eye to form the 3D model complete with texture data.
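The triangulation step referred to above may be illustrated by the following sketch, which finds the closest approach of two lines of sight from two camera positions; the camera geometry is invented purely for the example and does not reflect the actual camera placement of the eyewear device.

```python
# Illustrative sketch of the triangulation step: finding the closest approach
# of two lines of sight from two camera positions to recover the 3D location
# of a common point on the eye surface. The geometry is invented.
import numpy as np

def triangulate(cam1_pos, dir1, cam2_pos, dir2):
    """Return the midpoint of the closest approach of two rays."""
    p0, u = np.asarray(cam1_pos, float), np.asarray(dir1, float)
    q0, v = np.asarray(cam2_pos, float), np.asarray(dir2, float)
    w0 = p0 - q0
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b          # ~0 only if the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return ((p0 + s * u) + (q0 + t * v)) / 2.0

# Two cameras 10 mm apart, both looking at a point ~20 mm in front of them.
point = triangulate((0, 0, 0), (0.25, 0, 1), (10, 0, 0), (-0.25, 0, 1))
print(np.round(point, 3))  # [ 5.  0. 20.]
```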
[0245] At completion of the processing, a 3D textured model of each eye is output, corresponding to the time band of the set of captured images from which it was generated. Because the eye imaging cameras are relatively high resolution, the 3D models are precise and clearly show the surface shape of each eye. The 3D models of the eye form part of the iris imaging data to be further processed to determine a status of the wearer. Across different active modes of the eye imaging subsystem 1000, iris images corresponding to different time bands are captured, which allows for the generation of multiple 3D eye models of the iris and sclera from the iris images, each associated with a different time band. By comparing these 3D models against various measures, it is possible to determine one or more statuses of the wearer. Furthermore, by comparing the 3D models against each other, in a chronological manner according to the associated time bands, it is possible to determine a degradation or improvement rate of the status of the wearer over time. These determinations are performed by the data processing module in the further processing phase.
[0246] The further processing of the iris imaging data takes, as an input, the iris imaging data which may include the captured images from the eye imaging cameras, and/or the 3D models of the eyes of the wearer. This data is associated with timing information such as time of capture of the captured images, those that make up the 3D model, or the time band to which the 3D model is associated. The conservation of this timing information allows trends or patterns to be identified in repeat readings from the eye imaging subsystem 1000 over a period of time. The status of the wearer is the output of the further processing. The further processing may be performed by any suitable method, examples of which are provided later.
[0247] The further processing of the eye-related data, including any one or more of eye tracking data, fundus imaging data and iris imaging data is explained in detail here. Since the eyewear device 100 may include any one or more of the subsystems described above, the further processing that can be performed varies according to which subsystems are being used to obtain the eye-related data.
[0248] With respect to the further processing of eye tracking data, many health-related conditions and activities have symptoms and/or markers that are present and thus detectable in the eye tracking data. To determine the presence of these symptoms and/or markers, the further processing may involve intermediate steps or processes to determine secondary eye tracking data, such as oculomotor events. Oculomotor events function as the basis for several eye movement and pupil measures. These events include: 1) fixations and saccades, 2) smooth pursuit, 3) fixational eye movements (tremors, microsaccades, drifts), 4) blinks, and 5) ocular vergence. These terms are well-understood and correspond to particular movements of the eye or lack thereof. The presence of any one or more of these events can be determined from the eye tracking data specifically because the eye tracking data is recorded with respect to time as explained above. In this manner, the eye tracking data effectively provides a chronological recording of the wearer's gaze, in terms of direction and distance. This eye tracking data can be used to determine a fixation, for example, when the eye tracking data indicates a consistent gaze point of the eye-gaze vectors over a period of time, and thus over consecutive recorded measurements from the eye tracking subsystem 300. Similarly, a saccade, which is a rapid eye movement between two consecutive fixations, may be determined from the eye tracking data when a first set of recorded measurements indicates a first consistent gaze point of the eye-gaze vectors over a first period of time, and when that first set of recorded measurements is followed rapidly by a second set of recorded measurements that indicates a second consistent gaze point of the eye-gaze vectors over a second period of time. A fixation can be determined directly from the 3D gaze path or global gaze path data over a period of time, as the 3D gaze path or global gaze path will be consistent over the period of time. A saccade will also be evident in the 3D gaze path or global gaze path as a rapid change in the gaze path.
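A simple velocity-threshold classification of eye tracking samples into fixations and saccades, in the spirit of the determination described above, might look like the following sketch; the sample format and the 30 deg/s threshold are assumptions.

```python
# Minimal sketch (not the source's algorithm) of classifying eye tracking
# samples into fixations and saccades with a velocity threshold.
# Sample layout: (timestamp in seconds, gaze angles in degrees).
def classify_samples(samples, velocity_threshold_deg_s=30.0):
    """samples: list of (t_s, gaze_x_deg, gaze_y_deg). Returns per-interval labels."""
    labels = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        labels.append("saccade" if velocity > velocity_threshold_deg_s else "fixation")
    return labels

samples = [(0.00, 0.0, 0.0), (0.01, 0.1, 0.0),   # stable gaze -> fixation
           (0.02, 3.0, 0.0), (0.03, 6.0, 0.0),   # rapid shift -> saccade
           (0.04, 6.1, 0.0)]
print(classify_samples(samples))  # ['fixation', 'saccade', 'saccade', 'fixation']
```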
[0249] Determining the secondary eye tracking data from the eye tracking data aids in determining status information related to the status of the wearer. For example, saccadic eye movements are brief and have a reliable amplitude-velocity relationship known as the main sequence. This shows that saccade velocity and saccade amplitude follow a linear relationship for amplitudes of up to 15° to 20°. It is known that this relationship varies with age and also in certain disorders, and as such, the status information may be determined by observing the main sequence with respect to the wearer to determine whether the main sequence is indicative of common age-related issues or the certain disorders.
[0250] In a further example, saccade latency may be determined from the eye tracking data. Cortical processing is associated with saccade latency, with shorter latency indicating advanced motor preparation. Thus, determination of the saccade rate, saccade accuracy, and saccade latency may provide status information regarding an underlying deployment of visual attention of the wearer. Measures such as the saccade rate, saccade accuracy, and saccade latency may be determined from the eye tracking data since the eye tracking data is time-dependent. This allows for velocity and latency calculations to be performed with relative ease, based on changes in the eye tracking data over time.
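As an illustrative sketch of deriving main-sequence and latency measures from detected saccades, the following fragment fits a line to invented amplitude/peak-velocity pairs and computes a latency from assumed stimulus and saccade onset times.

```python
# Illustrative sketch: estimating the saccadic "main sequence" (the roughly
# linear amplitude/peak-velocity relationship) and a saccade latency.
# The saccade values and onset times are invented for the example.
import numpy as np

# (amplitude in degrees, peak velocity in deg/s) for a handful of saccades.
saccades = np.array([(2.0, 90.0), (5.0, 220.0), (8.0, 340.0), (12.0, 500.0)])
amplitudes, peak_velocities = saccades[:, 0], saccades[:, 1]

slope, intercept = np.polyfit(amplitudes, peak_velocities, deg=1)
print(f"main sequence slope ~ {slope:.1f} (deg/s per degree of amplitude)")

# Saccade latency: time from stimulus onset to saccade onset.
stimulus_onset_s, saccade_onset_s = 0.00, 0.21
print(f"latency = {(saccade_onset_s - stimulus_onset_s) * 1000:.0f} ms")
```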
[0251] In a further example, tremors, microsaccades and drift may also be determined, for example when a fixation is identified in the eye tracking data. Tremors represent high-frequency, low-amplitude oscillations of the point of gaze (gaze point) and can thus be determined directly from any one or more of the eye-gaze vectors, the gaze point, or the 3D gaze path/global gaze path over time. Tremors and microsaccades may be differentiated from saccades by comparison to one or more thresholds (based on frequency or amplitude, for example). Tremors, microsaccades and drift are each potential symptoms or markers of a status, including various possible conditions of the wearer, as is well understood.
[0252] In a further example, a blink may be identified, as secondary eye tracking data, from time gaps in the recorded chronological sequence of eye tracking data. In other words, blinks will be evident at least from the images of the eyes, forming part of the eye tracking data. The identification of a blink, the duration of the blink, and the frequency of blinks may all be recorded as secondary eye tracking data. Blinks are primitive events but are indicators of a variety of conditions. When the blink originates from a voluntary action, the blink is known as a voluntary blink or a wink. Non-voluntary blinks are of two types: reflexive blinks, which are evoked by external stimuli as a form of protection, and spontaneous blinks, which are any involuntary blinks not belonging to the other categories. The winks or voluntary blinks may be identified in the eye tracking data as a form of interaction, indicating a status of the user as 'interacting', for example. In contrast, involuntary blinks may indicate status information regarding a psychological or neurological status of the wearer, or may be indicative of a reflex action to a stimulus. By identifying involuntary blinks, spontaneous blinks, and voluntary blinks over a period of time from the eye tracking data, more status information may be determined to determine a status, particularly a psychological status of the wearer.
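A minimal sketch of identifying blinks from time gaps in the chronological sequence of eye tracking samples is given below; the timestamps and the gap threshold are assumptions for illustration only.

```python
# Minimal sketch of identifying blinks as gaps in the chronological sequence
# of eye tracking samples (no valid gaze sample is produced while the eyelid
# is closed). Timestamps and the gap threshold are assumptions.
def detect_blinks(timestamps_s, sample_period_s=0.01, gap_factor=3.0):
    """Return (start, duration) pairs for gaps longer than gap_factor periods."""
    blinks = []
    for t0, t1 in zip(timestamps_s, timestamps_s[1:]):
        gap = t1 - t0
        if gap > gap_factor * sample_period_s:
            blinks.append((t0, gap))
    return blinks

timestamps = [0.00, 0.01, 0.02, 0.25, 0.26, 0.27]
blinks = detect_blinks(timestamps)
print(blinks)   # one blink starting at t=0.02 s, lasting ~0.23 s
print(f"{len(blinks)} blink(s) over this short toy window")
```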
[0253] In a further example, ocular vergence may be determined as secondary eye tracking data from the eye tracking data, and particularly data relating to the gaze point. Ocular vergence includes vergence movements in either direction of far or near focus of the vision of the wearer, resulting in convergence or divergence. Far-to-near focus triggers convergent movements and near-to-far focus triggers divergent movements. The eye tracking data may be used to determine, over extended periods of time, an error associated with the determined gaze point of the two eye-gaze vectors. Although the error is corrected due to the process of calibration as explained above, the error may change over time. Tracking the error in the chronological sequence of eye tracking data and determining a rate of change of the error is useful in determining a deterioration in the wearer's ability to perform ocular vergence movements. In particular, if the eye-gaze vectors are determined to no longer converge, and the minimum distance between the eye-gaze vectors grows over time, processing of this eye tracking data may lead to a conclusion that the status of the wearer is 'convergence insufficiency' or 'divergence insufficiency'. In this example, secondary eye tracking data derived from the eye tracking data is used to determine eye-related health conditions.
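The tracking of a growing vergence error over time might be sketched as follows, with invented error values and an assumed slope threshold standing in for whatever criterion is actually applied.

```python
# Illustrative sketch: tracking how the residual error between the left and
# right eye-gaze vectors (their minimum separation at the estimated gaze
# point) evolves over days, and fitting a trend line whose positive slope
# could flag declining vergence. Error values and the threshold are invented.
import numpy as np

days = np.array([0, 7, 14, 21, 28], dtype=float)
vergence_error_mm = np.array([1.1, 1.3, 1.6, 1.9, 2.3])   # growing separation

slope_mm_per_day, _ = np.polyfit(days, vergence_error_mm, deg=1)
if slope_mm_per_day > 0.02:   # assumed threshold for flagging deterioration
    print(f"possible convergence insufficiency: error grows "
          f"{slope_mm_per_day:.3f} mm/day")
```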
[0254] It is to be understood that the examples set out above may be used in combination, and further health and physiological conditions may be determined using the eye tracking data, data recorded from the accelerometer 702 and gyroscope 704, and/or the secondary eye tracking data determined therefrom, including the ocular events. It is well understood that many conditions have symptoms or markers in eye-gaze and eye tracking data, such as that recorded by the eye tracking subsystem and/or the data regarding ocular events determined therefrom. For example, health-related conditions such as a stroke can be determined as a status of the wearer based on the eye tracking data indicating unsynchronized eye movement from each eye, which may be determined from the images of the eyes or the eye-gaze vectors. Stroke may also be determined as the status of the wearer from the eye tracking data and the data from the accelerometer 702 and gyroscope 704, if, for example, the orientation of the head of the wearer is determined not to correspond to expected eye tracking data values. In another example, age-related macular degeneration may be determined from abnormal eye movement of the wearer, determined from analysis of the eye tracking data.
[0255] Similarly, psychological and neurological conditions may contribute to a determination of the status of the wearer. Many such conditions exhibit symptoms or markers in the eye of the wearer. For example, schizophrenia of the wearer may be determined as the status of the wearer if the eye tracking data indicates abnormal eye movement, particularly with respect to the secondary eye tracking data including smooth pursuit, saccade control, and visual search. Schizophrenia may have a particular data profile in terms of these ocular events which may be matched to the profile of the wearer. Conditions such as depression, ADHD and autism are identifiable in a similar manner.
[0256] In other examples, data profiles with respect to activities may be matched against eye tracking data. Activities such as reading, sleeping, social interactions, watching a display, and participating in a physical activity such as a game or sport may each have a predetermined ocular event profile. For example, reading may have a particular profile of smooth pursuit and saccades. The eye tracking data and head orientation data from the accelerometer 702 and gyroscope 704 may be analysed and matched to the predetermined ocular event profile to determine that the status of the wearer is 'reading'. This may be done in any suitable manner, for example, by comparing data of the predetermined ocular event profile to the actual eye tracking data and determining a measure of similarity therebetween, such as comparing to boundary thresholds of similarity or the like. In the process of determining that the status of the wearer includes the 'reading' activity, the processing may also determine status information regarding atypical reading patterns. Further conditions such as dyslexia may then also be identified as part of the status of the wearer. For example, the status of the wearer may identify a combination of outcomes, such as that the wearer is 'reading' and is exhibiting signs of dyslexia.
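As a purely illustrative sketch of matching observed oculomotor-event statistics against a predetermined 'reading' profile, the following fragment compares each feature against the profile within a relative tolerance; the feature names, profile values and tolerance are assumptions.

```python
# Minimal sketch of matching recorded oculomotor-event statistics against a
# predetermined activity profile ('reading' here). The feature names, profile
# values and the similarity tolerance are all invented for illustration.
READING_PROFILE = {"saccade_rate_hz": 3.5, "mean_saccade_amp_deg": 2.0,
                   "smooth_pursuit_ratio": 0.05}

def matches_profile(observed, profile, tolerance=0.25):
    """True when every observed feature is within +/- tolerance (relative)
    of the corresponding profile value."""
    for key, expected in profile.items():
        if abs(observed[key] - expected) > tolerance * expected:
            return False
    return True

observed = {"saccade_rate_hz": 3.2, "mean_saccade_amp_deg": 2.3,
            "smooth_pursuit_ratio": 0.04}
print("reading" if matches_profile(observed, READING_PROFILE) else "not reading")
```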
[0257] The examples specified above with respect to the determined 'status' of the wearer are merely a selection of possible status data output by the further processing of the eye tracking data and/or the gyroscope 704 and accelerometer 702 data. Many conditions and activities have markers or symptoms exhibited in eye tracking data and may be determined by the further processing of the eye tracking data. It is to be understood that additional status data may be determined, including individual elements or combinations of status data. The status data may include data regarding a health-related condition of the wearer, an activity status of the wearer based on an activity performed by the wearer, a physiological state of the wearer, a psychological state of the wearer, or combinations thereof. The status data may be used to infer a diagnosis or prognosis of a physiological, neurological, psychological or disease condition. The status data may also be used to infer an activity being performed by the wearer.
[0258] With respect to the further processing of fundus imaging data, many health-related conditions and activities have symptoms and/or markers that are present and thus detectable in the fundus imaging data. For example, it is well understood that health-related conditions including Glaucoma, Coronary Heart Disease, Peripheral Arterial Disease, Diabetic Retinopathy, Age-related Macular Degeneration, Hypertension, Aneurysm, smoking status and disseminated Tuberculosis have markers or symptoms observable on the fundus or a component thereof, such as the retina. The further processing of the fundus imaging data includes image processing the fundus images, using feature recognition processes for example, to identify the existence of any markers or symptoms of known conditions. The further processing further includes monitoring such symptoms or markers across a sequence of consecutively captured fundus images for progression. For example, fundus images corresponding to images taken in a first week, a second week, a third week, and a fourth week in a row may be analysed to determine whether symptoms or markers existing in the first week have progressed. Conversely, when the wearer is recovering from a condition or undergoing treatment, similar analysis of the fundus images may show a removal of the symptoms or markers of the conditions.
[0259] With respect to the further processing of iris imaging data, many health-related conditions and activities have symptoms and/or markers that are present and thus detectable in the iris imaging data. As explained above, the iris imaging data refers to the images captured by the eye imaging cameras 1008 and/or the 3D models of each eye, and such data is not limited to the iris and may also include data regarding the sclera, the cornea, and other parts of the outer eye. Symptoms and/or markers that may be present and thus detectable in this iris imaging data include the iris/cornea angle (observable from the 3D model of the eye), the iris status and changes in the iris over a period of time, the coloration of the iris/cornea boundary and its evolution over time, a blur of the iris, and the coloration and quality of the sclera (to detect jaundice or high blood pressure), for example. These symptoms can be indicative of conditions. For example, it is well understood that conditions such as Closed Angle Glaucoma and high Cholesterol have markers or symptoms observable on the iris or sclera or a component thereof. Closed Angle Glaucoma is associated with a shallower angle between the iris and the cornea. This angle may be measured from the surface of the 3D models of the eyes, processed from the iris imaging data, in order to determine the presence or likelihood of Closed Angle Glaucoma. High Cholesterol is associated with Arcus senilis, which is identifiable from the iris imaging data including the 3D models or the raw iris images. The further processing of the iris imaging data may include image processing the raw iris images, using feature recognition processes for example, to identify the existence of any markers or symptoms of known conditions. The further processing may also or alternatively include processing and analysing the 3D models of the eyes to identify symptoms and markers of common conditions. The further processing further includes monitoring such symptoms or markers across a sequence of consecutively captured iris images, or 3D models processed therefrom, for progression. For example, 3D models of the iris corresponding to iris images taken in a first week, a second week, a third week, and a fourth week in a row may be analysed to determine whether symptoms or markers existing in the first week have progressed. Conversely, when the wearer is recovering from a condition or undergoing treatment, similar analysis of the 3D models and/or iris images may show a removal of the symptoms or markers of the conditions.
[0260] The further processing performed by the data processing module 202 for each of the eye-related data as set out above may be performed by a data processing machine-learning model, or a plurality of such models. A machine-learning model is a module, implemented by hardware, software, or a combination thereof, that is configured to perform one or more machine-learning processes or algorithms to perform tasks without explicit instructions or programming. The machine-learning algorithms or processes may be supervised or unsupervised, and may include one or more architectures. Without limitation, the machine-learning algorithms or processes used in the machine-learning model may include neural network algorithms, such as artificial neural networks, recurrent neural networks, convolutional neural networks, and/or transformer networks including attention layers such as generative pre-trained transformers.
The machine-learning algorithms or processes may additionally or alternatively include: linear discriminant analysis, quadratic discriminant analysis, kernel ridge regression, support vector machines, support vector classification-based regression processes, stochastic gradient descent algorithms, including classification and regression algorithms based on stochastic gradient descent, nearest neighbours classification algorithms, Gaussian processes such as Gaussian Process Regression, cross-decomposition algorithms, including partial least squares and/or canonical correlation analysis, naïve Bayes methods, and/or algorithms based on decision trees, such as decision tree classification or regression algorithms. Furthermore, the machine-learning algorithms may include one or more ensemble methods such as bagging meta-estimators, forests of randomized trees, gradient tree boosting, and/or voting classifier methods.
[0261] The machine-learning algorithms may be supervised, unsupervised, self-supervised or semi-supervised. Supervised machine learning algorithms, as defined herein, include algorithms that receive a training set relating a number of inputs to a number of outputs, and seek to find one or more mathematical relations relating inputs to outputs, where each of the one or more mathematical relations is optimal according to some criterion specified to the algorithm using a scoring function. For instance, a supervised learning algorithm may include the eye tracking data and/or oculomotor events as described above as inputs, status data as outputs, and a scoring function representing a desired form of relationship to be detected between inputs and outputs. The scoring function may seek to maximize the probability that a given input and/or combination of inputs is associated with a given output, or to minimize the probability that a given input is not associated with a given output. The scoring function may be expressed as a risk function representing an 'expected loss' of an algorithm relating inputs to outputs, where loss is computed as an error function representing a degree to which a prediction generated by the relation is incorrect when compared to a given input-output pair provided in training data.
[0262] Supervised machine-learning algorithms may include classification algorithms, defined as processes whereby a computing device derives, from training data, a model for sorting inputs into categories or bins of data. Classification may be performed using, without limitation, linear classifiers such as logistic regression and/or naive Bayes classifiers, nearest neighbour classifiers, support vector machines, decision trees, boosted trees, random forest classifiers, and/or neural network-based classifiers. In an example, the machine learning algorithm includes an artificial neural network (ANN), such as a convolutional neural network or transformer network, comprising an input layer of nodes, one or more intermediate layers, and an output layer of nodes. Connections between nodes may be created via the process of 'training' the network, in which elements from a training dataset are applied to the input nodes. The network is then trained using a suitable training algorithm (such as conjugate gradient, Levenberg-Marquardt, simulated annealing, or other algorithms) to adjust the connections by modifying the weights at nodes in adjacent layers of the neural network. Performing this training process iteratively, according to a cost function, eventually produces desired values at the output nodes. This process is sometimes referred to as deep learning.
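By way of a non-limiting sketch (assuming the scikit-learn library is available), a small classifier mapping oculomotor-event features to a status label could be set up as follows; a nearest-neighbour classifier stands in here for the neural-network classifiers described above, and the features, labels and values are invented.

```python
# Illustrative sketch: a tiny supervised classifier mapping oculomotor-event
# features to a status label. A nearest-neighbour classifier is used for
# brevity in place of a neural network; all data values are invented.
from sklearn.neighbors import KNeighborsClassifier

# Each row: [saccade_rate_hz, mean_fixation_s, blink_rate_per_min]
X_train = [[3.4, 0.22, 12], [3.6, 0.20, 10],    # 'reading'
           [0.8, 0.90, 18], [0.7, 1.10, 20]]    # 'watching_display'
y_train = ["reading", "reading", "watching_display", "watching_display"]

model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(model.predict([[3.5, 0.21, 11]]))   # ['reading']
```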
[0263] An example trained artificial neural network 1600 is shown in figure 16. Figure 16 shows eye-related data 1602 that is obtained according to the previously described one or more sensor subsystems. This data may then be pre-processed (for example to form secondary data, to determine a 3D model, or to detect a specific feature in the raw data or the like), and is then used to form pre-processed eye-related input data 1604. The eye-related input data 1604 is input to an input layer 1606 of the artificial neural network 1600, which includes one or more input nodes configured to receive the eye-related input data 1604. The illustrated example artificial neural network is configured to process the input data via a first hidden layer 1608a and a second hidden layer 1608b; however, fewer or more hidden layers, each with fewer or more nodes, are possible. At an output layer 1610, one or more output nodes of the artificial neural network provide output data, such as status data indicative of the status of the wearer.
[0264] Supervised algorithms such as the artificial neural network illustrated in figure 16 are trained using training data. The training data includes correlations that a machine-learning process may use to model relationships between two or more categories of data elements. Training data includes a plurality of data entries, each entry representing a set of data elements that were recorded, received, and/or generated together, wherein the data elements may be correlated by shared existence in a given data entry, by proximity in a given data entry, or the like. Multiple data entries in training data may show one or more trends in correlations between categories of data elements. For example, a higher value of a first data element belonging to a first category of data element may tend to correlate to a higher value of a second data element belonging to a second category of data element, indicating a possible proportional or other mathematical relationship linking values belonging to the two categories. Training data may be organised or formatted according to categories of data elements, for instance by associating data elements with one or more metadata or descriptors corresponding to categories of data elements. Training data may include data entered in standardised forms by persons or processes, and elements in training data may be linked to descriptors of categories by tags, tokens, or other data elements. For example, training data may be provided in self-describing formats such as JavaScript Object Notation (JSON), extensible markup language (XML), or fixed-length formats, or formats linking positions of data to categories such as comma-separated value (CSV) formats.
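A single training-data entry in a self-describing JSON format, as mentioned above, might look like the following sketch; every field name and value is invented for illustration.

```python
# Illustrative sketch of one self-describing training-data entry in JSON
# form, pairing categorised input data with its status label. All field
# names and values are invented.
import json

entry = {
    "input": {
        "oculomotor_events": {"saccade_rate_hz": 3.4, "blink_rate_per_min": 12},
        "capture_time": "2024-01-15T09:30:00Z",
    },
    "output": {"status": "reading"},
}
print(json.dumps(entry, indent=2))
```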
[0265] With respect to the eye tracking data, the training data is used to train the machine learning algorithm or algorithms to correlate input data, such as the oculomotor events, one or more of the images, the eye gaze vectors, the 3D gaze path and the 3D global gaze path, or a portion thereof, to output data such as status data of the wearer. The training data includes a plurality of data entries, wherein each data entry correlates a particular status data to eye tracking data or oculomotor event. For example, for the activity of 'reading' a 'reading' training dataset is obtained, whereby the reading training dataset includes one or more of a series of oculomotor events, eye tracking data, such as the images, the eye gaze vectors, the 3D gaze path and/or the 3D global gaze path, correlated with status output data indicative of the status of 'reading'. Such training data may be obtained in any suitable manner. For example, the reading training dataset may be acquired from a pre-existing database, or in a controlled experiment whereby eye tracking data is collected for control users instructed to read.
[0266] With respect to fundus imaging data, the training data used to train the machine learning algorithm or algorithms may include a plurality of images of eye fundus of clinically diagnosed health conditions, correlated to the clinical diagnosis. For example, the training data may include a first plurality of fundus images of clinically diagnosed patients of Open Angle Glaucoma, each correlated to output status data 'Open Angle Glaucoma'. A common marker of Open Angle Glaucoma is retinal cupping and retinal damage. These features may be identified in the first plurality of fundus images. The first plurality of fundus images may be captured under a variety of light conditions, at different stages in the progression of Open Angle Glaucoma, and from different points of view. Varying the training dataset in this way usually increases the capabilities of the machine learning algorithms to accurately determine the status of the wearer, regardless of the fundus image properties. The training data set may include additional pluralities of images, wherein each plurality corresponds to a clinical diagnosis of a specific condition.
[0267] With respect to iris imaging data, the training data used to train the machine learning algorithm or algorithms may include a plurality of images of eye iris, sclera, cornea and pupil regions of clinically diagnosed health conditions, correlated to the clinical diagnosis. For example, the training data may include a first plurality of iris images or 3D representations of clinically diagnosed patients of Closed Angle Glaucoma, each correlated to output status data 'Closed Angle Glaucoma'. A common marker of Closed Angle Glaucoma is a shallow iris-cornea angle. These angles may be identified in the first plurality of iris images or 3D representations. The first plurality of iris images or 3D representations may be captured under a variety of light conditions, at different stages in the progression of Closed Angle Glaucoma, and from different points of view. Varying the training dataset in this way usually increases the capabilities of the machine learning algorithms to accurately determine the status of the wearer, regardless of the iris image properties. The training data set may include additional pluralities of images, wherein each plurality corresponds to a clinical diagnosis of a specific condition.
[0268] The training data is used to train the data processing machine learning model including the supervised machine learning algorithm or algorithms. In the example of an ANN, the training data is used to iteratively train the ANN, by iteratively adjusting weights at nodes to predict the correct output for a given set of inputs. The set of weights in the network encapsulates what the network has learned from the training data and is recorded for use on real data.
[0269] The data processing machine-learning model responsible for post-processing the eye tracking data may be trained using the training data to determine and output the status of the wearer from a number of inputs (the one or more of the images, the pixel locations, the eye gaze vectors, the 3D gaze path and the 3D global gaze path, or a portion thereof), and may also be trained to determine multiple possible statuses and combinations of possible statuses using each of the inputs from the eye tracking data, and/or by using other eye-related data from other subsystems as further inputs. This data processing machine-learning model may therefore include multiple inputs or multiple algorithms or networks. In order to train this data processing machine-learning model effectively, each input stream of data, for example a stream for each of the images, the eye gaze vectors, the gaze point, the 3D gaze path and the 3D global gaze path, is processed and trained on the data processing machine-learning model separately. In particular, the data processing machine-learning model is trained using separate training datasets that correlate respective input stream data types to one or more status data. Each separate training dataset correlates one or more types of input data, for example oculomotor events, images for image input data, or gaze points for gaze point input data, with status data indicative of one or more of a health-related condition of the wearer, an activity status of the wearer based on an activity performed by the wearer, a physiological state of the wearer, and a psychological state of the wearer. At the outcome of this training process, the data processing machine learning model is configured to take a plurality of inputs corresponding to the type of input data it is trained on (from the separate training datasets) to produce status data as an output.
[0270] With respect to the fundus and/or eye imaging data, the data processing machine-learning model may be trained with a training data set including fundus and/or eye imaging training data correlated with status data. The data processing machine-learning model is trained in the same manner as explained above with respect to the eye tracking data.
[0271] As explained previously, each of the eye-related data, including eye tracking data, fundus imaging data, and eye imaging data, are time dependent, and thus the training datasets may include chronological sequences of input data correlated against status data entries. In the case of eye tracking data, the gaze point, the 3D gaze path, and the 3D global gaze path are also associated with a distance measurement, and thus the training datasets may also include distance data as part of the input. The training datasets for eye tracking data may therefore include any one or more of the eye tracking data types, timing data, and distance data, correlated with status data indicative of health-related conditions, such as stroke, age-related macular degeneration, convergence insufficiency, divergence insufficiency and others. Further training datasets may be used to train the data processing machine-learning model to detect psychological or neurological conditions, by correlating eye tracking input data with status data indicative of psychological or neurological conditions such as depression, ADHD, schizophrenia, autism, dyslexia and others. Similarly, other training datasets may be used to train the data processing machine-learning model to detect wearer activities, by correlating eye tracking input data with status data indicative of wearer activities such as reading, watching a display, sleeping, socialising, or undertaking a sport, game, or exercise. The training datasets for fundus imaging data may include fundus imaging data and timing data as inputs that are correlated to status data indicative of other health-related conditions such as Aneurysm, Age-related Macular Degeneration, and Hypertension.
[0272] It is to be understood that further training datasets are possible, and that any condition or activity that has symptoms or markers detectable via the subsystems may be the subject of a training dataset to train the data processing machine-learning model accordingly. The training datasets may also include accelerometer and/or gyroscope data, particularly with respect to eye tracking data. Furthermore, the data processing machine-learning model may further include one or more machine learning algorithms trained and configured to detect secondary eye tracking data explained above, such as ocular events, from the recorded eye tracking data. The output of such algorithms includes the secondary eye tracking data such as the ocular events. This may then form the input to further machine learning algorithms, trained using datasets correlating the secondary eye tracking data, such as ocular events, to the status data indicating the condition or activity of the wearer.
[0273] It is to be understood that various architectures may be used to implement the data processing machine-learning model such that it is capable of taking a plurality of inputs, including any or all of the types of eye tracking data, the accelerometer data, the gyroscope data, and/or the secondary eye tracking data, and/or other eye-related data such as fundus imaging data and the eye imaging data to output one or more statuses of the wearer. This may include a series of machine learning algorithms, whereby the output of a first machine learning algorithm forms the input of a second machine learning algorithm, or may include multiple parallel machine learning algorithms, or a combination thereof.
[0274] Alternatively, or additionally, the machine learning algorithms may be unsupervised, and training data may include one or more elements that are not categorized. An unsupervised machine-learning process is a process that derives inferences in datasets without regard to labels; as a result, an unsupervised machine-learning process may be free to discover any structure, relationship, and/or correlation provided in the data. Unsupervised processes may not require a response variable; unsupervised processes may be used to find interesting patterns and/or inferences between variables, to determine a degree of correlation between two or more variables, or the like. For unsupervised machine-learning algorithms, the training data may not be formatted, or be linked to descriptors or metadata, for some elements of data. Machine-learning algorithms and/or other processes may sort training data according to one or more categorizations using, for instance, natural language processing algorithms, tokenization, detection of correlated values in raw data and the like; categories may be generated using correlation and/or other processing algorithms.
[0275] The training data and training datasets described above may be obtained by any suitable method. In some examples, data from previous or current wearers of the eyewear device 100, or other eyewear devices, may be used as training data upon clinical or wearer-inputted determination of a particular status. For example, a wearer of an eyewear device 100 may be clinically diagnosed with depression or autism, or may suffer from age-related macular degeneration, a stroke, or an aneurysm. The historical eye-related data (such as eye tracking data, fundus imaging data and/or eye imaging data) of the wearer may then be retrieved subsequent to the clinical diagnosis and correlated to the particular status corresponding to the diagnosis for the purpose of providing training data. In an example, historical eye tracking data, obtained and determined for the previous or current wearer prior to the clinical status determination, may form input data of a training dataset, whereby this input data is correlated with status data corresponding to the clinical diagnosis that has been determined. In this example, the historical data may further include accelerometer, gyroscope, distance and timing information data, and any secondary eye tracking data. The historical data to be used in training data may also or alternatively include other eye-related data such as historical fundus imaging data. In some examples, where the diagnosed status is associated with symptoms or markers in various eye-related data, the historical data may include a plurality of eye-related data.
[0276] Additionally or alternatively, all or part of the historical data may not be correlated with the clinically diagnosed status. Such data may be processed with one or more unsupervised machine learning algorithms, such as k-means clustering, to determine patterns, trends or the like in the historical eye tracking data, for example to identify a previously unknown data indicator or symptom of the clinically determined status of the wearer.
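A minimal sketch of such unsupervised processing (assuming scikit-learn is available) is given below, clustering invented historical eye tracking features with k-means; the feature choice and the number of clusters are assumptions.

```python
# Minimal sketch of clustering uncategorised historical eye tracking features
# with k-means, as one way of surfacing previously unknown patterns.
# Feature values and the choice of two clusters are invented.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [mean_saccade_amplitude_deg, mean_fixation_duration_s]
historical_features = np.array([[2.1, 0.25], [2.3, 0.22], [2.0, 0.27],
                                [6.8, 0.55], [7.1, 0.60], [6.5, 0.58]])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(historical_features)
print(clusters)   # two clearly separated groups, e.g. [0 0 0 1 1 1]
```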
[0277] The above-described method of obtaining historical wearer data for current training datasets may be extended over a plurality of wearers using a plurality of eyewear devices to build a large set of contemporary training data. In this manner, training datasets may be updated based on historical or recent data obtained by one or more eyewear devices.
[0278] Training data may also be obtained from a clinical setting, for example, from data obtained in a hospital or laboratory, using clinical equipment. This 'clinical' training data may be used separately from or in addition to the historical training data described above, which is sourced from eye-related data from the eyewear device 100 itself. In this manner, historical data and clinical data may be combined to form the training data.
[0279] Additionally or alternatively to the further processing set out above, the eye-related data may be provided, via the user device 204, to one or more third parties for independent testing and processing. For example, the eye-related data (e.g. the raw data or pre-processed data from the various subsystems) may be provided to a medical practitioner or health professional via an advisor terminal, over a communications network or the like. The eye-related data may then be used to support conventional or new methods performed by the third party for analysis and diagnosis. Additionally still, the wearer may be able to input ancillary data relating to the eye-related data on the user device 204, via an application on the user device 204, to be sent to the third party with the eye-related data. The ancillary data may include an activity performed by the wearer, such as an exercise, or a symptom noted by the wearer. The ancillary data may be associated with a specific period of time, indicated by the wearer. The addition of the ancillary data for specific time periods may provide a contextual aid to third parties in their analysis of the eye-related data.
[0280] The eye-related data, and much of the functionality related to this data, involve the user device 204. The user device 204 is a computing device comprising a display and a user input device, such as a touchscreen. Apart from the eye tracking data, which is processed locally on the eyewear device 100 for use by the scheduling module, the raw eye-related data is sent to the user device 204 from the eyewear device 100, and then from the user device 204 to the data processing module 202 for processing and subsequent further processing. The user device 204 is configured to host an application for wearer interaction with the eye-related data and/or status data that is determined from the further processing of the eye-related data. As explained previously, the application may include calibration functionality, for guiding the wearer through the eye tracking subsystem 300 calibration process. The application further includes one or more interactive display windows, which the wearer may navigate via the user input device, to retrieve eye-related data and/or status data obtained therefrom. The interactive display windows may include data elements, graphical elements, or other pictorial representations of time-series data corresponding to the eye-related data and/or status data. The application is installed on the user device 204, and utilizes the communication/network components of the user device 204 to communicate with the eyewear device 100, to receive eye-related data, and to communicate with the data processing module 202, to send the eye-related data and to receive status data or further processed eye-related data in response. The application further comprises a graphical user interface for obtaining user input via the user input device, and may include additional functionality such as a settings function, an ancillary data input function, and an alarm function, for example. The settings function of the application may include a number of settings of the eyewear device 100 that are configurable by the wearer. For example, activation conditions and associated thresholds for activating/deactivating subsystems of the eyewear device 100 by the scheduling module may be user-configurable in the settings function of the application.
Manual override of one or more subsystems, to force activate or force deactivate the subsystems, may also be configurable in the settings function of the application. Settings regarding data transfer of the eye-related data may also be viewed and adjusted by the wearer in the settings function of the application, for example, to set a specific time and/or frequency for automatically uploading data from the eyewear device 100 to the user device 204, for manually uploading data, and for deleting data on the eyewear device 100 and/or the user device 204. Software and firmware upgrades may also be identified via an upgrader function of the settings function, downloaded to the user device 204 and then sent to and installed upon the eyewear device 100. The ancillary input function may allow the wearer to input ancillary data using the user input device, associated with a particular period of time. The ancillary input function is configured to correlate the ancillary data with eye-related data according to the particular period of time, by reading the timing information, such as time of capture, from the eye-related data received from the eyewear device 100. This may be done by adding metadata corresponding to the ancillary data to the eye-related data. The eye-related data with the metadata may then be sent to the data processing module 202 for further processing, or to third parties for further analysis. The alarm function of the application may be used for providing one or more alarm signals to the wearer based on one or more alarm conditions. One such alarm condition may be related to status data, received from the data processing module 202 after further processing of the eye-related data. The status data may be provided to the wearer via a display on the application, via a notification or the like. The status data is also or alternatively provided to a third party such as a healthcare professional via a third-party device. Any of the application on the user device 204, the data processing module 202, and/or the third-party device may be further configured to perform one or more checks of the status data to determine whether an alarm condition is met. The alarm condition may be met when status data indicates the existence of a particular health condition, physiological state, or psychological state. For example, if the status data indicates 'autism', that is, that the wearer is exhibiting signs of autism, a corresponding autism alarm condition may be met, and an alarm triggered on the user device 204 and/or the third-party device. The alarm may be accompanied by the display of information relating to the status data, to indicate a reason for the alarm. For example, the accompanying information displayed to the user may read: 'Your eye data indicates that you may be showing signs of autism. You may wish to consult a healthcare professional to investigate this further'. Alarm conditions may be assigned different priorities, whereby a higher priority alarm condition being met invokes a different type of alarm. In a further example, if the status data indicates 'stroke', that is, that the wearer is exhibiting stroke symptoms, a corresponding stroke alarm condition may be met, and an alarm triggered on the third-party device. The stroke alarm condition may be assigned a high priority. For such a high priority alarm, the alarm on the third-party device may be accompanied by an audible alarm, and the display of information relating to the status data, to indicate an urgent reason for the alarm.
For example, the accompanying information displayed to the third party may read: 'URGENT: Patient X may be suffering a stroke - provide medical attention immediately'. The alarm may comprise a user instruction and/or a third-party instruction to provide recommendations for the wearer based on the status data. For example, if the status data indicates that the wearer watched television for three consecutive hours, the user instruction may include the display of a message such as: 'You have been watching television for 3 hours, time to do something else and remain active'. A plurality of other alarm conditions may trigger other alarms, which may result in audible or visual signals on the user device 204 to alert the wearer. These other alarm conditions may include a faulty component alarm, triggered when a particular component of the eyewear device 100 is non-responsive or otherwise identified as malfunctioning; a memory-full alarm, triggered when the memory on the eyewear device 100 is full or near full (e.g. 95% full); or a low power alarm.
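By way of illustration only, and not as part of the claimed subject matter, the alarm-handling logic described above might be sketched as follows in Python. The condition names, priority values, thresholds, and message strings below are assumptions made for the sketch rather than values prescribed by this disclosure.

```python
# Illustrative sketch of alarm-condition checks with priorities.
# All names, thresholds, and messages below are hypothetical examples.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlarmCondition:
    name: str                          # e.g. 'stroke', 'autism', 'memory_full', 'low_power'
    priority: int                      # higher value = higher priority alarm
    is_met: Callable[[dict], bool]     # test applied to status/device data
    message: str                       # information displayed alongside the alarm
    notify_third_party: bool = False   # route high-priority alarms to a third-party device

ALARM_CONDITIONS = [
    AlarmCondition('stroke', 10, lambda d: d.get('status') == 'stroke',
                   'URGENT: Patient X may be suffering a stroke - provide medical attention immediately',
                   notify_third_party=True),
    AlarmCondition('autism', 5, lambda d: d.get('status') == 'autism',
                   'Your eye data indicates that you may be showing signs of autism. '
                   'You may wish to consult a healthcare professional to investigate this further.'),
    AlarmCondition('memory_full', 3, lambda d: d.get('memory_used_pct', 0) >= 95,
                   'Device memory is nearly full - please upload or delete data.'),
    AlarmCondition('low_power', 2, lambda d: d.get('battery_pct', 100) <= 20,
                   'Eyewear battery is low - please recharge the device.'),
]

def check_alarms(data: dict) -> list:
    """Return the alarm conditions met by the given status/device data, highest priority first."""
    met = [c for c in ALARM_CONDITIONS if c.is_met(data)]
    return sorted(met, key=lambda c: c.priority, reverse=True)

# Example: status data indicating a stroke, with device memory nearly full.
for alarm in check_alarms({'status': 'stroke', 'memory_used_pct': 96}):
    target = 'third-party device' if alarm.notify_third_party else 'user device 204'
    print(f'[{target}] {alarm.message}')
```

In this sketch a higher priority value simply sorts the alarm earlier and flags third-party notification; an actual implementation could equally map priorities to different alarm types (e.g. audible versus visual), as described above.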
[0281] The application may also include a battery indicator graphic or metric to indicate the remaining power of the eyewear device 100, based on power data sent to the user device 204 from the eyewear device 100. When the remaining power of the eyewear device 100 falls below a minimum threshold, such as 20%, 10% or 5%, the application is configured to display or otherwise communicate a low-power warning signal to the wearer based on the low power alarm.
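As a further minimal sketch, assuming the power data is received as a percentage and using the example threshold figures given above, the low-power warning might be derived as follows:

```python
# Hypothetical sketch of the battery indicator and low-power warning logic.
LOW_POWER_THRESHOLDS = (20, 10, 5)  # example thresholds in percent

def battery_status(battery_pct: float):
    """Return an indicator string and whether a low-power warning should be shown."""
    warn = any(battery_pct <= t for t in LOW_POWER_THRESHOLDS)
    return f'Battery: {battery_pct:.0f}%', warn

indicator, warn = battery_status(8.0)
print(indicator)  # Battery: 8%
if warn:
    print('Low battery - please recharge the eyewear device.')
```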
[0282] In the examples described above, the data processing module 202 may comprise a server. The server may include a single server or network of servers. In some examples, the functionality of the server may be provided by a network of servers distributed across a geographical area, such as a worldwide distributed network of servers, and a user may be connected to an appropriate one of the network servers based upon, for example, a user location.
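Purely as an illustrative sketch of one way a user might be connected to an appropriate server based on user location, the following selects the geographically nearest server; the server names and coordinates are assumptions made for the example and not part of the disclosure.

```python
# Hypothetical sketch: route a user to the nearest server in a distributed network.
import math

# Illustrative server locations (name, latitude, longitude) - not real endpoints.
SERVERS = [
    ('eu-west', 53.3, -6.3),
    ('us-east', 39.0, -77.5),
    ('ap-southeast', 1.35, 103.8),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_server(user_lat, user_lon):
    """Return the server closest to the user's location."""
    return min(SERVERS, key=lambda s: haversine_km(user_lat, user_lon, s[1], s[2]))

print(nearest_server(51.5, -0.1))  # a user in London would be routed to 'eu-west'
```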
[0283] The above description discusses examples of the invention with reference to a single user/wearer for clarity. It will be understood that in practice the system may be shared by a plurality of users, and possibly by a very large number of users simultaneously.
[0284] The processes and methods described above may be carried out fully automatically. In some examples, however, a user or operator of the system may manually instruct some steps of the method to be carried out.
[0285] In the examples described above, the system may be implemented, at least in terms of the user device 204 and the data processing module 202, by any form of computing and/or electronic device.
Such a device may comprise one or more processors, which may be microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to implement the methods described herein. In some examples, for example where a system on a chip architecture is used, the processors may include one or more fixed function blocks (also referred to as accelerators) which implement a part of the method in hardware (rather than software or firmware). Platform software comprising an operating system, or any other suitable platform software, may be provided at the computing-based device to enable application software to be executed on the device.

[0286] Various functions and methods described herein can be implemented in hardware, software, or any combination thereof. If implemented in software, the functions can be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media may include, for example, computer-readable storage media. Computer-readable storage media may include volatile or non-volatile, removable or non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer-readable storage media can be any available storage media that may be accessed by a computer. By way of example, and not limitation, such computer-readable storage media may comprise RAM, ROM, EEPROM, flash memory or other memory devices, CD-ROM or other optical disc storage, magnetic disc storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disc and disk, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray (RTM) disc (BD). Further, a propagated signal is not included within the scope of computer-readable storage media. Computer-readable media also include communication media, including any medium that facilitates transfer of a computer program from one place to another. A connection, for instance, can be a communication medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fibre optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fibre optic cable, twisted pair, DSL, or wireless technologies are included in the definition of communication medium. Combinations of the above should also be included within the scope of computer-readable media.
[0287] Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, hardware logic components that can be used may include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.

[0288] Although illustrated as a single system, it is to be understood that the computing device that forms the user device 204 and/or the data processing module 202 may be a distributed system. Thus, for instance, several devices may be in communication by way of a network connection and may collectively perform tasks described as being performed by the computing device.
[0289] Although illustrated as a local device, it will be appreciated that the computing device forming the user device 204 and/or the data processing module 202 may be located remotely and accessed via a network or other communication link (for example using a communication interface).
[0290] Where there are multiple devices, such as the user device 204, the eyewear device 100, and the data processing module 202, these may communicate and send and receive data via any suitable transceiver. Such transceivers connect to other devices or networks via different wireless protocols, such as Bluetooth, NFC, Wi-Fi, 3G/4G/5G cellular, etc., and may operate, at least to some extent, interactively and autonomously.
[0291] The term 'computer' is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realise that such processing capabilities are incorporated into many different devices and therefore the term 'computer' includes PCs, servers, mobile telephones, personal digital assistants and many other devices.
[0292] Those skilled in the art will realise that storage devices utilised to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program.
[0293] Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realise that, by utilising conventional techniques known to those skilled in the art, all, or a portion, of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
[0294] It will be understood that the benefits and advantages described above may relate to one example or may relate to several examples. The examples are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. Variants should be considered to be included within the scope of the invention.
[0295] Any reference to 'an' item refers to one or more of those items. The term 'comprising' is used herein to mean including the method steps or elements identified, but that such steps or elements do not comprise an exclusive list and a method or apparatus may contain additional steps or elements.
[0296] As used herein, the terms "component" and "system" are intended to encompass computer-readable data storage that is configured with computer-executable instructions that cause certain functionality to be performed when executed by a processor. The computer-executable instructions may include a routine, a function, or the like. It is also to be understood that a component or system may be localized on a single device or distributed across several devices.
[0297] Further, as used herein, the term "exemplary" is intended to mean "serving as an illustration or example of something".
[0298] Further, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.
[0299] Moreover, the acts described herein may comprise computer-executable instructions that can be implemented by one or more processors and/or stored on a computer-readable medium or media. The computer-executable instructions can include routines, sub-routines, programs, threads of execution, and/or the like. Still further, results of acts of the methods can be stored in a computer-readable medium, displayed on a display device, and/or the like.
[0300] The order of the steps of the methods described herein is exemplary, but the steps may be carried out in any suitable order, or simultaneously where appropriate. Additionally, steps may be added or substituted in, or individual steps may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
[0301] It will be understood that the above description of a preferred example is given by way of example only and that various modifications may be made by those skilled in the art. What has been described above includes examples of one or more examples. It is, of course, not possible to describe every conceivable modification and alteration of the above devices or methods for purposes of describing the aforementioned aspects, but one of ordinary skill in the art can recognize that many further modifications and permutations of various aspects are possible.
[0302] Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the scope of the appended claims.