CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 61/662,217, filed on Jun. 20, 2012, which is incorporated by reference herein in its entirety.
BACKGROUND

1. Technical Field
The subject matter described herein relates to hearing assist devices and devices and services that are capable of providing external operational support to such hearing assist devices.
2. Description of Related Art
Persons may become hearing impaired for a variety of reasons, including aging and being exposed to excessive noise, which can both damage hair cells in the inner ear. A hearing aid is an electro-acoustic device that typically fits in or behind the ear of a wearer, and amplifies and modulates sound for the wearer. Hearing aids are frequently worn by persons who are hearing impaired to improve their ability to hear sounds. A hearing aid may be worn in one or both ears of a user, depending on whether one or both of the user's ears need hearing assistance.
Less expensive hearing aids amplify all frequencies equally, while mid-range analog and digital hearing aids can be programmed to amplify in a manner tuned to a hearing-impaired wearer's actual frequency response. The most expensive models adapt via operating modes: a directional microphone is used in some modes, while an omnidirectional microphone is used in others.
Since most hearing aids rely on battery power to operate, it is critical that hearing aids are designed so as not to consume battery power too quickly. This places a constraint on the types of features and processes that can be built into a hearing aid. Furthermore, it is desirable that hearing aids be lightweight and small so that they are comfortable to wear and not readily discernible to others. This also operates as a constraint on both the size of the batteries that can be used to power the hearing aid and the types of functionality that can be integrated into it.
If the hearing aid batteries are dead or a hearing aid is left at home, a wearer needing hearing aid support is at a loss. This often results in others raising their speaking volume to help the wearer hear what they are saying. Unfortunately, because hearing problems often have a frequency profile, merely raising one's volume may not work. Similarly, raising the volume on a cell phone may not adequately provide understandable audio to someone with hearing impairment.
BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES

The accompanying drawings, which are incorporated herein and form part of the specification, illustrate the subject matter of the present application and, together with the description, further serve to explain the principles of the embodiments described herein and to enable a person skilled in the relevant art(s) to make and use such embodiments.
FIG. 1 shows a communication system that includes a multi-sensor hearing assist device that communicates with a near field communication (NFC)-enabled communications device, according to an exemplary embodiment.
FIGS. 2-4 show various configurations for associating a multi-sensor hearing assist device with an ear of a user, according to exemplary embodiments.
FIG. 5 shows a multi-sensor hearing assist device that mounts over an ear of a user, according to an exemplary embodiment.
FIG. 6 shows a multi-sensor hearing assist device that extends at least partially into the ear canal of a user, according to an exemplary embodiment.
FIG. 7 shows a circuit block diagram of a multi-sensor hearing assist device that is configured to communicate with external devices according to multiple communication schemes, according to an exemplary embodiment.
FIG. 8 shows a flowchart of a process for a hearing assist device that processes and transmits sensor data and receives a command from a second device, according to an exemplary embodiment.
FIG. 9 shows a communication system that includes a multi-sensor hearing assist device that communicates with one or more communications devices and network-connected devices, according to an exemplary embodiment.
FIG. 10 shows a flowchart of a process for wirelessly charging a battery of a hearing assist device, according to an exemplary embodiment.
FIG. 11 shows a flowchart of a process for broadcasting sound that is generated based on sensor data, according to an exemplary embodiment.
FIG. 12 shows a flowchart of a process for generating and broadcasting filtered sound from a hearing assist device, according to an exemplary embodiment.
FIG. 13 shows a flowchart of a process for generating an information signal in a hearing assist device based on a voice of a user, and transmitting the information signal to a second device, according to an exemplary embodiment.
FIG. 14 shows a flowchart of a process for generating voice based at least on sensor data to be broadcast by a speaker of a hearing assist device to a user, according to an exemplary embodiment.
FIG. 15 is a block diagram of an example system that enables external operational support to be provided to a hearing assist device in accordance with an embodiment.
FIG. 16 is a block diagram of a system comprising a hearing assist device and a cloud/service/phone/portable device that may provide external operational support thereto.
FIG. 17 is a block diagram of an enhanced audio processing module that may be implemented by a hearing assist device to provide such enhanced spatial signaling in accordance with an embodiment.
FIG. 18 depicts a flowchart of a method for providing audio playback support to a hearing assist device in accordance with an embodiment.
FIG. 19 is a block diagram of a noise suppression system that may be utilized by a hearing assist device or a device/service communicatively connected thereto in accordance with an embodiment.
FIGS. 20-23 depict flowcharts of methods for providing external operational support to a hearing assist device worn by a user in accordance with various embodiments.
FIG. 24 is a block diagram of an audio processing module that may be implemented in a hearing assist device in accordance with an embodiment.
The features and advantages of the subject matter of the present application will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
DETAILED DESCRIPTION

I. Introduction

The following detailed description discloses numerous example embodiments. The scope of the present patent application is not limited to the disclosed embodiments, but also encompasses combinations of the disclosed embodiments, as well as modifications to the disclosed embodiments.
References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Furthermore, it should be understood that spatial descriptions (e.g., “above,” “below,” “up,” “left,” “right,” “down,” “top,” “bottom,” “vertical,” “horizontal,” etc.) used herein are for purposes of illustration only, and that practical implementations of the structures described herein can be spatially arranged in any orientation or manner.
II. Example Hearing Assist Device Embodiments

Persons may become hearing impaired for a variety of reasons, including aging and being exposed to excessive noise, which can both damage hair cells in the inner ear. A hearing aid is an electro-acoustic device that typically fits in or behind the ear of a wearer, and amplifies and modulates sound for the wearer. Hearing aids are frequently worn by persons who are hearing impaired to improve their ability to hear sounds. A hearing aid may be worn in one or both ears of a user, depending on whether one or both of the user's ears need hearing assistance.
Opportunities exist for integrating further functionality into hearing assist devices that are worn in/on a human ear. Hearing assist devices, such as hearing aids, headsets, and headphones, are typically worn in contact with the user's ear, and in some cases extend into the user's ear canal. As such, a hearing assist device is typically positioned in close proximity to various organs and physical features of a wearer, such as the inner ear structure (for example, the ear canal, ear drum, ossicles, Eustachian tube, cochlea, auditory nerve, or the like), skin, brain, veins and arteries, and further physical features of the wearer. Because of this advantageous positioning, a hearing assist device may be configured to detect various characteristics of a user's health. Furthermore, the detected characteristics may be used to treat health-related issues of the wearer, and perform further health-related functions. As such, hearing assist devices may be used even by users who do not have hearing problems, for instance to detect other health issues.
For instance, in embodiments, health monitoring technology may be incorporated into a hearing assist device to monitor the health of a wearer. Examples of health monitoring technology that may be incorporated in a hearing assist device include health sensors that determine (for example, sense/detect/measure/collect, or the like) various physical characteristics of the user, such as blood pressure, heart rate, temperature, humidity, blood oxygen level, skin galvanometric levels, brain wave information, arrhythmia onset detection, skin chemistry changes, falling down impacts, long periods of activity, or the like.
Sensor information resulting from the monitoring may be analyzed within the hearing assist device, or may be transmitted from the hearing assist device and analyzed at a remote location. For instance, the sensor information may be analyzed at a local computer, in a smart phone or other mobile device, or at a remote location, such as at a cloud-based server. In response to the analysis of the sensor information, instructions and/or other information may be communicated back to the wearer. Such information may be provided to the wearer by a display screen (for example, a desktop computer display, a smart phone display, a tablet computer display, a medical equipment display, or the like), by the hearing assist device itself (for example, by voice, beeps, or the like), or may be provided to the wearer in another manner. Medical personnel and/or emergency response personnel (for example, reachable at the 911 phone number) may be alerted when particular problems with the wearer are detected by the hearing assist device. The medical personnel may evaluate information received from the hearing assist device, and provide information back to the hearing assist device/wearer. The hearing assist device may provide the wearer with reminders, alarms, instructions, etc.
The hearing assist device may be configured with speech/voice recognition capability. For instance, the wearer may provide commands, such as by voice, to the hearing assist device. The hearing assist device may be configured to perform various audio processing functions to suppress background noise and/or other sounds, as well as amplifying other sounds, and may be configured to modify audio according to a particular frequency response of the hearing of the wearer. The hearing assist device may be configured to detect vibrations (for example, jaw movement of the wearer during talking), and may use the detected vibrations to aid in improving speech/voice recognition.
Hearing assist devices may be configured in various ways, according to embodiments. For instance, FIG. 1 shows a communication system 100 that includes a multi-sensor hearing assist device 102 that communicates with a near field communication (NFC)-enabled communications device 104, according to an exemplary embodiment. Hearing assist device 102 may be worn in association with the ear of a user, and may be configured to communicate with other devices, such as communications device 104. As shown in FIG. 1, hearing assist device 102 includes a plurality of sensors 106a and 106b, processing logic 108, an NFC transceiver 110, storage 112, and a rechargeable battery 114. These features of hearing assist device 102 are described as follows.
Sensors 106a and 106b are medical sensors that each sense a characteristic of the user and generate a corresponding sensor output signal. Although two sensors 106a and 106b are shown in hearing assist device 102 in FIG. 1, any number of sensors may be included in hearing assist device 102, including three sensors, four sensors, five sensors, etc. (e.g., tens of sensors, hundreds of sensors, etc.). Examples of sensors for sensors 106a and 106b include a blood pressure sensor, a heart rate sensor, a temperature sensor, a humidity sensor, a blood oxygen level sensor, a skin galvanometric level sensor, a brain wave information sensor, an arrhythmia onset detection sensor (for example, a chest strap with multiple sensor pads), a skin chemistry sensor, a motion sensor (e.g., to detect falling down impacts, long periods of activity, etc.), an air pressure sensor, etc. These and further types of sensors suitable for sensors 106a and 106b are further described elsewhere herein.
Processing logic 108 may be implemented in hardware (e.g., one or more processors, electrical circuits, etc.), or any combination of hardware with software and/or firmware. Processing logic 108 may receive sensor information from sensors 106a, 106b, etc., and may process the sensor information to generate processed sensor data. Processing logic 108 may execute one or more programs that define various operational characteristics, such as: (i) a sequence or order of retrieving sensor information from sensors of hearing assist device 102, (ii) sensor configurations and reconfigurations (via a preliminary setup or via adaptations over the course of time), (iii) routines by which particular sensor data is at least pre-processed, and (iv) one or more functions/actions to be performed based on particular sensor data values, etc.
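The program structure described above can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical model of such a program: the sensor names, polling order, sampling rates, and threshold actions are illustrative assumptions, not details of the embodiment.

```python
# Minimal sketch of a sensor-polling program such as processing logic 108
# might execute. Sensor names, ordering, and threshold actions are
# illustrative assumptions only.

POLL_ORDER = ["heart_rate", "temperature", "blood_oxygen"]  # (i) retrieval sequence

CONFIG = {  # (ii) per-sensor configuration, adaptable over time
    "heart_rate": {"sample_hz": 1.0, "alarm_above": 150},
    "temperature": {"sample_hz": 0.1, "alarm_above": 39.0},
    "blood_oxygen": {"sample_hz": 0.2, "alarm_below": 90},
}

def preprocess(name, raw):
    # (iii) routine by which particular sensor data is pre-processed;
    # a placeholder that passes the raw value through unchanged
    return raw

def poll_cycle(read_sensor, act):
    """One retrieval pass; read_sensor and act are platform-supplied hooks."""
    for name in POLL_ORDER:
        value = preprocess(name, read_sensor(name))
        cfg = CONFIG[name]
        # (iv) function/action performed based on particular sensor data values
        if "alarm_above" in cfg and value > cfg["alarm_above"]:
            act(name, value, "high")
        elif "alarm_below" in cfg and value < cfg["alarm_below"]:
            act(name, value, "low")
```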
For instance, processing logic 108 may store and/or access sensor data in storage 112, processed or unprocessed. Furthermore, processing logic 108 may access one or more programs stored in storage 112 for execution. Storage 112 may include one or more types of storage, including memory (e.g., random access memory (RAM), read only memory (ROM), etc.) that is volatile or non-volatile.
NFC transceiver 110 is configured to wirelessly communicate with a second device (for example, a local or remote supporting device), such as NFC-enabled communications device 104, according to NFC techniques. NFC uses magnetic induction between two loop antennas (e.g., coils, microstrip antennas, or the like) located within each other's near field, effectively forming an air-core transformer. As such, NFC communications occur over relatively short ranges (e.g., within a few centimeters), and are conducted at radio frequencies. For instance, in one example, NFC communications may be performed by NFC transceiver 110 at a 13.56 MHz frequency, with data transfers of up to 424 kilobits per second. In other embodiments, NFC transceiver 110 may be configured to perform NFC communications at other frequencies and data transfer rates. Examples of standards according to which NFC transceiver 110 may be configured to conduct NFC communications include ISO/IEC 18092 and those defined by the NFC Forum, which was founded in 2004 by Nokia, Philips and Sony.
NFC-enabled communications device 104 may be configured with an NFC transceiver to perform NFC communications. NFC-enabled communications device 104 may be any type of device that may be enabled with NFC capability, such as a docking station, a desktop computer (e.g., a personal computer, etc.), a mobile computing device (e.g., a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer (e.g., an Apple iPad™), a netbook, etc.), a mobile phone (e.g., a cell phone, a smart phone, etc.), a medical appliance, etc. Furthermore, NFC-enabled communications device 104 may be network-connected to enable hearing assist device 102 to communicate with entities over the network (e.g., cloud computers or servers, web services, etc.).
NFC transceiver 110 enables sensor data (processed or unprocessed) to be transmitted by processing logic 108 from hearing assist device 102 to NFC-enabled communications device 104. In this manner, the sensor data may be reported, processed, and/or analyzed externally to hearing assist device 102. Furthermore, NFC transceiver 110 enables processing logic 108 at hearing assist device 102 to receive data and/or instructions/commands from NFC-enabled communications device 104 in response to the transmitted sensor data. Furthermore, NFC transceiver 110 enables processing logic 108 at hearing assist device 102 to receive programs (e.g., program code), including new programs, program updates, applications, “apps”, and/or other programs from NFC-enabled communications device 104 that can be executed by processing logic 108 to change/update the functionality of hearing assist device 102.
Rechargeable battery 114 includes one or more electrochemical cells that store charge that may be used to power components of hearing assist device 102, including one or more of sensors 106a, 106b, etc., processing logic 108, NFC transceiver 110, and storage 112. Rechargeable battery 114 may be any suitable rechargeable battery type, including lead-acid, nickel cadmium (NiCd), nickel metal hydride (NiMH), lithium ion (Li-ion), and lithium ion polymer (Li-ion polymer). Charging of the battery may be through a typical tethered recharger or via NFC power delivery.
Although NFC communications are shown, alternative communication approaches can be employed. Such alternatives may include wireless power transfer schemes as well.
Hearing assist device 102 may be configured in any manner to be associated with the ear of a user. For instance, FIGS. 2-4 show various configurations for associating a hearing assist device with an ear of a user, according to exemplary embodiments. In FIG. 2, hearing assist device 102 may be a hearing aid type that is inserted partially or fully into an ear 202 of a user. As shown in FIG. 2, hearing assist device 102 includes sensors 106a-106n that contact the user. Example forms of hearing assist device 102 of FIG. 2 include ear buds, “receiver in the canal” hearing aids, “in the ear” (ITE) hearing aids, “invisible in canal” (IIC) hearing aids, “completely in canal” (CIC) hearing aids, etc. Although not illustrated, cochlear implant configurations may also be used.
In FIG. 3, hearing assist device 102 may be a hearing aid type that mounts on top of, or behind, ear 202 of the user. As shown in FIG. 3, hearing assist device 102 includes sensors 106a-106n that contact the user. Example forms of hearing assist device 102 of FIG. 3 include “behind the ear” (BTE) hearing aids, “open fit” or “over the ear” (OTE) hearing aids, eyeglasses hearing aids (e.g., that contain hearing aid functionality in or on the glasses arms), etc.
In FIG. 4, hearing assist device 102 may be a headset or headphones that mount on the head of the user and include speakers that are held close to the user's ears. As shown in FIG. 4, hearing assist device 102 includes sensors 106a-106n that contact the user. In the embodiment of FIG. 4, sensors 106a-106n may be spaced further apart in the headphones, including being dispersed in the ear pad(s) and/or along the headband that connects together the ear pads (when a headband is present).
It is noted that hearing assist device 102 may be configured in further forms, including combinations of the forms shown in FIGS. 2-4, and is not intended to be limited to the embodiments illustrated in FIGS. 2-4. For instance, hearing assist device 102 may be a cochlear implant-type hearing aid, or other type of hearing assist device. The following section describes some example forms of hearing assist device 102 with associated sensor configurations.
III. Example Hearing Assist Device Forms and Sensor Array Embodiments

As described above, hearing assist device 102 may be configured in various forms, and may include any number and type of sensors. For instance, FIG. 5 shows a hearing assist device 500 that is an example of hearing assist device 102 according to an exemplary embodiment. Hearing assist device 500 is configured to mount over an ear of a user, and has a portion that is at least partially inserted into the ear. A user may wear a single hearing assist device 500 on one ear, or may simultaneously wear first and second hearing assist devices 500 on the user's right and left ears, respectively.
As shown in FIG. 5, hearing assist device 500 includes a case or housing 502 that includes a first portion 504, a second portion 506, and a third portion 508. First portion 504 is shaped to be positioned behind/over the ear of a user. For instance, as shown in FIG. 5, first portion 504 has a crescent shape, and may optionally be molded in the shape of a user's outer ear (e.g., by taking an impression of the outer ear, etc.). Second portion 506 extends perpendicularly from a side of an end of first portion 504. Second portion 506 is shaped to be inserted at least partially into the ear canal of the user. Third portion 508 extends from second portion 506, and may be referred to as an earmold shaped to conform to the user's ear shape, to better adhere hearing assist device 500 to the user's ear.
As shown in FIG. 5, hearing assist device 500 further includes a speaker 512, a forward IR/UV (ultraviolet) communication transceiver 520, a BTLE (BLUETOOTH low energy) antenna 522, at least one microphone 524, a telecoil 526, a tethered sensor port 528, a skin communication conductor 534, a volume controller 540, and a communication and power delivery coil 542. Furthermore, hearing assist device 500 includes a plurality of medical sensors, including at least one pH sensor 510, an IR (infrared) or sonic distance sensor 514, an inner ear temperature sensor 516, a position/motion sensor 518, a WPT (wireless power transfer)/NFC coil 530, a switch 532, a glucose spectroscopy sensor 536, a heart rate sensor 538, and a subcutaneous sensor 544. In embodiments, hearing assist device 500 may include one or more of these further features and/or alternative features. The features of hearing assist device 500 are described as follows.
As shown in FIG. 5, speaker 512, IR or sonic distance sensor 514, and inner ear temperature sensor 516 are located on a circular surface of second portion 506 of hearing assist device 500 that faces into the ear of the user. Position/motion sensor 518 and pH sensor 510 are located on a perimeter surface of second portion 506, around the circular surface, that contacts the ear canal of the user. In alternative embodiments, one or more of these features may be located in/on different locations of hearing assist device 500.
pH sensor 510 is a sensor that may be present to measure a pH of skin of the user's inner ear. The measured pH value may be used to determine a medical problem of the user, such as an onset of stroke. pH sensor 510 may include one or more metallic plates. Upon receiving power (e.g., from rechargeable battery 114 of FIG. 1), pH sensor 510 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured pH value.
Speaker 512 (also referred to as a “loudspeaker”) is a speaker of hearing assist device 500 that broadcasts environmental sound received by microphone(s) 524, subsequently amplified and/or filtered by processing logic of hearing assist device 500, into the ear of the user to assist the user in hearing the environmental sound. Furthermore, speaker 512 may broadcast additional sounds into the ear of the user for the user to hear, including alerts (e.g., tones, beeping sounds), voice, and/or further sounds that may be generated by or received by processing logic of hearing assist device 500, and/or may be stored in hearing assist device 500.
IR or sonic distance sensor 514 is a sensor that may be present to sense a displacement distance. Upon receiving power, IR or sonic distance sensor 514 may generate an IR light pulse, a sonic (e.g., ultrasonic) pulse, or other light or sound pulse, that may be reflected in the ear of the user, and the reflection may be received by IR or sonic distance sensor 514. A time of reflection may be compared for a series of pulses to determine a displacement distance within the ear of the user. IR or sonic distance sensor 514 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured displacement distance.
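As a worked illustration of the time-of-flight principle described above, the following sketch converts round-trip pulse times into a displacement estimate; the speed-of-sound constant and the example timings are assumptions for illustration only.

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at ~20 C; an assumed constant for illustration

def distance_from_round_trip(round_trip_s):
    """One-way distance implied by a pulse's round-trip time."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

def displacement(round_trips_s):
    """Displacement between the first and last of a series of pulses."""
    first = distance_from_round_trip(round_trips_s[0])
    last = distance_from_round_trip(round_trips_s[-1])
    return last - first

# Example: round trips of 147 us then 146 us give about -0.00017 m,
# i.e., the reflecting surface (e.g., the eardrum) moved ~0.17 mm closer.
print(displacement([147e-6, 146e-6]))
```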
A distance and eardrum deflection that is determined using IR or sonic distance sensor 514 (e.g., by using a high rate sampling or continuous sampling) may be used to calculate an estimate of the “actual” or “true” decibel level of an audio signal being input to the ear of the user. By incorporating such functionality, hearing assist device 500 can perform the following when a user inserts and turns on hearing assist device 500: (i) automatically adjust the volume to fall within a target range; and (ii) prevent excess volume associated with unexpected loud sound events. It is noted that the amount of volume adjustment that may be applied can vary by frequency. It is also noted that the excess volume associated with unexpected loud sound events may be further prevented by using a hearing assist device that has a relatively tight fit, thereby allowing the hearing assist device to act as an ear plug.
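A minimal sketch of such volume limiting follows, assuming per-frequency-band target ranges expressed in decibels; the bands and limits below are hypothetical, not device specifications.

```python
# Hypothetical per-band target ranges (Hz band -> (min_db, max_db));
# values are illustrative assumptions.
TARGET_DB = {
    (20, 500): (45.0, 80.0),
    (500, 4000): (45.0, 85.0),
    (4000, 20000): (40.0, 75.0),
}

def gain_adjust_db(band_hz, estimated_db):
    """Gain change (dB) that pulls an estimated 'true' level into range."""
    lo, hi = TARGET_DB[band_hz]
    if estimated_db < lo:
        return lo - estimated_db      # boost quiet input up to the target range
    if estimated_db > hi:
        return hi - estimated_db      # cut sudden loud events down to the range
    return 0.0

# e.g., a 95 dB transient in the mid band is cut by 10 dB
print(gain_adjust_db((500, 4000), 95.0))  # -> -10.0
```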
Hearing efficiency and performance data over the spectrum of normal audible frequencies can be gathered by delivering each frequency (or frequency range) at an output volume level, measuring eardrum deflection characteristics, and delivering audible test questions to the user via hearing assist device 500. This can be accomplished solely by hearing assist device 500 or with assistance from a smartphone or other external device or service. For example, a user may respond to an audio (or textual) prompt “Can you hear this?” with a “yes” or “no” response. The response is received by microphone(s) 524 (or via touch input, for example) and processed internally or on an assisting external device to identify the response. Depending on the user's response, the amplitude of the audio output can be adjusted to determine a given user's hearing threshold for each frequency (or frequency range). From this hearing efficiency and performance data, input frequency equalization can be performed by hearing assist device 500 so as to deliver to the user audio signals that will be perceived in much the same way as someone with no hearing impairment. In addition, such data can be delivered to the assisting external device (e.g., to a smartphone) for use by such device in producing audio output for the user. For example, the assisting device can deliver an adjusted audio output tailored for the user if (i) the user is not wearing hearing assist device 500, (ii) the battery power of hearing assist device 500 is depleted, (iii) hearing assist device 500 is powered down, or (iv) hearing assist device 500 is operating in a lower power mode. In such situations, the supporting device can deliver the audio signal: (a) in an audible form via a speaker, generated with the intent of directly reaching the eardrum; (b) in an audible form intended for receipt and amplification control by hearing assist device 500 without further need for user-specific audio equalization; or (c) in a non-audible form (e.g., electromagnetic transmission) for receipt and conversion to an audible form by hearing assist device 500, again without further equalization.
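The threshold-sweep procedure described above might be sketched as follows; the tone-playback and user-response callbacks are assumed platform hooks, and the band list, starting level, and step size are illustrative assumptions.

```python
# Sketch of the audible threshold sweep described above; all parameters
# are illustrative assumptions.

TEST_BANDS_HZ = [250, 500, 1000, 2000, 4000, 8000]

def measure_thresholds(play_tone, ask_user, start_db=70.0, step_db=5.0, floor_db=0.0):
    """For each band, lower the level until "Can you hear this?" -> no.

    Returns band -> quietest level (dB) the user reported hearing.
    """
    thresholds = {}
    for hz in TEST_BANDS_HZ:
        level = start_db
        heard_at = None
        while level >= floor_db:
            play_tone(hz, level)
            if ask_user("Can you hear this?"):   # yes/no via microphone or touch
                heard_at = level
                level -= step_db
            else:
                break
        thresholds[hz] = heard_at
    return thresholds

def equalization_gains(thresholds, normal_db=20.0):
    """Per-band boost so perceived levels approximate unimpaired hearing."""
    return {hz: max(0.0, t - normal_db)
            for hz, t in thresholds.items() if t is not None}
```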
After testing and setup, a wearer may further tweak their recommended equalization via slide bars and the like, in a manner similar to adjusting equalization for other conventional audio equipment. Such tweaking can be carried out via the supporting device user interface. In addition, a plurality of equalization settings can be supported, with each being associated with a particular mode of operation of hearing assist device 500. That is, conversation in a quiet room with one other person might receive one equalization profile, while a concert hall might receive another. Modes can be selected in many automatic or commanded ways via either or both of hearing assist device 500 and the external supporting device. Automatic selection can be performed via analysis and classification of captured audio. Certain classifications may trigger selection of a particular mode. Commands may be delivered via any user input interface, such as voice input (voice-recognized commands), tactile input commands, etc.
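A minimal sketch of per-mode equalization with simple automatic selection follows; the mode names, band gains, and the toy classifier rule are illustrative assumptions rather than details of any embodiment.

```python
# Per-mode equalization profiles (dB boost per band); values assumed.
EQ_PROFILES = {
    "quiet_conversation": {250: 3.0, 1000: 6.0, 4000: 9.0},
    "concert_hall":       {250: 0.0, 1000: 3.0, 4000: 6.0},
}

def classify_environment(rms_level_db, speech_ratio):
    """Toy classifier: mostly-speech, low-level audio -> quiet conversation."""
    if speech_ratio > 0.6 and rms_level_db < 65.0:
        return "quiet_conversation"
    return "concert_hall"

def select_profile(rms_level_db, speech_ratio, commanded_mode=None):
    # A voiced or tactile command overrides automatic classification.
    mode = commanded_mode or classify_environment(rms_level_db, speech_ratio)
    return mode, EQ_PROFILES[mode]

print(select_profile(55.0, 0.8))  # -> ('quiet_conversation', {...})
```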
Audio modes may also comprise alternate or additional audio processing techniques. For example, in one mode, to enhance audio perspective and directionality, delays might be selectively introduced (or increased in a stereoscopic manner) to enhance a wearer's ability to discern the location of an audio source. Sensor data may support automatic mode selection in such situations. Detecting walking impacts and an outdoor GPS (Global Positioning System) location might automatically trigger such an enhanced perspective mode. A medical condition might trigger another mode which attenuates environmental audio while delivering synthesized voice commands to the wearer. In another exemplary mode, both echoes and delays might be introduced to simulate a theater environment. For example, when audio is being sourced by a television channel broadcast of a movie, the theater environment mode might be selected. Such selection may be in response to a set top box, television, or media player's commands, or by identifying one of the same as the audio source.
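As one possible illustration of the delay-based perspective enhancement, the sketch below computes an approximate interaural delay using a Woodworth-style head model; the head width and sample rate are assumptions.

```python
import math

SPEED_OF_SOUND_M_S = 343.0
HEAD_WIDTH_M = 0.18  # assumed ear-to-ear distance

def interaural_delay_s(source_angle_deg):
    """Approximate extra arrival time at the far ear (Woodworth-style model)."""
    theta = math.radians(source_angle_deg)
    return (HEAD_WIDTH_M / 2.0) * (theta + math.sin(theta)) / SPEED_OF_SOUND_M_S

def delay_samples(source_angle_deg, sample_rate_hz=16000):
    """Samples of delay to insert in the far-ear channel for perspective."""
    return round(interaural_delay_s(source_angle_deg) * sample_rate_hz)

# A source 90 degrees to one side: roughly 0.67 ms, ~11 samples at 16 kHz.
print(delay_samples(90))
```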
All of such functionality can be carried out by one or both of hearing assist device 500 and an external supporting device. When assisting hearing assist device 500, the external supporting device may receive the audio for processing: (i) directly via built-in microphones; (ii) from storage; or (iii) via yet another external device. Alternatively, the source audio may be captured by hearing assist device 500 itself and delivered via a wired or wireless pathway to the external supporting device for processing, before delivery of either the processed audio signals or substitute audio back to hearing assist device 500 for delivery to the wearer.
Similarly, sensor data may be captured in one or both of hearing assist device 500 and an external supporting device. Sensor data captured by hearing assist device 500 may likewise be delivered via such or other wired or wireless pathways to the external supporting device for (further) processing. The external supporting device may then respond to the sensor data received and processed by delivering audio content and/or hearing aid commands back to hearing assist device 500. Such commands may be to reconfigure some aspect of hearing assist device 500 or to manage communication or power delivery. Such audio content may be instructional, comprise queries, or consist of commands to be delivered to the wearer via the ear drums. Sensor data may be stored and displayed in some form locally on the external supporting device, along with similar audio, graphical or textual content, commands or queries. In addition, such sensor data can be further delivered to yet other external supporting devices for further processing, analysis and storage. Sensors within one or both of hearing assist device 500 and an external supporting device may be medical sensors or environmental sensors (e.g., latitude/longitude, velocity, temperature, wearer's physical orientation, acceleration, elevation, tilt, humidity, etc.).
Although not shown, hearing assist device 500 may also be configured with an imager that may be located near transceiver 520. The imager can then be used to capture images or video that may be relayed to one or more external supporting devices for real-time display, storage or processing. For example, upon detecting a medical situation and receiving no response to audible content queries delivered via hearing assist device 500, the imager can be commanded (by an internal or external command origin) to capture an image or a video sequence. Such imager output can be delivered to medical staff via a user's supporting smartphone so that a determination can be made as to the user's condition or the position/location of hearing assist device 500.
Inner ear temperature sensor 516 is a sensor that may be present to measure a temperature of the user. For instance, in an embodiment, inner ear temperature sensor 516 may include a lens used to measure inner ear temperature. Upon receiving power, IR light emitted by an IR light emitter may be reflected from the user's skin, such as the ear canal or ear drum, and received by a single temperature sensor element, a one-dimensional array of temperature sensor elements, a two-dimensional array of temperature sensor elements, or other configuration of temperature sensor elements. Inner ear temperature sensor 516 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured inner ear temperature.
Such a configuration may also be used to determine a distance to the user's ear drum. The IR light emitter and sensor may be used to determine a distance to the user's ear drum from hearing assist device 500, which may be used by processing logic to automatically control a volume of sound emitted from hearing assist device 500, as well as for other purposes. Furthermore, the IR light emitter/sensor may also be used as an imager that captures an image of the inside of the user's ear. This could be used to identify characteristics of vein structures inside the user's ear, for example. The IR light emitter/sensor could also be used to detect the user's heartbeat, as well as to perform further functions.
Position/motion sensor 518 includes one or more sensors that may be present to measure time of day, location, acceleration, orientation, vibrations, and/or other movement-related characteristics of the user. For instance, position/motion sensor 518 may include one or more of a GPS (global positioning system) receiver (to measure user position), an accelerometer (to measure acceleration of the user), a gyroscope (to measure orientation of the head of the user), a magnetometer (to determine a direction the user is facing), a vibration sensor (for example, a micro-electromechanical system (MEMS) vibration sensor), or the like. Position/motion sensor 518 may be used for various benefits, including determining whether a user has fallen (e.g., based on measured position, acceleration, orientation, etc.), for local VoD, and many more benefits. Position/motion sensor 518 may generate a sensor output signal (e.g., an electrical signal) that indicates one or more of the measured time of day, location, acceleration, orientation, vibration, etc.
The sensor information indicated by position/motion sensor 518 and/or other sensors may be used for various purposes. For instance, position/motion information may be used to determine that the user has fallen down/collapsed. In response, voice and/or video assist (e.g., by a handheld device in communication with hearing assist device 500) may be used to gather feedback from the user (e.g., to find out if they are ok, and/or to further supplement the sensor data collection which triggered the feedback request). Such sensor data and feedback information, if warranted, can be automatically forwarded to medical staff, ambulance services, and/or family members, for example, as described elsewhere herein. The analysis of the data that triggered the forwarding process may be performed in whole or in part on one (or both) hearing assist devices 500, and/or on the assisting local device (e.g., a smart phone, tablet computer, set top box, TV, etc., in communication with hearing assist device 500), and/or on remote computing systems (e.g., at medical staff offices or as might be available through a cloud or portal service).
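A minimal fall-detection sketch along these lines follows; the impact and tilt thresholds, and the feedback/notification hooks, are illustrative assumptions.

```python
# Minimal fall-detection heuristic from accelerometer/orientation samples.
# Thresholds and the callback hooks are illustrative assumptions.

IMPACT_G = 2.5          # impact spike threshold, in g
HORIZONTAL_DEG = 60.0   # tilt from vertical that counts as lying down

def detect_fall(accel_magnitude_g, tilt_from_vertical_deg):
    """True if a large impact coincides with a near-horizontal posture."""
    return accel_magnitude_g >= IMPACT_G and tilt_from_vertical_deg >= HORIZONTAL_DEG

def handle_sample(accel_g, tilt_deg, ask_user, notify):
    if not detect_fall(accel_g, tilt_deg):
        return
    # Voice-assist feedback before escalating, as described above;
    # no response (or "no") leads to automatic forwarding.
    if ask_user("Are you OK?") is not True:
        notify("possible fall: no response from wearer")
```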
As shown in FIG. 5, forward IR/UV (ultraviolet) communication transceiver 520, BTLE antenna 522, microphone(s) 524, telecoil 526, tethered sensor port 528, WPT/NFC coil 530, switch 532, skin communication conductor 534, glucose spectroscopy sensor 536, heart rate sensor 538, volume controller 540, and communication and power delivery coil 542 are located at different locations in/on first portion 504 of hearing assist device 500. In alternative embodiments, one or more of these features may be located in/on different locations of hearing assist device 500.
Forward IR/UV communication transceiver 520 is a communication mechanism that may be present to enable communications with another device, such as a smart phone, computer, etc. Forward IR/UV communication transceiver 520 may receive information/data from processing logic of hearing assist device 500 to be transmitted to the other device in the form of modulated light (e.g., IR light, UV light, etc.), and may receive information/data in the form of modulated light from the other device to be provided to the processing logic of hearing assist device 500. Forward IR/UV communication transceiver 520 may enable low power communications for hearing assist device 500, to reduce a load on a battery of hearing assist device 500. In an embodiment, an emitter/receiver of forward IR/UV communication transceiver 520 may be positioned on housing 502 to face forward in a direction a wearer of hearing assist device 500 faces. In this manner, forward IR/UV communication transceiver 520 may communicate with a device held by the wearer, such as a smart phone, a tablet computer, etc., to provide text to be displayed to the wearer, etc.
BTLE antenna 522 is a communication mechanism coupled to a Bluetooth™ transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc. BTLE antenna 522 may receive information/data from processing logic of hearing assist device 500 to be transmitted to the other device according to the Bluetooth™ specification, and may receive information/data transmitted according to the Bluetooth™ specification from the other device to be provided to the processing logic of hearing assist device 500.
Microphone(s) 524 is a sensor that may be present to receive environmental sounds, including voice of the user, voice of other persons, and other sounds in the environment (e.g., traffic noise, music, etc.). Microphone(s) 524 may include any number of microphones, and may be configured in any manner, including being omni-directional (non-directional), directional, etc. Microphone(s) 524 generates an audio signal based on the received environmental sound that may be processed and/or filtered by processing logic of hearing assist device 500, may be stored in digital form in hearing assist device 500, may be transmitted from hearing assist device 500, and may be used in other ways.
Telecoil 526 is a communication mechanism that may be present to enable communications with another device. Telecoil 526 is an audio induction loop that enables audio sources to be directly coupled to hearing assist device 500 in a manner known to persons skilled in the relevant art(s). Telecoil 526 may be used with a telephone, a radio system, and induction loop systems that transmit sound to hearing aids.
Tethered sensor port 528 is a port with which a remote sensor (separate from hearing assist device 500) may be coupled to interface with hearing assist device 500. For instance, port 528 may be an industry standard or proprietary connector type. A remote sensor may have a tether (one or more wires) with a connector at an end that may be plugged into port 528. Any number of tethered sensor ports 528 may be present. Examples of sensor types that may interface with tethered sensor port 528 include brainwave sensors (e.g., electroencephalography (EEG) sensors that record electrical activity along the scalp according to EEG techniques) attached to the user's scalp, heart rate/arrhythmia sensors attached to a chest of the user, etc.
WPT/NFC coil 530 is a communication mechanism coupled to an NFC transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc., as described above with respect to NFC transceiver 110 (FIG. 1).
Switch 532 is a switching mechanism that may be present on housing 502 to perform various functions, such as switching power on or off, switching between different power and/or operational modes, etc. A user may interact with switch 532 to switch power on or off, to switch between modes, etc. Switch 532 may be any type of switch, including a toggle switch, a push button switch, a rocker switch, a three- (or greater) position switch, a dial switch, etc.
Skin communication conductor 534 is a communication mechanism coupled to a transceiver in hearing assist device 500 that may be present to enable communications with another device, such as a smart phone, computer, etc., through skin of the user. For instance, skin communication conductor 534 may enable communications to flow between hearing assist device 500 and a smart phone held in the hand of the user, a second hearing assist device worn on an opposite ear of the user, a pacemaker or other device implanted in the user, or other communications device in communication with skin of the user. A transceiver of hearing assist device 500 may receive information/data from processing logic to be transmitted from skin communication conductor 534 through the user's skin to the other device, and the transceiver may receive information/data at skin communication conductor 534 that was transmitted from the other device through the user's skin to be provided to the processing logic of hearing assist device 500.
Glucose spectroscopy sensor 536 is a sensor that may be present to measure a glucose level of the user using spectroscopy techniques in a manner known to persons skilled in the relevant art(s). Such a measurement may be valuable in determining whether a user has diabetes. Such a measurement can also be valuable in helping a diabetic user determine whether insulin is needed, etc. (e.g., hypoglycemia or hyperglycemia). Glucose spectroscopy sensor 536 may be configured to monitor glucose in combination with subcutaneous sensor 544. As shown in FIG. 5, subcutaneous sensor 544 is shown separate from, and proximate to, hearing assist device 500. In an alternative embodiment, subcutaneous sensor 544 may be located in/on hearing assist device 500. Subcutaneous sensor 544 is a sensor that may be present to measure any attribute of a user's health, characteristics or status. For example, subcutaneous sensor 544 may be a glucose sensor implanted under the skin behind the ear so as to provide a reasonably close mating location with communication and power delivery coil 542. When powered, glucose spectroscopy sensor 536 may measure the user glucose level with respect to subcutaneous sensor 544, and may generate a sensor output signal (e.g., an electrical signal) that indicates a glucose level of the user.
Heart rate sensor 538 is a sensor that may be present to measure a heart rate of the user. For instance, in an embodiment, upon receiving power, heart rate sensor 538 may measure pressure changes with respect to a blood vessel in the ear, or may measure heart rate in another manner, such as changes in reflectivity or otherwise, as would be known to persons skilled in the relevant art(s). Missed beats, elevated heart rate, and further heart conditions may be detected in this manner. Heart rate sensor 538 may generate a sensor output signal (e.g., an electrical signal) that indicates a measured heart rate. In addition, subcutaneous sensor 544 might comprise at least a portion of an internal heart monitoring device which communicates heart status information and data via communication and power delivery coil 542. Subcutaneous sensor 544 could also be associated with or be part of a pacemaker or defibrillating implant, insulin pump, etc.
Volume controller 540 is a user interface mechanism that may be present on housing 502 to enable a user to modify a volume at which sound is broadcast from speaker 512. A user may interact with volume controller 540 to increase or decrease the volume. Volume controller 540 may be any suitable controller type (e.g., a potentiometer), including a rotary volume dial, a thumb wheel, etc.
Instead of supporting both power delivery and communications, communication and power delivery coil 542 may be dedicated to one or the other. For example, such coil may only support power delivery (if needed to charge or otherwise deliver power to subcutaneous sensor 544), and can be replaced with any other type of communication system that supports communication with subcutaneous sensor 544. It is noted that the coils/antennas of hearing assist device 500 may be separately included in hearing assist device 500, or, in embodiments, two or more of the coils/antennas may be combined as a single coil/antenna.
The processing logic of hearing assist device 500 may be operable to set up/configure and adaptively reconfigure each of the sensors of hearing assist device 500 based on an analysis of the data obtained by such sensor as well as on an analysis of data obtained by other sensors. For example, a first sensor of hearing assist device 500 may be configured to operate at one sampling rate (or sensing rate) which is analyzed periodically or continuously. Furthermore, a second sensor of hearing assist device 500 can be in a sleep or power down mode to conserve battery power. When a threshold is exceeded or other triggering event occurs, such first sensor can be reconfigured by the processing logic of hearing assist device 500 to sample at a higher rate or continuously, and the second sensor can be powered up and configured. Additionally, multiple types of sensor data can be used to construct or derive single conclusions. For example, heart rate can be gathered multiple ways (via multiple sensors) and combined to provide a more robust and trustworthy conclusion. Likewise, a combination of data obtained from different sensors (e.g., pH plus temperature plus horizontal posture plus impact detected plus weak heart rate) may result in an ambulance being called or indicate a possible heart attack. Or, if glucose is too high, hyperglycemia may be indicated, while if glucose is too low, hypoglycemia may be indicated. Or, if glucose and heart data are acceptable, then a stroke may be indicated. This processing can be done in whole or in part within hearing assist device 500, with audio content being played to the wearer thereof to gather further voiced information from the wearer to assist in conclusions or to warn the wearer.
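The multi-sensor fusion examples above might be sketched as a simple rule set; all threshold values and rule details below are illustrative assumptions, not clinical criteria.

```python
# Sketch of combining multiple sensor readings into a single conclusion,
# per the examples above. All thresholds are illustrative assumptions.

def assess(readings):
    """readings: dict with keys like 'glucose', 'heart_rate',
    'horizontal', 'impact'. Returns a conclusion string."""
    weak_heart = readings.get("heart_rate", 70) < 40
    collapsed = readings.get("horizontal", False) and readings.get("impact", False)
    if collapsed and weak_heart:
        return "call ambulance: possible heart attack"
    glucose = readings.get("glucose")
    if glucose is not None:
        if glucose > 180:
            return "possible hyperglycemia"
        if glucose < 70:
            return "possible hypoglycemia"
    if collapsed:
        return "glucose and heart data acceptable: possible stroke"
    return "no action"

print(assess({"glucose": 60}))                       # -> possible hypoglycemia
print(assess({"horizontal": True, "impact": True}))  # -> possible stroke
```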
FIG. 6 shows a hearing assist device 600 that is an example of hearing assist device 102 according to an exemplary embodiment. Hearing assist device 600 is configured to be at least partially inserted into the ear canal of a user (for example, an ear bud). A user may wear a single hearing assist device 600 on one ear, or may simultaneously wear first and second hearing assist devices 600 on the user's right and left ears, respectively.
As shown in FIG. 6, hearing assist device 600 includes a case or housing 602 that has a generally cylindrical shape, and includes a first portion 604, a second portion 606, and a third portion 608. First portion 604 is shaped to be inserted at least partially into the ear canal of the user. Second portion 606 extends coaxially from first portion 604. Third portion 608 is a handle that extends from second portion 606. A user grasps third portion 608 to extract hearing assist device 600 from the ear of the user.
As shown in FIG. 6, hearing assist device 600 further includes pH sensor 510, speaker 512, IR (infrared) or sonic distance sensor 514, inner ear temperature sensor 516, and an antenna 610. pH sensor 510, speaker 512, IR (infrared) or sonic distance sensor 514, and inner ear temperature sensor 516 may function and be configured similarly as described above. Antenna 610 may include one or more coils or other types of antennas to function as any one or more of the coils/antennas described above with respect to FIG. 5 and/or elsewhere herein (e.g., an NFC antenna, a Bluetooth™ antenna, etc.).
It is noted that antennas, such as coils, mentioned herein may be implemented as any suitable type of antenna, including a coil, a microstrip antenna, or other antenna type. Although further sensors, communication mechanisms, switches, etc., of hearing assist device 500 of FIG. 5 are not shown included in hearing assist device 600, one or more further of these features of hearing assist device 500 may additionally and/or alternatively be included in hearing assist device 600. Furthermore, sensors that are present in a hearing assist device may all operate simultaneously, or one or more sensors may be run periodically, and may be off at other times (e.g., based on an algorithm in program code, etc.). By running fewer sensors at any one time, battery power may be conserved. Note that in addition to one or more of sensor data compression, analysis, encryption, and processing, sensor management (duty cycling, continuous operations, threshold triggers, sampling rates, etc.) can be performed in whole or in part in any one or both hearing assist devices, the assisting local device (e.g., smart phone, tablet computer, set top box, TV, etc.), and/or remote computing systems (at medical staff offices or as might be available through a cloud or portal service).
Hearing assist devices 102, 500, and 600 may be configured in various ways with circuitry to process sensor information, and to communicate with other devices. The next section describes some example circuit embodiments for hearing assist devices, as well as processes for communicating with other devices, and for further functionality.
IV. Example Hearing Assist Device Circuit and Process Embodiments

According to embodiments, hearing assist devices may be configured in various ways to perform their functions. For instance, FIG. 7 shows a circuit block diagram of a hearing assist device 700 that is configured to communicate with external devices according to multiple communication schemes, according to an exemplary embodiment. Hearing assist devices 102, 500, and 600 may each be implemented similarly to hearing assist device 700, according to embodiments.
As shown in FIG. 7, hearing assist device 700 includes a plurality of sensors 702a-702c, processing logic 704, a microphone 706, an amplifier 708, a filter 710, an analog-to-digital (A/D) converter 712, a speaker 714, an NFC coil 716, an NFC transceiver 718, an antenna 720, a Bluetooth™ transceiver 722, a charge circuit 724, a battery 726, a plurality of sensor interfaces 728a-728c, and a digital-to-analog (D/A) converter 764. Processing logic 704 includes a digital signal processor (DSP) 730, a central processing unit (CPU) 732, and a memory 734. Sensors 702a-702c, processing logic 704, amplifier 708, filter 710, A/D converter 712, NFC transceiver 718, Bluetooth™ transceiver 722, charge circuit 724, sensor interfaces 728a-728c, D/A converter 764, DSP 730, and CPU 732 may each be implemented in the form of hardware (e.g., electrical circuits, digital logic, etc.) or a combination of hardware and software/firmware. The features of hearing assist device 700 shown in FIG. 7 are described as follows.
For instance, hearing aid functionality of hearing assist device 700 is first described. In FIG. 7, microphone 706, amplifier 708, filter 710, A/D converter 712, processing logic 704, D/A converter 764, and speaker 714 provide at least some of the hearing aid functionality of hearing assist device 700. Microphone 706 is a sensor that receives environmental sounds, including voice of the user of hearing assist device 700, voice of other persons, and other sounds in the environment (e.g., traffic noise, music, etc.). Microphone 706 may be configured in any manner, including being omni-directional (non-directional), directional, etc., and may include one or more microphones. Microphone 706 may be a miniature microphone conventionally used in hearing aids, as would be known to persons skilled in the relevant art(s), or may be another suitable type of microphone. Microphone(s) 524 (FIG. 5) is an example of microphone 706. Microphone 706 generates a received audio signal 740 based on the received environmental sound.
Amplifier 708 receives and amplifies received audio signal 740 to generate an amplified audio signal 742. Amplifier 708 may be any type of amplifier, including a low-noise amplifier for amplifying low level signals. Filter 710 receives and processes amplified audio signal 742 to generate a filtered audio signal 744. Filter 710 may be any type of filter, including a filter configured to filter out noise, other high frequencies, and/or other frequencies as desired. A/D converter 712 receives filtered audio signal 744, which may be an analog signal, and converts filtered audio signal 744 to digital form, to generate a digital audio signal 746. A/D converter 712 may be configured in any manner, including as a conventional A/D converter.
Processing logic 704 receives digital audio signal 746, and may process digital audio signal 746 in any manner to generate processed digital audio signal 762. For instance, as shown in FIG. 7, DSP 730 may receive digital audio signal 746, and may perform digital signal processing on digital audio signal 746 to generate processed digital audio signal 762. DSP 730 may be configured in any manner, including as a conventional DSP known to persons skilled in the relevant art(s), or in another manner. DSP 730 may perform any suitable type of digital signal processing to process/filter digital audio signal 746, including processing digital audio signal 746 in the frequency domain to manipulate the frequency spectrum of digital audio signal 746 (e.g., according to Fourier transform/analysis techniques, etc.). DSP 730 may amplify particular frequencies, may attenuate particular frequencies, and may otherwise modify digital audio signal 746 in the discrete domain. DSP 730 may perform the signal processing for various reasons, including noise cancellation or hearing loss compensation. For instance, DSP 730 may process digital audio signal 746 to compensate for a personal hearing frequency response of the user, such as compensating for poor hearing of high frequencies, middle range frequencies, or other personal frequency response characteristics of the user.
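A minimal sketch of such frequency-domain compensation follows, assuming a simple two-band gain profile; the cutoff frequency and boost amount are illustrative, not a prescribed fitting.

```python
import numpy as np

# Sketch of frequency-domain hearing-loss compensation such as DSP 730
# might apply. The two-band gain profile is an illustrative assumption.

def compensate(block, sample_rate_hz=16000, boost_above_hz=2000, boost_db=12.0):
    """Boost high frequencies of one audio block to offset high-band loss."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate_hz)
    gain = np.where(freqs >= boost_above_hz, 10 ** (boost_db / 20.0), 1.0)
    return np.fft.irfft(spectrum * gain, n=len(block))

# Example: a unit-amplitude 4 kHz tone comes back ~4x larger (12 dB ~= x3.98).
t = np.arange(256) / 16000.0
out = compensate(np.sin(2 * np.pi * 4000 * t))
print(out.max())  # ~ 3.98
```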
In one embodiment, DSP 730 may be pre-configured to process digital audio signal 746. In another embodiment, DSP 730 may receive instructions from CPU 732 regarding how to process digital audio signal 746. For instance, CPU 732 may access one or more DSP configurations stored in memory 734 (e.g., in other data 768) that may be provided to DSP 730 to configure DSP 730 for digital signal processing of digital audio signal 746. For instance, CPU 732 may select a DSP configuration based on a hearing assist mode selected by a user of hearing assist device 700 (e.g., by interacting with switch 532, etc.).
As shown in FIG. 7, D/A converter 764 receives processed digital audio signal 762, and converts processed digital audio signal 762 to analog form, generating processed audio signal 766. D/A converter 764 may be configured in any manner, including as a conventional D/A converter. Speaker 714 receives processed audio signal 766, and broadcasts sound generated based on processed audio signal 766 into the ear of the user. The user is enabled to hear the broadcast sound, which may be amplified, filtered, and/or otherwise frequency manipulated with respect to the sound received by microphone 706. Speaker 714 may be a miniature speaker conventionally used in hearing aids, as would be known to persons skilled in the relevant art(s), or may be another suitable type of speaker. Speaker 512 (FIG. 5) is an example of speaker 714. Speaker 714 may include one or more speakers.
Hearing assist device 700 of FIG. 7 is further described as follows with respect to FIGS. 8-14. FIG. 8 shows a flowchart 800 of a process for a hearing assist device that processes and transmits sensor data and receives a command from a second device, according to an exemplary embodiment. In an embodiment, hearing assist device 700 (as well as any of hearing assist devices 102, 500, and 600) may perform flowchart 800. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following description of flowchart 800 and hearing assist device 700.
Flowchart 800 begins with step 802. In step 802, a sensor output signal is received from a medical sensor of the hearing assist device that senses a characteristic of the user. For example, as shown in FIG. 7, sensors 702a-702c may each sense/measure information about a health characteristic of the user of hearing assist device 700. Sensors 702a-702c may each be one of the sensors shown in FIGS. 5 and 6, and/or mentioned elsewhere herein. Although three sensors are shown in FIG. 7 for purposes of illustration, other numbers of sensors may be present in hearing assist device 700, including one sensor, two sensors, or greater numbers of sensors. Sensors 702a-702c each may generate a corresponding sensor output signal 758a-758c (e.g., an electrical signal) that indicates the measured information about the corresponding health characteristic. For instance, sensor output signals 758a-758c may be analog or digital signals having levels or values corresponding to the measured information.
Sensor interfaces 728a-728c are each optionally present, depending on whether the corresponding sensor outputs a sensor output signal that needs to be modified to be receivable by CPU 732. For instance, each of sensor interfaces 728a-728c may include an amplifier, filter, and/or A/D converter (e.g., similar to amplifier 708, filter 710, and A/D converter 712) that respectively amplify (e.g., increase or decrease), filter (e.g., reduce particular frequencies), and/or convert to digital form the corresponding sensor output signal. Sensor interfaces 728a-728c (when present) respectively output modified sensor output signals 760a-760c.
In step 804, the sensor output signal is processed to generate processed sensor data. For instance, as shown in FIG. 7, processing logic 704 receives modified sensor output signals 760a-760c. Processing logic 704 may process modified sensor output signals 760a-760c in any manner to generate processed sensor data. For instance, as shown in FIG. 7, CPU 732 may receive modified sensor output signals 760a-760c. CPU 732 may process the sensor information in one or more of modified sensor output signals 760a-760c to generate processed sensor data. For instance, CPU 732 may manipulate the sensor information (e.g., according to an algorithm of code 738) to convert the sensor information into a presentable form (e.g., scaling the sensor information, adding or subtracting a constant to/from the sensor information, etc.). Furthermore, CPU 732 may transmit the sensor information of modified sensor output signals 760a-760c to DSP 730 to be digital signal processed by DSP 730 to generate processed sensor data, and may receive the processed sensor data from DSP 730. The processed and/or raw (unprocessed) sensor data may optionally be stored in memory 734 (e.g., as sensor data 736).
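The scaling and offsetting mentioned above may be illustrated with a brief Python sketch; the calibration constants and the temperature interpretation are assumptions chosen for the example only.

    # Hypothetical conversion of raw ADC counts from a sensor such as 702a
    # into a presentable value by scaling and adding a constant.
    def process_sensor_sample(raw_counts, scale=0.02, offset=-40.0):
        """Scale raw sensor counts into engineering units (assumed calibration)."""
        return raw_counts * scale + offset

    reading = process_sensor_sample(3250)  # e.g., 25.0 degrees Celsius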
In step 806, the processed sensor data is wirelessly transmitted from the hearing assist device to a second device. For instance, as shown in FIG. 7, CPU 732 may provide the sensor data (processed or raw) (e.g., from CPU registers, from DSP 730, from memory 734, etc.) to a transceiver to be transmitted from hearing assist device 700. In the embodiment of FIG. 7, hearing assist device 700 includes an NFC transceiver 718 and a BT transceiver 722, which may each be used to transmit sensor data from hearing assist device 700. In alternative embodiments, hearing assist device 700 may include one or more additional and/or alternative transceivers that may transmit sensor data from hearing assist device 700, including a Wi-Fi transceiver, a forward IR/UV communication transceiver (e.g., transceiver 520 of FIG. 5), a telecoil transceiver (which may transmit via telecoil 526), a skin communication transceiver (which may transmit via skin communication conductor 534), etc. The operation of such alternative transceivers will become apparent to persons skilled in the relevant art(s) based on the teachings provided herein.
As shown in FIG. 7, NFC transceiver 718 may receive an information signal 750 from CPU 732 that includes sensor data for transmitting. In an embodiment, NFC transceiver 718 may modulate the sensor data onto NFC antenna signal 748 to be transmitted from hearing assist device 700 by NFC coil 716 when NFC coil 716 is energized by an RF field generated by a second device.
Similarly, BT transceiver 722 may receive an information signal 754 from CPU 732 that includes sensor data for transmitting. In an embodiment, BT transceiver 722 may modulate the sensor data onto BT antenna signal 752 to be transmitted from hearing assist device 700 by antenna 720 (e.g., BTLE antenna 522 of FIG. 5), according to a Bluetooth™ communication protocol or standard.
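Before handoff to either transceiver, the sensor data may be packed into a compact payload. The following Python sketch shows one hypothetical framing; the field layout, sensor ID, and values are assumptions, not a format defined herein.

    import struct

    def frame_sensor_data(sensor_id, timestamp_s, value):
        """Pack a reading as little-endian bytes: ID, timestamp, float value."""
        return struct.pack("<BIf", sensor_id, timestamp_s, value)

    payload = frame_sensor_data(sensor_id=1, timestamp_s=1700000000, value=72.5)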
In embodiments, a hearing assist device may communicate with one or more other devices to provide sensor data and/or other information, and to receive information. For instance, FIG. 9 shows a communication system 900 that includes a hearing assist device communicating with other communication devices, according to an exemplary embodiment. As shown in FIG. 9, communication system 900 includes hearing assist device 700, a mobile computing device 902, a stationary computing device 904, and a server 906. System 900 is described as follows.
Mobile computing device 902 (for example, a local supporting device) is a device capable of communicating with hearing assist device 700 according to one or more communication techniques. For instance, as shown in FIG. 9, mobile computing device 902 includes a telecoil 910, one or more microphones 912, an IR/UV communication transceiver 914, a WPT/NFC coil 916, and a Bluetooth™ antenna 918. In embodiments, mobile computing device 902 may include one or more of these features and/or alternative or additional features (e.g., communication mechanisms, etc.). Mobile computing device 902 may be any type of mobile electronic device, including a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer (e.g., an Apple iPad™), a netbook, a mobile phone (e.g., a cell phone, a smart phone, etc.), a special purpose medical device, etc. The features of mobile computing device 902 shown in FIG. 9 are described as follows.
Telecoil 910 is a communication mechanism that may be present to enable mobile computing device 902 to communicate with hearing assist device 700 via a telecoil (e.g., telecoil 526 of FIG. 5). For instance, telecoil 910 and an associated transceiver may enable mobile computing device 902 to couple audio sources and/or other communications to hearing assist device 700 in a manner known to persons skilled in the relevant art(s).
Microphone(s) 912 may be present to receive voice of a user of mobile computing device 902. For instance, the user may provide instructions for mobile computing device 902 and/or for hearing assist device 700 by speaking into microphone(s) 912. The received voice may be transmitted to hearing assist device 700 (in digital or analog form) according to any communication mechanism, or may be converted into data and/or commands to be provided to hearing assist device 700 to cause functions/actions in hearing assist device 700. Microphone(s) 912 may include any number of microphones, and may be configured in any manner, including being omni-directional (non-directional), directional, etc.
IR/UV communication transceiver 914 is a communication mechanism that may be present to enable communications with hearing assist device 700 via an IR/UV communication transceiver of hearing assist device 700 (e.g., forward IR/UV communication transceiver 520 of FIG. 5). IR/UV communication transceiver 914 may receive information/data from and/or transmit information/data to hearing assist device 700 (e.g., in the form of modulated light, as described above).
WPT/NFC coil 916 is an NFC antenna coupled to an NFC transceiver in mobile computing device 902 that may be present to enable NFC communications with an NFC communication mechanism of hearing assist device 700 (e.g., NFC transceiver 110 of FIG. 1, NFC coil 530 of FIG. 5). WPT/NFC coil 916 may be used to receive information/data from and/or transmit information/data to hearing assist device 700.
Bluetooth™ antenna 918 is a communication mechanism coupled to a Bluetooth™ transceiver in mobile computing device 902 that may be present to enable communications with hearing assist device 700 (e.g., BT transceiver 722 and antenna 720 of FIG. 7). Bluetooth™ antenna 918 may be used to receive information/data from and/or transmit information/data to hearing assist device 700.
As shown in FIG. 9, mobile computing device 902 and hearing assist device 700 may exchange communication signals 920 according to any communication mechanism/protocol/standard mentioned herein or otherwise known. According to step 806, hearing assist device 700 may wirelessly transmit sensor data to mobile computing device 902.
Stationary computing device 904 (for example, a local supporting device) is also a device capable of communicating with hearing assist device 700 according to one or more communication techniques. For instance, stationary computing device 904 may be capable of communicating with hearing assist device 700 according to any of the communication mechanisms shown for mobile computing device 902 in FIG. 9, and/or according to other communication mechanisms/protocols/standards described elsewhere herein or otherwise known. Stationary computing device 904 may be any type of stationary electronic device, including a desktop computer (e.g., a personal computer, etc.), a docking station, a set top box, a gateway device, an access point, special purpose medical equipment, etc.
As shown in FIG. 9, stationary computing device 904 and hearing assist device 700 may exchange communication signals 922 according to any communication mechanism/protocol/standard mentioned herein or otherwise known. According to step 806, hearing assist device 700 may wirelessly transmit sensor data to stationary computing device 904.
It is noted that mobile computing device 902 (and/or stationary computing device 904) may communicate with server 906 (for example, a remote supporting device, a third device). For instance, as shown in FIG. 9, mobile computing device 902 (and/or stationary computing device 904) may be communicatively coupled with server 906 by network 908. Network 908 may be any type of communication network, including a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a phone network (e.g., a cellular network, a land based network), or a combination of communication networks, such as the Internet. Network 908 may include wired and/or wireless communication pathway(s) implemented using any of a wide variety of communication media and associated protocols. For example, such communication pathway(s) may comprise wireless communication pathways implemented via radio frequency (RF) signaling, infrared (IR) signaling, or the like. Such signaling may be carried out using long-range wireless protocols such as WIMAX® (IEEE 802.16) or GSM (Global System for Mobile Communications), medium-range wireless protocols such as WI-FI® (IEEE 802.11), and/or short-range wireless protocols such as BLUETOOTH® or any of a variety of IR-based protocols. Such communication pathway(s) may also comprise wired communication pathways established over twisted pair, Ethernet cable, coaxial cable, optical fiber, or the like, using suitable communication protocols therefor. It is noted that security protocols (e.g., private key exchange, etc.) may be used to protect sensitive health information that is communicated by hearing assist device 700 to and from remote devices.
Server 906 may be any computer system, including a stationary computing device, a server computer, a mobile computing device, etc. Server 906 may include a web service, an API (application programming interface), or other service or interface for communications.
Sensor data and/or other information may be transmitted (for example, relayed) to server 906 over network 908 to be processed. After such processing, in response, server 906 may transmit processed data, instructions, and/or other information through network 908 to mobile computing device 902 (and/or stationary computing device 904) to be transmitted to hearing assist device 700 to be stored, to cause a function/action at hearing assist device 700, and/or for other reasons.
Referring back to FIG. 8, in step 808, at least one command is received from the second device at the hearing assist device. For instance, referring to FIG. 7, hearing assist device 700 may receive a command wirelessly transmitted in a communication signal from a second device at NFC coil 716, antenna 720, or other antenna or communication mechanism at hearing assist device 700. In the example of NFC coil 716, the command may be transmitted from NFC coil 716 on NFC antenna signal 748 to NFC transceiver 718. NFC transceiver 718 may demodulate command data from the received communication signal, and provide the command to CPU 732. In the example of antenna 720, the command may be transmitted from antenna 720 on BT antenna signal 752 to BT transceiver 722. BT transceiver 722 may demodulate command data from the received communication signal, and provide the command to CPU 732.
CPU 732 may execute the received command. The received command may cause hearing assist device 700 to perform one or more functions/actions. For instance, in embodiments, the command may cause hearing assist device 700 to turn on or off, to change modes, to activate or deactivate one or more sensors, to wirelessly transmit further information, to execute particular program code (e.g., stored as code 738 in memory 734), to play a sound (e.g., an alert, a tone, a beeping noise, pre-recorded or synthesized voice, etc.) from speaker 714 to the user to inform the user of information and/or cause the user to perform a function/action, and/or cause one or more additional and/or alternative functions/actions to be performed by hearing assist device 700. Further examples of such commands and functions/actions are described elsewhere herein.
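One possible realization of such command handling is a dispatch table, sketched below in Python. The opcodes, method names, and stub device class are hypothetical and shown only to make the control flow concrete.

    class HearingAssistDevice:
        """Stub standing in for device-side actions (names are assumptions)."""
        def power_off(self): print("powering off")
        def set_mode(self, mode): print("mode:", mode)
        def activate_sensor(self, name): print("sensor on:", name)
        def play_sound(self, clip): print("playing:", clip)

    COMMANDS = {
        0x01: lambda dev: dev.power_off(),
        0x02: lambda dev: dev.set_mode("directional"),
        0x03: lambda dev: dev.activate_sensor("heart_rate"),
        0x04: lambda dev: dev.play_sound("alert_tone"),
    }

    def execute_command(device, opcode):
        """Run the handler for a demodulated command opcode, if recognized."""
        handler = COMMANDS.get(opcode)
        if handler is None:
            return False  # unknown command is ignored
        handler(device)
        return True

    execute_command(HearingAssistDevice(), 0x04)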
In embodiments, a hearing assist device may be configured to convert received RF energy into charge for storage in a battery of the hearing assist device. For instance, as shown in FIG. 7, hearing assist device 700 includes charge circuit 724 for charging battery 726, which is a rechargeable battery (e.g., rechargeable battery 114). In an embodiment, charge circuit 724 may operate according to FIG. 10. FIG. 10 shows a flowchart 1000 of a process for wirelessly charging a battery of a hearing assist device, according to an exemplary embodiment. Flowchart 1000 is described as follows.
In step 1002 of flowchart 1000, a radio frequency signal is received. For example, as shown in FIG. 7, NFC coil 716, antenna 720, and/or other antenna or coil of hearing assist device 700 may receive a radio frequency (RF) signal. The RF signal may be a communication signal that includes data (e.g., modulated on the RF signal), or may be an un-modulated RF signal. Charge circuit 724 may be coupled to one or more of NFC coil 716, antenna 720, or other antenna to receive the RF signal.
In step 1004, a charge current is generated that charges a rechargeable battery of the hearing assist device based on the received radio frequency signal. In an embodiment, charge circuit 724 is configured to generate a charge current 756 that is used to charge battery 726. Charge circuit 724 may be configured in various ways to convert a received RF signal to a charge current. For instance, charge circuit 724 may include an induction coil to take power from an electromagnetic field and convert it to electrical current. Alternatively, charge circuit 724 may include a diode rectifier circuit that rectifies the received RF signal to a DC (direct current) signal, and may include one or more charge pump circuits coupled to the diode rectifier circuit used to create a higher voltage value from the DC signal. Alternatively, charge circuit 724 may be configured in other ways to generate charge current 756 from a received RF signal.
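As a rough illustration of the rectifier/charge-pump arrangement, the Python sketch below estimates the DC output of an idealized N-stage pump; the peak voltage, diode drop, and stage count are assumed example values, not parameters of charge circuit 724.

    def charge_pump_output_v(v_rf_peak, stages, v_diode=0.3):
        """Idealized N-stage rectifier/charge pump: each stage contributes
        roughly the rectified peak voltage less one diode drop."""
        return max(0.0, stages * (v_rf_peak - v_diode))

    v_dc = charge_pump_output_v(v_rf_peak=0.8, stages=4)  # about 2.0 V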
In this manner, hearing assist device 700 may maintain power for operation, with battery 726 being charged periodically by RF fields generated by other devices, rather than needing to physically replace batteries.
In another embodiment, hearing assist device 700 may be configured to generate sound based on received sensor data. For instance, hearing assist device 700 may operate according to FIG. 11. FIG. 11 shows a flowchart 1100 of a process for generating and broadcasting sound based on sensor data, according to an exemplary embodiment. For purposes of illustration, flowchart 1100 is described as follows with reference to FIG. 7.
Flowchart 1100 begins with step 1102. In step 1102, an audio signal is generated based at least on the processed sensor data. For instance, as described above with respect to steps 802 and 804 of flowchart 800 (FIG. 8), a sensor output signal may be processed to generate processed sensor data. The processed sensor data may be stored in memory 734 as sensor data 736, may be held in registers in CPU 732, or may be present in another location. Audio data for one or more sounds (e.g., tones, beeping sounds, voice segments, etc.) may be stored in memory 734 (e.g., as other data 768) that may be selected for play to the user based on particular sensor data (e.g., particular values of sensor data, etc.). CPU 732 or DSP 730 may select the audio data corresponding to particular sensor data from memory 734. Alternatively, CPU 732 may transmit a request for the audio data from another device using a communication mechanism (e.g., NFC transceiver 718, BT transceiver 722, etc.). DSP 730 may receive the audio data from CPU 732, from memory 734, or from another device, and may generate processed digital audio signal 762 based thereon.
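The selection of audio data based on sensor values might look like the following Python sketch; the glucose thresholds and clip names are illustrative assumptions only.

    def select_alert_clip(glucose_mg_dl):
        """Map a glucose reading to the key of a stored audio clip, or None."""
        if glucose_mg_dl < 70:
            return "voice_low_blood_sugar"
        if glucose_mg_dl > 180:
            return "voice_high_blood_sugar"
        return None  # reading in range; no alert selected

    clip = select_alert_clip(65)  # -> "voice_low_blood_sugar"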
In step 1104, sound is generated based on the audio signal, the sound broadcast from a speaker of the hearing assist device into the ear of the user. For instance, as shown in FIG. 7, D/A converter 764 may be present, and may receive processed digital audio signal 762. D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766. Speaker 714 receives processed audio signal 766, and broadcasts sound generated based on processed audio signal 766 into the ear of the user.
In this manner, sounds may be provided to the user by hearing assist device 700 based at least on sensor data, and optionally further based on additional information. The sounds may provide information to the user, and may remind or instruct the user to perform a function/action. The sounds may include one or more of a tone, a beeping sound, or a voice that includes at least one of a verbal instruction to the user, a verbal warning to the user, or a verbal question to the user. For instance, a tone or a beeping sound may be provided to the user as an alert based on particular values of sensor data (e.g., indicating a high glucose/blood sugar value), and/or a voice instruction may be provided to the user as the alert based on the particular values of sensor data (e.g., a voice segment stating "Blood sugar is low—Insulin is required" or "hey, your heart rate is 80 beats per minute, your heart is fine, your pacemaker has got 6 hours of battery left.").
In another embodiment, hearing assist device 700 may be configured to generate filtered environmental sound. For instance, hearing assist device 700 may operate according to FIG. 12. FIG. 12 shows a flowchart 1200 of a process for generating and broadcasting filtered sound from a hearing assist device, according to an exemplary embodiment. For purposes of illustration, flowchart 1200 is described as follows with reference to FIG. 7.
Flowchart 1200 begins with step 1202. In step 1202, an audio signal is generated based on environmental sound received by at least one microphone of the hearing assist device. For instance, as shown in FIG. 7, microphone 706 may generate a received audio signal 740 based on received environmental sound. Received audio signal 740 may optionally be amplified, filtered, and converted to digital form to generate digital audio signal 746, as shown in FIG. 7.
In step 1204, one or more frequencies of the audio signal are selectively favored to generate a modified audio signal. As shown in FIG. 7, DSP 730 may receive digital audio signal 746, and may perform digital signal processing on digital audio signal 746 to generate processed digital audio signal 762. DSP 730 may favor one or more frequencies by amplifying particular frequencies, attenuating particular frequencies, and/or by otherwise filtering digital audio signal 746 in the discrete domain. DSP 730 may perform the signal processing for various reasons, including noise cancellation or hearing loss compensation. For instance, DSP 730 may process digital audio signal 746 to compensate for a personal hearing frequency response of the user, such as compensating for poor hearing of high frequencies, middle range frequencies, or other personal frequency response characteristics of the user.
In step 1206, sound is generated based on the modified audio signal, the sound broadcast from a speaker of the hearing assist device into the ear of the user. For instance, as shown in FIG. 7, D/A converter 764 may be present, and may receive processed digital audio signal 762. D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766. Speaker 714 receives processed audio signal 766, and broadcasts sound generated based on processed audio signal 766 into the ear of the user.
In this manner, environmental noise, voice, and other sounds may be tailored to a particular user's personal hearing frequency response characteristics. Furthermore, particular noises in the environment (e.g., road noise, engine noise, etc.) may be attenuated or filtered from the received environmental sounds so that the user may better hear important or desired sounds. Furthermore, sounds that are desired to be heard (e.g., music, a conversation, a verbal warning, verbal instructions, sirens, sounds of a nearby car accident, etc.) may be amplified so that the user may better hear them.
In another embodiment, hearing assist device 700 may be configured to transmit recorded voice of a user to another device. For instance, hearing assist device 700 may operate according to FIG. 13. FIG. 13 shows a flowchart 1300 of a process for generating an information signal in a hearing assist device based on a voice of a user, and for transmitting the information signal to a second device, according to an exemplary embodiment. For purposes of illustration, flowchart 1300 is described as follows with reference to FIG. 7.
Flowchart 1300 begins with step 1302. In step 1302, an audio signal is generated based on a voice of the user received at a microphone of the hearing assist device. For instance, as shown in FIG. 7, microphone 706 may generate a received audio signal 740 based on received voice of the user. Received audio signal 740 may optionally be amplified, filtered, and converted to digital form to generate digital audio signal 746, as shown in FIG. 7.
The voice of the user may be any statement made by the user, including a question, a statement of fact, a command, or any other verbal sequence. For instance, the user may ask "what is my heart rate". All such statements made by the user can be those intended for capture by one or more hearing assist devices and supporting local and remote systems. Such statements may also include unintentional sounds such as semi-lucid ramblings, moaning, choking, coughing, and/or other sounds. Any one or more of the hearing assist devices and the supporting local device can receive (via microphones) such audio and forward the audio from the hearing assist device(s) as needed for further processing. This processing may include voice and/or sound recognition, comparisons with command words or sequences, (video, audio) prompting for (gesture, tactile or audible) confirmation, carrying out commands, storage for later analysis or playback, and/or forwarding to an appropriate recipient system for further processing, storage, and/or presentation to others.
In step 1304, an information signal is generated based on the audio signal. As shown in FIG. 7, DSP 730 may receive digital audio signal 746. In an embodiment, DSP 730 and/or CPU 732 may generate an information signal from digital audio signal 746 to be transmitted to a second device from hearing assist device 700. DSP 730 and/or CPU 732 may optionally perform voice/speech recognition on digital audio signal 746 to recognize spoken words included therein, and may include the spoken words in the generated information signal.
For instance, in an embodiment, code 738 stored in memory 734 may include a voice recognition program that may be executed by CPU 732 and/or DSP 730. The voice recognition program may use conventional or proprietary voice recognition techniques. Furthermore, such voice recognition techniques may be augmented by sensor data. For instance, as described above, position/motion sensor 518 may include a vibration sensor. The vibration sensor may detect vibrations of the user associated with speaking (e.g., jaw movement of the wearer during talking), and generate corresponding vibration information/data. The vibration information output by the vibration sensor may be received by CPU 732 and/or DSP 730, and may be used to aid in improving speech/voice recognition performed by the voice recognition program. For instance, the vibration information may be used by the voice recognition program to detect breaks between words, to identify the location of spoken syllables, to identify the syllables themselves, and/or to better perform other aspects of voice recognition. Alternatively, the vibration information may be transmitted from hearing assist device 700, along with the information signal, to a second device to perform the voice recognition process at the second device (or other device).
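Detecting breaks between words from vibration data could be as simple as the short-time energy test sketched below in Python; the frame length and energy threshold are illustrative assumptions.

    import numpy as np

    def find_word_breaks(vibration, frame_len=160, threshold=0.01):
        """Return indices of low-energy frames (candidate inter-word pauses)."""
        n_frames = len(vibration) // frame_len
        frames = vibration[: n_frames * frame_len].reshape(n_frames, frame_len)
        energy = (frames ** 2).mean(axis=1)  # mean power per frame
        return np.where(energy < threshold)[0]

    pauses = find_word_breaks(np.random.randn(8000) * 0.005)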
In step 1306, the generated information signal is transmitted to the second device. For instance, as shown in FIG. 7, CPU 732 may provide the information signal (e.g., from CPU registers, from DSP 730, from memory 734, etc.) to a transceiver to be transmitted from hearing assist device 700 (e.g., NFC transceiver 718, BT transceiver 722, or other transceiver).
Another device, such as mobile computing device 902, stationary computing device 904, or server 906, may receive the transmitted voice information, and may analyze the voice (spoken words, moans, slurred words, etc.) therein to determine one or more functions/actions to be performed. As a result, one or more functions/actions may be determined to be performed by hearing assist device 700 or another device.
In another embodiment, hearing assist device 700 may be configured to enable voice to be received and/or generated to be played to the user. For instance, hearing assist device 700 may operate according to FIG. 14. FIG. 14 shows a flowchart 1400 of a process for generating voice to be broadcast to a user, according to an exemplary embodiment. For purposes of illustration, flowchart 1400 is described as follows with reference to FIG. 7.
Flowchart 1400 begins with step 1402. In step 1402, a sensor output signal is received from a medical sensor of the hearing assist device that senses a characteristic of the user. Similarly to step 802 of FIG. 8, sensors 702a-702c each sense/measure information about a health characteristic of the user of hearing assist device 700. For instance, sensor 702a may sense a characteristic of the user (e.g., a heart rate, a blood pressure, a glucose level, a temperature, etc.). Sensor 702a generates sensor output signal 758a, which indicates the measured information about the corresponding health characteristic. Sensor interface 728a, when present, may convert sensor output signal 758a to modified sensor output signal 760a, to be received by processing logic 704.
In step 1404, processed sensor data is generated based on the sensor output signal. Similarly to step 804 of FIG. 8, processing logic 704 receives modified sensor output signal 760a, and may process modified sensor output signal 760a in any manner. For instance, as shown in FIG. 7, CPU 732 may receive modified sensor output signal 760a, and may process the sensor information contained therein to generate processed sensor data. For instance, CPU 732 may manipulate the sensor information (e.g., according to an algorithm of code 738) to convert the sensor information into a presentable form (e.g., scaling the sensor information, adding or subtracting a constant to/from the sensor information, etc.), or may otherwise process the sensor information. Furthermore, CPU 732 may transmit the sensor information of modified sensor output signal 760a to DSP 730 to be digital signal processed.
In step 1406, a voice audio signal generated based at least on the processed sensor data is received. In an embodiment, the processed sensor data generated in step 1404 may be transmitted from hearing assist device 700 to another device (e.g., as shown in FIG. 9), and a voice audio signal may be generated at the other device based on the processed sensor data. In another embodiment, the voice audio signal may be generated by processing logic 704 based on the processed sensor data. The voice audio signal contains voice information (e.g., spoken words) that relate to the processed sensor data. For instance, the voice information may include a verbal alert, verbal instructions, and/or other verbal information to be provided to the user based on the processed sensor data (e.g., based on a value of measured sensor data, etc.). The voice information may be generated by being synthesized, being retrieved from memory 734 (e.g., a library of recorded spoken segments in other data 768), or being generated from a combination thereof. It is noted that the voice audio signal may be generated based on processed sensor data from one or more sensors. DSP 730 may output the voice audio signal as processed digital audio signal 762.
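Assembling a spoken report from such a library of recorded segments might proceed as in the Python sketch below; the segment keys, file names, and the digit-by-digit reading of the value are assumptions made for illustration.

    SEGMENTS = {"rate_is": "your-heart-rate-is.wav",
                "bpm": "beats-per-minute.wav"}
    DIGITS = {str(d): f"{d}.wav" for d in range(10)}

    def build_heart_rate_prompt(bpm):
        """Return the ordered list of recorded clips for a heart-rate report."""
        clips = [SEGMENTS["rate_is"]]
        clips += [DIGITS[ch] for ch in str(bpm)]  # value read digit by digit
        clips.append(SEGMENTS["bpm"])
        return clips

    prompt = build_heart_rate_prompt(98)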
In step 1408, voice is broadcast from the speaker into the ear of the user based on the received voice audio signal. For instance, as shown in FIG. 7, D/A converter 764 may be present, and may receive processed digital audio signal 762. D/A converter 764 may convert processed digital audio signal 762 to analog form to generate processed audio signal 766. Speaker 714 receives processed audio signal 766, and broadcasts voice generated based on processed audio signal 766 into the ear of the user.
In this manner, voice may be provided to the user by hearing assist device 700 based at least on sensor data, and optionally further based on additional information. The voice may provide information to the user, and may remind or instruct the user to perform a function/action. For instance, the voice may include at least one of a verbal instruction to the user ("take an iron supplement"), a verbal warning to the user ("your heart rate is high"), a verbal question to the user ("have you fallen down, and do you need assistance?"), or a verbal answer to the user ("your heart rate is 98 beats per minute").
V. Hearing Assist Device with External Operational Support
In accordance with various embodiments, the performance of one or more functions by a hearing assist device is assisted or improved in some manner by utilizing resources of an external device and/or service to which the hearing assist device may be communicatively connected. Such performance assistance or improvement may be achieved, for example and without limitation, by utilizing power resources, processing resources, storage resources, sensor resources, and/or user interface resources of an external device or service to which the hearing assist device may be communicatively connected.
FIG. 15 is a block diagram of an example system 1500 that enables external operational support to be provided to a hearing assist device in accordance with an embodiment. As shown in FIG. 15, system 1500 includes a first hearing assist device 1501, a second hearing assist device 1503, and a portable electronic device 1505. First and second hearing assist devices 1501 and 1503 may each be implemented in a like manner to any of the hearing assist devices described above in Sections II-IV. However, first and second hearing assist devices 1501 and 1503 are not limited to those implementations. Furthermore, although FIG. 15 shows two hearing assist devices that can be worn by a user, it is to be understood that the external operational support techniques described herein can also be applied to a single hearing assist device worn by a user.
Portable electronic device 1505 is intended to represent an electronic device that may be carried by or is otherwise locally accessible to a wearer of first and second hearing assist devices 1501 and 1503. By way of example and without limitation, portable electronic device 1505 may comprise a smart phone, a tablet computer, a netbook, a laptop computer, a remote control device, a personal media player, a handheld gaming device, or the like. It is noted that certain external operational support features described herein are premised on the ability of a wearer of a hearing assist device to hold portable electronic device 1505 and/or lift portable electronic device 1505 toward his/her ear. For these embodiments, it is to be understood that portable electronic device 1505 has a form factor that permits such actions to be taken. However, for embodiments that comprise other external operational support features that do not require such actions to be taken, it is to be understood that portable electronic device 1505 may have a larger form factor. For example, in accordance with certain embodiments, portable electronic device 1505 may comprise a desktop computer or television.
As further shown in FIG. 15, first hearing assist device 1501 and second hearing assist device 1503 are capable of communicating with each other via a communication link 1521. Communication link 1521 may be established using, for example and without limitation, a wired communication link, a wireless communication link (wherein such wireless communication link may be established using NFC, BLUETOOTH® low energy (BTLE) technology, wireless power transfer (WPT) technology, telecoil, or the like), or skin-based signal transmission. Furthermore, first hearing assist device 1501 is capable of communicating with portable electronic device 1505 via a communication link 1523 and second hearing assist device 1503 is capable of communicating with portable electronic device 1505 via a communication link 1525. Each of communication links 1523 and 1525 may be established using, for example and without limitation, a wireless communication link (wherein such wireless communication link may be established using NFC, BTLE technology, WPT technology, telecoil or the like), or skin-based signal transmission.
As also shown in FIG. 15, portable electronic device 1505 is capable of communicating with various other entities via one or more wired and/or wireless communication pathways 1513. For example, portable electronic device 1505 may access one or more hearing assist device support services 1511 via communication pathway(s) 1513. Such hearing assist device support service(s) 1511 may be executed or otherwise provided by a device such as but not limited to a set top box, a television, a wired or wireless access point, or a server that is accessed via communication pathway(s) 1513. Such device may also comprise a gateway via which such hearing assist device support service(s) 1511 may be accessed. As will be appreciated by persons skilled in the art, such hearing assist device support service(s) 1511 may also comprise cloud-based services accessed via a network. Since portable electronic device 1505 can access such hearing assist device support service(s) 1511 and can also communicate with first and second hearing assist devices 1501 and 1503, portable electronic device 1505 is capable of making hearing assist device support service(s) 1511 available to first and second hearing assist devices 1501 and 1503.
Portable electronic device 1505 can also access one or more support personnel system(s) 1515 via communication pathway(s) 1513. Support personnel system(s) 1515 are intended to generally represent systems that are owned and/or operated by persons having an interest (personal, professional, fiduciary or otherwise) in the health, well-being, or some other state of a wearer of first and second hearing assist devices 1501 and 1503. By way of example only, support personnel system(s) 1515 may include a system owned and/or operated by a doctor's office or medical practice with which a wearer of first and second hearing assist devices 1501 and 1503 is affiliated. As another example, support personnel system(s) 1515 may include systems or devices owned and/or operated by family members, friends, or caretakers of a wearer of first and second hearing assist devices 1501 and 1503. Since portable electronic device 1505 can access such support personnel system(s) 1515 and can also communicate with first and second hearing assist devices 1501 and 1503, portable electronic device 1505 is capable of carrying out communication between first and second hearing assist devices 1501 and 1503 and support personnel system(s) 1515.
Wired and/or wireless communication pathway(s) 1513 may be implemented using any of a wide variety of communication media and associated protocols. For example, communication pathway(s) 1513 may comprise wireless communication pathways implemented via radio frequency (RF) signaling, infrared (IR) signaling, or the like. Such signaling may be carried out using long-range wireless protocols such as WIMAX® (IEEE 802.16) or GSM (Global System for Mobile Communications), medium-range wireless protocols such as WI-FI® (IEEE 802.11), and/or short-range wireless protocols such as BLUETOOTH® or any of a variety of IR-based protocols. Communication pathway(s) 1513 may also comprise wired communication pathways established over twisted pair, Ethernet cable, coaxial cable, optical fiber, or the like, using suitable communication protocols therefor.
Communication links 1523 and 1525 respectively established between first and second hearing assist devices 1501 and 1503 and portable electronic device 1505 enable first and second hearing assist devices 1501 and 1503 to utilize resources of and/or services provided by portable electronic device 1505 to assist in performing certain operations and/or improve the performance of such operations. Furthermore, since portable electronic device 1505 can access hearing assist device support service(s) 1511 and support personnel system(s) 1515, portable electronic device 1505 can also make such system(s) and service(s) available to first and second hearing assist devices 1501 and 1503 such that first and second hearing assist devices 1501 and 1503 can utilize those system(s) and service(s) to assist in the performance of certain operations and/or improve the performance of such operations.
These concepts will now be further explained with respect to FIG. 16, which depicts a system 1600 comprising a hearing assist device 1601 and a cloud/service/phone/portable device 1603 that may be communicatively connected thereto. Hearing assist device 1601 may comprise, for example and without limitation, either of hearing assist device 1501 or 1503 as described above in reference to FIG. 15 or any of the hearing assist devices described above in Sections II-IV. Although only a single hearing assist device 1601 is shown in FIG. 16, it is to be understood that system 1600 may include two hearing assist devices. Device 1603 may comprise, for example and without limitation, portable electronic device 1505 or a device used to implement any of hearing assist device support service(s) 1511 or support personnel system(s) 1515 that are accessible to portable electronic device 1505 as described above in reference to FIG. 15. Thus, device 1603 may be local with respect to the wearer of hearing assist device 1601 or remote with respect to the wearer of hearing assist device 1601.
Hearing assist device 1601 includes a number of processing modules that may be implemented as software or firmware running on one or more general purpose processors and/or digital signal processors (DSPs), as dedicated circuitry, or as a combination thereof. Such processors and/or dedicated circuitry are collectively referred to in FIG. 16 as general purpose (DSP) and dedicated processing circuitry 1613. As shown in FIG. 16, the processing modules include a speech generation module 1623, a speech/noise recognition module 1625, an enhanced audio processing module 1627, a clock/scheduler module 1629, a mode select and reconfiguration module 1631, and a battery management module 1633.
As also shown in FIG. 16, hearing assist device 1601 further includes local storage 1635. Local storage 1635 comprises one or more volatile and/or non-volatile memory devices or structures that are internal to hearing assist device 1601. Such memory devices or structures may be used to store recorded audio information in an audio playback queue 1637 as well as to store information and settings 1639 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services (cloud-based or otherwise) accessed by or on behalf of hearing assist device 1601.
Hearing assist device 1601 further includes sensor components and associated circuitry 1641. Such sensor components and associated circuitry may include but are not limited to one or more microphones, bone conduction sensors, temperature sensors, blood pressure sensors, blood glucose sensors, pulse oximetry sensors, pH sensors, vibration sensors, accelerometers, gyros, magnetos, or the like. Further sensor types that may be included in hearing assist device 1601 and information regarding the structure, function and operation of such sensors is provided above in Sections II-IV.
Hearing assist device 1601 still further includes user interface (UI) components and associated circuitry 1643. Such UI components may include buttons, switches, dials or other mechanical components by which a user may control and configure the operation of hearing assist device 1601. Such UI components may also comprise capacitive sensing components to allow for touch-based or tap-based interaction with hearing assist device 1601. Such UI components may further include a voice-based UI. Such voice-based UI may utilize speech/noise recognition module 1625 to recognize commands uttered by a user of hearing assist device 1601 and/or speech generation module 1623 to provide output in the form of pre-defined or synthesized speech. In an embodiment in which hearing assist device 1601 comprises an integrated part of a pair of glasses, visor or helmet, user interface components and associated circuitry 1643 may also comprise a display integrated with or projected upon a portion of the glasses, visor or helmet for presenting information to a user.
Hearing assist device 1601 also includes communication interfaces and associated circuitry 1645 for carrying out communication over one or more wired, wireless, or skin-based communication pathways. Communication interfaces and associated circuitry 1645 enable hearing assist device 1601 to communicate with device 1603. Communication interfaces and associated circuitry 1645 may also enable hearing assist device 1601 to communicate with a second hearing assist device worn by the same user as well as with other devices.
Generally speaking, cloud/service/phone/portable device 1603 comprises power resources, processing resources, and storage resources that can be used by hearing assist device 1601 to assist in performing certain operations and/or to improve the performance of such operations when a communication pathway has been established between the two devices.
In particular, device 1603 includes a number of assist processing modules that may be implemented as software or firmware running on one or more general purpose processors and/or DSPs, as dedicated circuitry, or as a combination thereof. Such processors and/or dedicated circuitry are collectively referred to in FIG. 16 as general/dedicated processing circuitry (with hearing assist device support) 1653. As shown in FIG. 16, the processing modules include a speech generation assist module 1655, a speech/noise recognition assist module 1657, an enhanced audio processing assist module 1659, a clock/scheduler assist module 1661, a mode select and reconfiguration assist module 1663, and a battery management assist module 1665.
As also shown in FIG. 16, device 1603 further includes storage 1667. Storage 1667 comprises one or more volatile and/or non-volatile memory devices/structures and/or storage systems that are internal to or otherwise accessible to device 1603. Such memory devices/structures and/or storage systems may be used to store recorded audio information in an audio playback queue 1669 as well as to store information and settings 1671 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services (cloud-based or otherwise) accessed by or on behalf of hearing assist device 1601.
Device 1603 also includes communication interfaces and associated circuitry 1677 for carrying out communication over one or more wired, wireless or skin-based communication pathways. Communication interfaces and associated circuitry 1677 enable device 1603 to communicate with hearing assist device 1601. Such communication may be direct (point-to-point between device 1603 and hearing assist device 1601) or indirect (through one or more intervening devices or nodes). Communication interfaces and associated circuitry 1677 may also enable device 1603 to communicate with other devices or access various remote services, including cloud-based services.
In an embodiment in which device 1603 comprises a device that is carried by or is otherwise locally accessible to a wearer of hearing assist device 1601, device 1603 may also comprise supplemental sensor components and associated circuitry 1673 and supplemental user interface components and associated circuitry 1675 that can be used by hearing assist device 1601 to assist in performing certain operations and/or to improve the performance of such operations.
Further explanation and examples of how external operational support may be provided to a hearing assist device will now be provided with continued reference to system 1600 of FIG. 16.
A prerequisite for providing external operational support to hearing assist device 1601 by device 1603 may be the establishment of a communication pathway between device 1603 and hearing assist device 1601. In one embodiment, the establishment of such a communication pathway is achieved by implementing a communication service on hearing assist device 1601 that monitors for the presence of device 1603 and selectively establishes communication therewith in accordance with a predefined protocol. Alternatively, a communication service may be implemented on device 1603 that monitors for the presence of hearing assist device 1601 and selectively establishes communication therewith in accordance with a predefined protocol. Still other methods of establishing a communication pathway between hearing assist device 1601 and device 1603 may be used.
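Such a monitoring service might follow the Python sketch below, which polls for a known peer and connects when it appears; the radio object, its scan/connect primitives, and the peer identifier are all hypothetical stand-ins, not an interface defined herein.

    import time

    class StubRadio:
        """Stand-in for a transceiver interface (assumed methods)."""
        def scan(self, duration_s):
            return ["phone-01"]  # IDs of peers visible during the scan
        def connect(self, peer_id):
            return f"link:{peer_id}"  # perform the predefined handshake

    def communication_service(radio, peer_id, scan_interval_s=5.0):
        """Poll for a supporting device and establish a link when present."""
        while True:
            if peer_id in radio.scan(duration_s=1.0):
                return radio.connect(peer_id)
            time.sleep(scan_interval_s)  # sleep between scans to save power

    link = communication_service(StubRadio(), "phone-01")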
Battery Management.
Hearing assist device 1601 includes battery management module 1633 that monitors a state of a battery internal to hearing assist device 1601. Battery management module 1633 may also be configured to alert a wearer of hearing assist device 1601 when such battery is in a low-power state so that the wearer can recharge the battery. As discussed above, the wearer of hearing assist device 1601 can cause such recharging to occur by bringing a portable electronic device within a certain distance of hearing assist device 1601 such that power may be transferred via an NFC link, WPT link, or other suitable link for transferring power between such devices. In an embodiment in which device 1603 comprises such a portable electronic device, hearing assist device 1601 may be said to be utilizing the power resources of device 1603 to assist in the performance of its operations.
As also noted above, when a communication pathway has been established between hearing assist device 1601 and device 1603, hearing assist device 1601 can also utilize other resources of device 1603 to assist in performing certain operations and/or to improve the performance of such operations. Whether and when hearing assist device 1601 so utilizes the resources of device 1603 may vary depending upon the designs of such devices and/or any user configuration of such devices.
For example, hearing assist device 1601 may be programmed to only utilize certain resources of device 1603 when the battery power available to hearing assist device 1601 has dropped below a certain level. As another example, hearing assist device 1601 may be programmed to only utilize certain resources of device 1603 when it is determined that an estimated amount of power that will be consumed in maintaining a particular communication pathway between hearing assist device 1601 and device 1603 will be less than an estimated amount of power that will be saved by offloading functionality to and/or utilizing the resources of device 1603. In accordance with such an embodiment, an assistance feature of device 1603 may be provided when a very low power communication pathway can be established or exists between hearing assist device 1601 and device 1603, but that same assistance feature of device 1603 may be disabled if the only communication pathway that can be established or exists between hearing assist device 1601 and device 1603 is one that consumes a relatively greater amount of power.
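A possible form of this decision rule is sketched below in Python; the energy figures, the battery floor, and the cost model are illustrative assumptions rather than parameters of any embodiment.

    def should_offload(task_energy_mj, link_mw, task_duration_s,
                       battery_pct, battery_floor_pct=20.0):
        """Offload only if maintaining the link costs less energy than the
        task would consume locally, or if the battery is critically low."""
        if battery_pct < battery_floor_pct:
            return True
        link_energy_mj = link_mw * task_duration_s  # mW x s = mJ
        return link_energy_mj < task_energy_mj

    offload = should_offload(50.0, 2.0, 10.0, battery_pct=60.0)  # True: 20 < 50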
Still other decision algorithms can be used to determine whether and when hearing assist device 1601 will utilize resources of device 1603. Such algorithms may be applied by battery management module 1633 of hearing assist device 1601 and/or by battery management assist module 1665 of device 1603 prior to activating assistance features of device 1603. Furthermore, a user interface provided by hearing assist device 1601 and/or device 1603 may enable a user to select which features of hearing assist device 1601 should be able to utilize external operational support and/or under what conditions such external operational support should be provided. The settings established by the user may be stored as part of information and settings 1639 in local storage 1635 of hearing assist device 1601 and/or as part of information and settings 1671 in storage 1667 of device 1603.
In accordance with certain embodiments, hearing assist device 1601 can also utilize resources of a second hearing assist device to perform certain operations. For example, hearing assist device 1601 may communicate with a second hearing assist device worn by the same user to coordinate distribution or shared execution of particular operations. Such communication may be carried out, for example, via a point-to-point link between the two hearing assist devices or via links between the two hearing assist devices and an intermediate device, such as a portable electronic device being carried by a user. The determination of whether a particular operation should be performed by hearing assist device 1601 versus the second hearing assist device may be made by battery management module 1633, a battery management module of the second hearing assist device, or via coordination between both battery management modules.
For example, if hearing assist device 1601 has more battery power available than the second hearing assist device, hearing assist device 1601 may be selected to perform a particular operation, such as taking a blood pressure reading or the like. Such battery imbalance may result from, for example, one hearing assist device being used at a higher volume than the other over an extended period of time. Via coordination between the two hearing assist devices, a more balanced discharging of the batteries of both devices can be achieved. Furthermore, in accordance with certain embodiments, certain sensors may be present on hearing assist device 1601 that are not present on the second hearing assist device and certain sensors may be present on the second hearing assist device that are not present on hearing assist device 1601, such that a distribution of functionality between the two hearing assist devices is achieved by design.
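The coordination itself can be very small, as in the Python sketch below, which simply assigns the next shared task to whichever device reports more remaining battery; the percentage values are examples.

    def pick_device_for_task(left_battery_pct, right_battery_pct):
        """Choose which hearing assist device runs the next shared task,
        balancing discharge across the pair."""
        return "left" if left_battery_pct >= right_battery_pct else "right"

    worker = pick_device_for_task(62.0, 48.0)  # -> "left"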
Speech Generation.
Hearing assist device 1601 comprises a speech generation module 1623 that enables hearing assist device 1601 to generate and output verbal audio information (spoken words or the like) to a wearer thereof via a speaker of hearing assist device 1601. Such verbal audio information may be used to implement a voice UI, to provide speech-based alerts, messages and reminders as part of a clock/scheduler feature implemented by clock/scheduler module 1629, or to provide emergency alerts or messages to a wearer of hearing assist device 1601 based on a detected medical condition of the wearer, or the like. The speech generated by speech generation module 1623 may be pre-recorded and/or dynamically synthesized, depending upon the implementation.
When a communication pathway has been established between hearing assist device 1601 and device 1603, speech generation assist module 1655 of device 1603 may operate to perform all or part of the speech generation function that would otherwise be performed by speech generation module 1623 of hearing assist device 1601. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved. Any speech generated by speech generation assist module 1655 may be communicated back to hearing assist device 1601 for playback via at least one speaker of hearing assist device 1601. Any of a wide variety of well-known speech codecs may be used to carry out such transmission of speech information in an efficient manner. Additionally or alternatively, any speech generated by speech generation assist module 1655 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601.
Furthermore, speech generation assist module 1655 may provide a more elaborate set of features than those provided by speech generation module 1623, as device 1603 may have access to greater power, processing and storage resources than hearing assist device 1601 to support such additional features. For example, speech generation assist module 1655 may provide a more extensive vocabulary of pre-recorded words, terms and sentences or may provide a more powerful speech synthesis engine.
Speech and Noise Recognition.
Hearing assist device 1601 includes a speech/noise recognition module 1625 that is operable to apply speech and/or noise recognition algorithms to audio input received via one or more microphones of hearing assist device 1601. Such algorithms can enable speech/noise recognition module 1625 to determine when a wearer of hearing assist device 1601 is speaking and further to recognize words that are spoken by such wearer, while rejecting non-speech utterances and noise. Such algorithms may be used, for example, to enable hearing assist device 1601 to provide a voice-based UI by which a wearer of hearing assist device 1601 can exercise voice-based control over the device.
When a communication pathway has been established between hearing assist device 1601 and device 1603, speech/noise recognition assist module 1657 of device 1603 may operate to perform all or part of the speech/noise recognition functions that would otherwise be performed by speech/noise recognition module 1625 of hearing assist device 1601. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved.
Furthermore, speech/noise recognition assist module 1657 may provide a more elaborate set of features than those provided by speech/noise recognition module 1625, as device 1603 may have access to greater power, processing and storage resources than hearing assist device 1601 to support such additional features. For example, speech/noise recognition assist module 1657 may include a training program that a wearer of hearing assist device 1601 can use to train the speech recognition logic to better recognize and interpret his/her own voice. As another example, speech/noise recognition assist module 1657 may include a process by which a wearer of hearing assist device 1601 can add new words to the dictionary of words that are recognized by the speech recognition logic. Such additional features may be included in an application that can be installed by the wearer on device 1603. Such additional features may also be supported by a user interface that forms part of supplemental user interface components and associated circuitry 1675. Of course, such features may be included in speech/noise recognition module 1625 in accordance with certain embodiments.
Enhanced Audio Processing.
Hearing assist device 1601 includes an enhanced audio processing module 1627. Enhanced audio processing module 1627 may be configured to process an input audio signal received by hearing assist device 1601 to achieve a desired frequency response prior to playing back such input audio signal to a wearer of hearing assist device 1601. For example, enhanced audio processing module 1627 may selectively amplify certain frequency components of an input audio signal prior to playing back such input audio signal to the wearer. The frequency response to be achieved may be specified by or derived from a prescription for the wearer that is provided to hearing assist device 1601 by an external device or system. With reference to the example components of FIG. 15, such external device or system may include any of portable electronic device 1505, hearing assist device support service(s) 1511, or support personnel system(s) 1515. In certain embodiments, such prescription may be formatted in a standardized manner in order to facilitate use thereof by any of a variety of hearing assistance devices and audio reproduction systems.
In accordance with a further embodiment in which hearing assist device 1601 is worn in conjunction with a second hearing assist device, enhanced audio processing module 1627 may modify a first input audio signal received by hearing assist device 1601 prior to playback of the first input audio signal to one ear of the wearer, while an enhanced audio processing module of the second hearing assist device modifies a second input audio signal received by the second hearing assist device prior to playback of the second input audio signal to the other ear of the wearer. Such modification of the first and second input audio signals can be used to achieve enhanced spatial signaling for the wearer. That is to say, the enhanced audio signals provided to both ears of the wearer will enable the wearer to better determine the spatial origin of sounds. Such enhancement is desirable for persons who have a poor ability to detect the spatial origin of sound, and therefore a poor ability to respond to spatial cues. To determine the appropriate modifications for the left and right ear of the wearer, an appropriate user-specific "head transfer function" can be determined through testing of a user. The results of such testing may then be used to calibrate the spatial audio enhancement function applied at each ear.
FIG. 17 is a block diagram of an enhanced audio processing module 1700 that may be utilized by hearing assist device 1601 to provide such enhanced spatial signaling. Enhanced audio processing module 1700 is configured to process an audio signal produced by a microphone of a left ear hearing assist device (denoted MIC L) and an audio signal produced by a microphone of a right ear hearing assist device (denoted MIC R) to produce an audio signal for playback to the left ear of a user (denoted LEFT).
In particular, enhanced audio processing module 1700 includes an amplifier 1702 that amplifies the MIC L signal. Such signal may also be converted from analog to digital form by an analog-to-digital (A/D) converter (not shown in FIG. 17). The output of amplifier 1702 is passed to a logic block 1704 that applies a head transfer function (HTF) thereto. The output of logic block 1704 is passed to a multiplier 1706 that applies a scaling function thereto. The output of multiplier 1706 is passed to a mixer 1720. Enhanced audio processing module 1700 also includes an amplifier 1712 that amplifies the MIC R signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 17). The output of amplifier 1712 is passed to a logic block 1714 that applies an HTF thereto. The output of logic block 1714 is passed to a multiplier 1716 that applies a scaling function thereto. The output of multiplier 1716 is passed to mixer 1720. Mixer 1720 combines the output of multiplier 1706 and the output of multiplier 1716. The audio signal output by mixer 1720 is passed to an amplifier 1722 that amplifies it to produce the LEFT audio signal. Such signal may also be converted from digital to analog form by a digital-to-analog (D/A) converter (not shown in FIG. 17) prior to playback.
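The following Python sketch mirrors the FIG. 17 signal flow for the left ear. It is illustrative only: the FIR coefficients standing in for the head transfer functions, and the gain and scale values, are hypothetical.

```python
import numpy as np

# Hypothetical FIR approximations of the user-specific head transfer
# functions determined through testing (see above).
HTF_L = np.array([1.0, 0.3, 0.1])    # logic block 1704
HTF_R = np.array([0.5, 0.2, 0.05])   # logic block 1714

def left_ear_output(mic_l, mic_r, gain_l=1.0, gain_r=1.0,
                    scale_l=0.8, scale_r=0.2, out_gain=1.0):
    """Amplify, apply HTF, scale, mix, and amplify, per FIG. 17."""
    path_l = np.convolve(gain_l * mic_l, HTF_L, mode="same") * scale_l
    path_r = np.convolve(gain_r * mic_r, HTF_R, mode="same") * scale_r
    return out_gain * (path_l + path_r)  # mixer 1720 and amplifier 1722
```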
It is noted that to operate in such a manner, enhanced audio processing module 1700 must have access to both the MIC L signal obtained by the left ear hearing assist device (which is assumed to be hearing assist device 1601 in this example) and the MIC R signal obtained by the right ear hearing assist device. Thus, the left ear hearing assist device must be capable of communicating with the right ear hearing assist device in order to obtain the MIC R signal therefrom. Likewise, the right ear hearing assist device must be capable of communicating with the left ear hearing assist device in order to obtain the MIC L signal therefrom. Such communication may be carried out, for example, via a point-to-point link between the two hearing assist devices or via links between the two hearing assist devices and an intermediate device, such as a portable electronic device being carried by a user.
Thus, in accordance with the foregoing, enhanced audio processing module 1627 may modify an input audio signal received by hearing assist device 1601 to achieve a desired frequency response and/or spatial signaling prior to playback of the input audio signal. Both the desired frequency response and spatial signaling may be specified by or derived from a prescription associated with a wearer of hearing assist device 1601.
When a communication pathway has been established between hearing assist device 1601 and device 1603, enhanced audio processing assist module 1659 of device 1603 may operate to perform all or part of the enhanced audio processing functions that would otherwise be performed by enhanced audio processing module 1627 of hearing assist device 1601, provided that there is a sufficiently fast communication pathway between hearing assist device 1601 and device 1603. A sufficiently fast communication pathway is required so as not to introduce an inordinate amount of lag between the receipt and playback of audio signals by hearing assist device 1601. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved.
Thus, for example, audio content collected by one or more microphones of hearing assist device 1601 may be transmitted to device 1603. Enhanced audio processing assist module 1659 of device 1603 may apply enhanced audio processing to such audio content, thereby producing enhanced audio content. The application of enhanced audio processing may comprise, but is not limited to, modifying the audio content to achieve a desired frequency response and/or spatial signaling as previously described. Device 1603 may then transmit the enhanced audio content back to hearing assist device 1601, where it may be played back to a wearer thereof. The foregoing transmission of audio content between the devices may utilize well-known audio and speech compression techniques to achieve improved transmission efficiency. Additionally or alternatively, any enhanced audio content generated by enhanced audio processing assist module 1659 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601.
Clock/Scheduler.
A clock/scheduler module 1629 of hearing assist device 1601 is configured to provide a wearer thereof with alerts or messages concerning the date and/or time, upcoming appointments or events, or other types of information typically provided by, recorded in, or otherwise associated with a personal calendar and scheduling service or tool. Such alerts and messages may be conveyed on demand, such as in response to the wearer uttering the words "time" or "date" or performing some other action that is recognizable to a user interface associated with clock/scheduler module 1629. Such alerts and messages may also be conveyed automatically, such as in response to clock/scheduler module 1629 determining that an appointment or event is currently occurring or is scheduled to occur within a predetermined time frame. The alerts or messages may comprise certain sounds or words that are played back via one or more speakers of hearing assist device 1601. Where the alerts or messages comprise speech, such speech may be generated by speech generation module 1623 and/or speech generation assist module 1655.
As shown in FIG. 16, device 1603 includes a clock/scheduler assist module 1661. In an embodiment, clock/scheduler assist module 1661 comprises a personal calendar and scheduling service or tool that a user may interact with via a personal electronic device, such as personal electronic device 1505 of FIG. 15. For example and without limitation, the personal calendar and scheduling service or tool may comprise MICROSOFT OUTLOOK®, GOOGLE CALENDAR™, or the like. When a communication pathway has been established between hearing assist device 1601 and device 1603, information concerning the current date/time, scheduled appointments and events, and the like, may be transferred from clock/scheduler assist module 1661 to clock/scheduler module 1629. Clock/scheduler module 1629 may then store such information locally where it can be used for alert and message generation as previously described.
Clock/scheduler module 1629 within hearing assist device 1601 may be configured to store only a subset (for example, one week's worth) of scheduled appointments and events maintained by clock/scheduler assist module 1661 to conserve local storage space. Clock/scheduler module 1629 may further be configured to periodically synchronize its record of appointments and events with that maintained by clock/scheduler assist module 1661 of device 1603 when a communication pathway has been established between hearing assist device 1601 and device 1603.
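A minimal Python sketch of such subset synchronization is shown below; the event representation and the one-week window are assumptions made for illustration.

```python
from datetime import datetime, timedelta

def sync_local_events(all_events, now=None, window_days=7):
    """Keep only upcoming events within the window to conserve local storage."""
    now = now or datetime.now()
    horizon = now + timedelta(days=window_days)
    return [e for e in all_events if now <= e["start"] <= horizon]

# Usage: events are dicts with at least a "start" datetime.
events = [{"title": "Dentist", "start": datetime(2026, 1, 5, 9, 0)}]
local_copy = sync_local_events(events, now=datetime(2026, 1, 1))
```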
When a communication pathway has been established between hearing assist device 1601 and device 1603, clock/scheduler assist module 1661 may also be utilized to perform all or a portion of the time/date reporting and alert/message generation functions that would normally be performed by clock/scheduler module 1629. Such operation by device 1603 can advantageously cause the battery power of hearing assist device 1601 to be conserved. Any alerts or messages generated by clock/scheduler assist module 1661 may be communicated back to hearing assist device 1601 for playback via at least one speaker of hearing assist device 1601. Any of a wide variety of well-known speech or audio codecs may be used to carry out such transmission of alerts and messages in an efficient manner. Additionally or alternatively, any alerts or messages generated by clock/scheduler assist module 1661 can be played back via one or more speakers of device 1603 if device 1603 is local with respect to the wearer of hearing assist device 1601.
Mode Select and Reconfiguration.
Mode select and reconfiguration module 1631 comprises a module that enables selection and reconfiguration of various operating modes of hearing assist device 1601. As will be made evident by the discussion provided below, hearing assist device 1601 may operate in a wide variety of modes, wherein each mode may specify certain operating parameters such as: (1) from which microphones audio input is to be obtained (for example, audio input may be captured by one or more microphones of hearing assist device 1601 and/or by one or more microphones of device 1603); (2) where audio input is processed (for example, audio input may be processed by hearing assist device 1601 and/or by device 1603); (3) how audio input is processed (for example, certain audio processing features such as noise suppression, personalized frequency response processing, selective audio boosting, customized equalization, or the like may be utilized); and (4) where audio output is delivered (for example, audio output may be played back by one or more speakers of hearing assist device 1601 and/or by one or more speakers of device 1603). A possible representation of such an operating mode is sketched below.
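The following Python sketch suggests one possible in-memory representation of such a mode; the field names and example values are hypothetical and not part of this specification.

```python
from dataclasses import dataclass

@dataclass
class OperatingMode:
    name: str
    input_mics: list           # (1) which microphones capture audio input
    processing_location: str   # (2) where audio input is processed
    processing_features: list  # (3) how audio input is processed
    output_speakers: list      # (4) where audio output is delivered

television_mode = OperatingMode(
    name="television",
    input_mics=["hearing_assist_device"],
    processing_location="external_device",
    processing_features=["noise_suppression", "selective_audio_boosting"],
    output_speakers=["hearing_assist_device"],
)
```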
The selection and reconfiguration of a particular mode of operation may be made by a user via interaction with a user interface of hearing assist device 1601. Furthermore, device 1603 includes a mode select and reconfiguration assist module 1663 that enables a user to select and reconfigure a particular mode of operation through interaction with a user interface of device 1603. Any mode selection or reconfiguration information input to device 1603 may be passed to hearing assist device 1601 when a communication pathway between the two devices is established. As will be discussed below, in certain embodiments, device 1603 may be capable of providing a more elaborate, intuitive and user-friendly user interface by which a user can select and reconfigure operational modes of hearing assist device 1601.
Mode select and reconfiguration module 1631 and/or mode select and reconfiguration assist module 1663 may each be further configured to enable a user to define contexts and circumstances in which a particular mode of operation of hearing assist device 1601 should be activated or deactivated.
Local Storage: Audio Playback Queue.
Local storage 1635 of hearing assist device 1601 includes an audio playback queue 1637. Audio playback queue 1637 is configured to store a limited amount of audio content that has been received by hearing assist device 1601 so that it can be selectively played back by a wearer thereof. This feature enables the wearer to selectively play back certain audio content (such as words spoken by another or the like). For example, the last 5 seconds of audio may be played back. Such playback may be carried out at a higher volume depending upon the configuration. Such playback may be deemed desirable, for example, if the wearer did not fully comprehend something that was just said to him/her.
Audio playback queue 1637 may comprise a first-in-first-out (FIFO) queue such that only the last few seconds or minutes of audio received by hearing assist device 1601 will be stored therein at any time. The audio signals stored in audio playback queue 1637 may comprise processed audio signals (such as audio signals that have already been processed by enhanced audio processing module 1627) or unprocessed audio signals. In the latter case, the audio signals stored in audio playback queue 1637 may be processed by enhanced audio processing module 1627 before being played back to a wearer of hearing assist device 1601. In an embodiment in which a user is wearing two hearing assist devices, a left ear queue and a right ear queue may be maintained.
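A minimal Python sketch of such a FIFO queue follows; the sample rate, queue depth, and replay volume are illustrative assumptions.

```python
from collections import deque

import numpy as np

SAMPLE_RATE = 16000      # Hz, assumed
QUEUE_SECONDS = 5        # e.g., "the last 5 seconds of audio"

class AudioPlaybackQueue:
    """FIFO queue holding only the most recent few seconds of audio."""

    def __init__(self):
        self._samples = deque(maxlen=SAMPLE_RATE * QUEUE_SECONDS)

    def push(self, frame):
        # Oldest samples fall off the front once the queue is full.
        self._samples.extend(np.asarray(frame).tolist())

    def replay(self, volume=1.5):
        # Retrieve the stored copy, optionally at a higher volume.
        return volume * np.array(self._samples)
```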
When a communication pathway has been established between hearing assist device 1601 and device 1603, audio playback queue 1669 of device 1603 may also operate to perform all or part of the audio storage operation that would otherwise be performed by audio playback queue 1637 of hearing assist device 1601. Thus, audio playback queue 1669 may also support the aforementioned audio playback functionality by storing a limited amount of audio content received by hearing assist device 1601 and transmitted to device 1603. By so doing, power and storage resources of hearing assist device 1601 may be conserved. Furthermore, since device 1603 may have greater storage resources than hearing assist device 1601, audio playback queue 1669 provided by device 1603 may be capable of storing more and/or higher quality audio content than can be stored by audio playback queue 1637.
In an alternate embodiment in which device 1603 is carried by or otherwise locally accessible to a wearer of hearing assist device 1601, device 1603 may independently record ambient audio via one or more microphones thereof and store such audio in audio playback queue 1669 for later playback to the wearer. Such playback may occur via one or more speakers of hearing assist device 1601 or, alternatively, via one or more speakers of device 1603. Playback by device 1603 may be opted for, for example, in a case where hearing assist device 1601 is in a low power state, or is missing or fully discharged.
Various user interface techniques may be used to initiate playback of recorded audio in accordance with different embodiments. For example, in an embodiment, pressing a button on or tapping hearing assist device 1601 may initiate playback of a limited amount of audio. In an embodiment in which device 1603 is carried by or is otherwise locally accessible to the wearer of hearing assist device 1601, playback may be initiated by interacting with a user interface of device 1603, such as by pressing a button or tapping an icon on a touchscreen of device 1603. Furthermore, uttering certain words or sounds, such as "repeat" or "playback," may trigger playback. This feature can be implemented using the speech recognition functionality of hearing assist device 1601 or device 1603.
In certain embodiments, recording of audio may be carried out over extended periods of time (for example, minutes, tens of minutes, or hours). In accordance with such embodiments, audio playback queue 1669 may be relied upon to store the recorded audio content, as device 1603 may have access to greater storage resources than hearing assist device 1601. Audio compression may be used in any of the aforementioned implementations to reduce consumption of storage.
It is noted that audio may be recorded for purposes other than playing back recently received audio. For example, recording may be used to capture the content of meetings, concerts, or other events that a wearer of hearing assist device 1601 attends so that such audio can be replayed at a later time or shared with others. Recording may also be used for health reasons. For example, a wearer's breathing noises may be recorded while the wearer is sleeping and later analyzed to determine whether or not the wearer suffers from sleep apnea. However, these are examples only, and other uses may exist for such recording functionality.
To help further illustrate the audio playback functionality, FIG. 18 depicts a flowchart 1800 of a method for providing audio playback support to a hearing assist device, such as hearing assist device 1601. As shown in FIG. 18, the method of flowchart 1800 begins at step 1802, in which an audio signal obtained via at least one microphone of the hearing assist device is received. At step 1804, a copy of the received audio signal is stored in an audio playback queue. At step 1806, the copy of the received audio signal is retrieved from the audio playback queue for playback to a wearer of the hearing assist device. In accordance with one embodiment, each of steps 1802, 1804 and 1806 is performed by a hearing assist device, such as hearing assist device 1601. In accordance with an alternate embodiment, each of steps 1802, 1804 and 1806 is performed by a device or service that is external to the hearing assist device and communicatively connected thereto via a communication pathway, such as device 1603 or a service implemented by device 1603. The method of flowchart 1800 may further include playing back the copy of the received audio signal to the wearer of the hearing assist device via at least one speaker of the hearing assist device or via at least one speaker of a portable electronic device that is carried by or otherwise accessible to the wearer of the hearing assist device.
Local Storage: Information and Settings.
Local storage 1635 also stores information and settings 1639 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services accessed by or on behalf of hearing assist device 1601. Such information and settings may include, for example, owner information (which may be used, for example, to recognize and/or authenticate an owner of hearing assist device 1601), security information (including but not limited to passwords, passcodes, encryption keys or the like) used to facilitate private and secure communication with external devices (such as device 1603), and account information useful for signing in to various services available on certain external computer systems. Such information and settings may also include personalized selections and controls relating to user-configurable aspects of the operation of hearing assist device 1601 and/or to user-configurable aspects of the operation of any device with which hearing assist device 1601 may be paired, or any services (cloud-based or otherwise) that may be accessed by or on behalf of hearing assist device 1601.
As shown in FIG. 16, storage 1667 of device 1603 also includes information and settings 1671 associated with hearing assist device 1601, a user thereof, a device paired thereto, and services accessed by or on behalf of hearing assist device 1601. Information and settings 1671 may comprise a backup copy of information and settings 1639 stored on hearing assist device 1601. Such a backup copy may be updated periodically when hearing assist device 1601 and device 1603 are communicatively linked. Such a backup copy may be maintained on device 1603 in order to ensure that important data is not lost or otherwise rendered inaccessible if hearing assist device 1601 is lost or runs out of power. In a further embodiment, information and settings 1639 stored on hearing assist device 1601 may be temporarily or permanently moved to device 1603 to free up storage space on hearing assist device 1601, in which case information and settings 1671 may comprise the only copy of such data. In a still further embodiment, information and settings 1671 stored on device 1603 may comprise a superset of information and settings 1639 stored on hearing assist device 1601. In accordance with such an embodiment, hearing assist device 1601 may selectively retrieve necessary information and settings from device 1603 on an as-needed basis and cache only a subset of such data in local storage 1635.
Sensor Components and Associated Circuitry.
As noted above, sensor components and associated circuitry 1641 of hearing assist device 1601 may include any number of sensors including but not limited to one or more microphones, bone conduction sensors, temperature sensors, blood pressure sensors, blood glucose sensors, pulse oximetry sensors, pH sensors, vibration sensors, accelerometers, gyros, magnetos, or the like. In an embodiment in which device 1603 comprises a portable electronic device that is carried by or otherwise locally accessible to a wearer of hearing assist device 1601 (such as portable electronic device 1505), the sensor components and associated circuitry of device 1603 may also include all or some subset of the foregoing sensors. For example, in an embodiment, device 1603 may comprise a smart phone that includes one or more microphones, accelerometers, gyros, or magnetos.
In accordance with such an embodiment, when a communication pathway has been established between hearing assist device 1601 and device 1603, one or more of the sensors included in device 1603 may be used to perform all or a portion of the functions performed by corresponding sensor(s) in hearing assist device 1601. By utilizing such sensor(s) of device 1603, battery power of hearing assist device 1601 may be conserved.
Furthermore, data provided by the sensors included within device 1603 may be used to augment or verify information provided by the sensors within hearing assist device 1601. For example, information provided by any accelerometers, gyros or magnetos included within device 1603 may be used to provide enhanced information regarding a current body position (for example, standing up, leaning over or lying down) and/or orientation of the wearer of hearing assist device 1601. Device 1603 may also include a GPS device that can be utilized to provide enhanced location information regarding the wearer of hearing assist device 1601. Furthermore, device 1603 may include its own set of health monitoring sensors that can produce data that can be combined with data produced by health monitoring sensors of hearing assist device 1601 to provide a more accurate or complete picture of the state of health of the wearer of hearing assist device 1601.
User Interface Components and Associated Circuitry.
Depending upon the implementation, hearing assist device 1601 may have a very simple user interface or a user interface that is more elaborate. For example, in an embodiment in which hearing assist device 1601 comprises an ear bud, the user interface thereof may comprise very simple mechanical elements such as switches, buttons or dials. This may be due to the very limited surface area available for supporting such an interface. Even with a small form factor device, however, a voice-based user interface or a simple touch-based or tap-based user interface based on the use of capacitive sensing is possible. Also, head motion sensing, local or remote voice activity detection (VAD), or audio monitoring may be used to place a hearing assist device into a fully active state. In contrast, in an embodiment in which hearing assist device 1601 comprises an integrated part of a pair of glasses, a visor, or a helmet, a more elaborate user interface comprising one or more displays and other features may be possible.
In an embodiment in which device 1603 comprises a portable electronic device that is carried by or otherwise locally accessible to a wearer of hearing assist device 1601 (such as portable electronic device 1505), supplemental user interface components and associated circuitry 1675 of device 1603 may provide a means by which a user can interact with hearing assist device 1601, thereby extending the user interface of that device. For example, device 1603 may comprise a phone or tablet computer having a touch screen display that can be used to interact with and manage the features of hearing assist device 1601. In accordance with such an embodiment, an application may be downloaded to or otherwise installed on device 1603 that enables a user thereof to interact with and manage the features of hearing assist device 1601 by interacting with a touch screen display or other user interface element of device 1603. This can enable a more elaborate, intuitive and user-friendly user interface to be designed for hearing assist device 1601. Such user interface may be made accessible to a user only when a communication pathway is established between device 1603 and hearing assist device 1601 so that changes to the configuration of hearing assist device 1601 can be applied to that device in real time. Alternatively, such user interface may be made accessible to a user even when there is no communication pathway established between device 1603 and hearing assist device 1601. In this case, any changes made to the configuration of hearing assist device 1601 via the user interface provided by device 1603 may be stored on device 1603 and then later transmitted to hearing assist device 1601 when a suitable communication pathway becomes available.
VI. Hearing Assist Device with External Audio Quality Support
In accordance with the embodiments described above in reference to FIGS. 15 and 16, the quality of audio content received by a hearing assist device may be improved by utilizing an external device or service to process such audio content when such external device or service is communicatively connected to the hearing assist device. For example, as discussed above in reference to FIG. 16, enhanced audio processing assist module 1659 of device 1603 may process audio content received from hearing assist device 1601 to achieve a desired frequency response and/or spatial signaling and then return the processed audio content to hearing assist device 1601 for playback thereby. Furthermore, any other audio processing technique that may have the effect of improving audio quality may be applied by such external device or service, including but not limited to any of a variety of noise suppression or speech intelligibility enhancement techniques, whether presently known or hereinafter developed. Whether or not such connected external device or service is utilized to perform such enhanced processing may depend on a variety of factors, including a current state of a battery of the hearing assist device, a current selected mode of operation of the hearing assist device, or the like.
In addition to processing audio content received from a hearing assist device, an external device (such as portable electronic device 1505) may forward audio content to another device to which it is communicatively connected (for example, any device used to implement hearing assist device support service(s) 1511 or support personnel system(s) 1515) so that such audio content may be processed by such other device.
In a further embodiment, the audio that is remotely processed and returned to the hearing assist device is audio that is captured by one or more microphones of an external device rather than by the microphone(s) of the hearing assist device itself. This enables the hearing assist device to avoid having to capture, package and transmit audio, thereby conserving battery power and other resources. For example, with continued reference to system 1600 of FIG. 16, in an embodiment in which device 1603 comprises a portable electronic device carried by or otherwise locally accessible to a wearer of hearing assist device 1601, one or more microphones of device 1603 may be used to capture audio content from an environment in which the wearer is located. In this case, any enhanced audio processing may be performed by device 1603 or by a device or service accessible thereto. The processed audio content may then be delivered by device 1603 to hearing assist device 1601 for playback thereby. Additionally or alternatively, such processed audio content may be played back via one or more speakers of device 1603 itself. The foregoing approach to audio processing may be deemed desirable, for example, if hearing assist device 1601 is in a very low power or even non-functioning state. In certain embodiments in which device 1603 comprises a device having more, larger and/or more sensitive microphones than those available on hearing assist device 1601, the foregoing approach to audio enhancement may actually produce higher quality audio than would be produced using only the microphone(s) of hearing assist device 1601.
The foregoing example assumes that audio content is processed for the purpose of enhancing the quality thereof. However, such audio content may also be processed for speech recognition purposes. In the case of speech recognition, the audio content may comprise one or more voice commands that are intended to initiate or provide input to a process executing outside of hearing assist device 1601. In such a case, like principles apply in that the audio content may be captured by microphone(s) of device 1603 and processed by device 1603 or by a device or service accessible thereto. However, in this case, what is returned to the wearer may comprise something other than a processed version of the original audio content captured by device 1603. For example, if the voice commands were intended to initiate an Internet search, then what is returned to the wearer may comprise the results of such a search. The search results may be presented on a display of device 1603, for example. Alternatively, if hearing assist device 1601 comprises an integrated part of a pair of glasses, visor or helmet having a display, then the search results may be presented on such display. Still further, such search results could be played back via one or more speakers of device 1603 or hearing assist device 1601 using text-to-speech conversion.
In a further embodiment, a wearer of hearing assist device 1601 may initiate operation in a mode in which audio content is captured by one or more microphone(s) of device 1603 and processed by device 1603 (or by a device or service accessible to device 1603) to achieve desired audio effects, such as custom equalization, emphasized surround sound effects, or the like. For example, in the case of surround sound, sensors included in hearing assist device 1601 and/or device 1603 may be used to determine a position of the wearer's head relative to one or more audio sources and then to modify audio content to achieve an appropriate surround sound effect given the position of the wearer's head and the location of the audio source(s). The processed audio may then be delivered to the wearer via one or more speakers of hearing assist device 1601, a second hearing assist device, and/or device 1603. To support surround sound implementations, each hearing assist device may include multiple speakers (such as piezoelectric speakers) to deliver a surround sound effect.
In an embodiment, the desired audio effects described above may be defined by a user and stored as part of a profile associated with the user and/or with a particular operational mode of a hearing assist device, wherein the operational mode may be further associated with certain contexts or conditions in which the mode should be utilized. Such profile may be formatted in a standardized manner such that it can be used by a variety of hearing assist devices and audio reproduction systems.
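A portable profile of this kind might be serialized as plain JSON, as in the following Python sketch; the schema and values shown are assumptions for illustration, not a format defined by this specification.

```python
import json

# Hypothetical user/mode profile capturing desired audio effects and the
# context in which the mode should be activated.
profile = {
    "user": "wearer-01",
    "mode": "surround_emphasis",
    "activate_when": {"context": "watching_television"},
    "effects": {
        "equalizer_db": {"250": 0, "1000": 3, "4000": 9},
        "surround_emphasis": True,
    },
}

portable_form = json.dumps(profile)  # shareable across devices and systems
```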
A wearer of hearing assist device 1601 may define and initiate any of the foregoing operational modes by interacting with a user interface of hearing assist device 1601 or a user interface of device 1603, depending upon the implementation.
The improvement of audio quality as described herein may include suppressing audio components generated by certain audio sources and/or boosting audio components generated by certain other audio sources. Such suppression or boosting may be performed by device 1603 (and/or a device or service accessible thereto), with processed audio being returned to hearing assist device 1601 for playback thereby. Additionally or alternatively, processed audio may be played back by device 1603 in scenarios in which device 1603 is local with respect to the wearer of hearing assist device 1601. In accordance with the foregoing scenarios, the original audio may be captured by one or more microphones of hearing assist device 1601, a second hearing assist device, and/or device 1603 when device 1603 is local with respect to the wearer of hearing assist device 1601.
With respect to noise suppression, the noise suppression function may utilize not only audio signal(s) captured by the microphones of the hearing assist device(s) worn by a user but also the audio signal(s) captured by the microphone(s) of a portable electronic device carried by or otherwise accessible to the user. As is known to persons skilled in the art of audio processing, by adding additional and diverse microphone reference signals, the ability of a noise suppression algorithm to identify and suppress noise can be improved.
For example, FIG. 19 is a block diagram of a noise suppression system 1900 that may be utilized by a hearing assist device or a device/service communicatively connected thereto in accordance with an embodiment. Noise suppression system 1900 is configured to process an audio signal produced by a microphone of a left ear hearing assist device (denoted MIC L), an audio signal produced by a microphone of a right ear hearing assist device (denoted MIC R), and an audio signal produced by a microphone of an external device (denoted MIC EXT) to produce a noise-suppressed audio signal for playback to the left ear of a user (denoted LEFT).
In particular, noise suppression system 1900 includes an amplifier 1902 that amplifies the MIC L signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19). The output of amplifier 1902 is passed to a noise suppressor 1908. Noise suppression system 1900 further includes an amplifier 1904 that amplifies the MIC R signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19). The output of amplifier 1904 is passed to noise suppressor 1908. Noise suppression system 1900 still further includes an amplifier 1906 that amplifies the MIC EXT signal. Such signal may also be converted from analog to digital form by an A/D converter (not shown in FIG. 19). The output of amplifier 1906 is passed to noise suppressor 1908. Noise suppressor 1908 applies a noise suppression algorithm that utilizes all three amplified microphone signals to generate a noise-suppressed version of the MIC L signal. The noise-suppressed audio signal generated by noise suppressor 1908 is passed to an amplifier 1910 that amplifies it to produce the LEFT audio signal. Such signal may also be converted from digital to analog form by a D/A converter (not shown in FIG. 19) prior to playback.
It is noted that to operate in such a manner, noise suppression system 1900 must have access to the MIC L signal obtained by the left ear hearing assist device, the MIC R signal obtained by the right ear hearing assist device, and the MIC EXT signal obtained by the external device. This can be achieved by establishing suitable communication pathways between such devices. For example, in an embodiment in which noise suppression system 1900 is implemented in a portable electronic device carried by a user, the MIC L and MIC R signals may be obtained through skin-based communication and/or BLE communication between the portable electronic device and one or both of the two hearing assist devices, while the MIC EXT signal can be obtained directly from a microphone of the portable electronic device. Still other microphone signals other than those shown in FIG. 19 may be used to improve the performance of a noise suppressor.
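As an illustrative stand-in for noise suppressor 1908 (this specification does not mandate any particular algorithm), the following Python sketch treats the external microphone as a noise reference and spectrally subtracts it from the ear-worn microphones' signals; the parameters are hypothetical.

```python
import numpy as np

def suppress_noise(mic_l, mic_r, mic_ext, alpha=1.0, floor=0.05):
    """Produce a noise-suppressed version of the MIC L signal."""
    spec_l = np.fft.rfft(mic_l)
    # Average the two ear microphones for a steadier speech estimate.
    mag_speech = 0.5 * (np.abs(spec_l) + np.abs(np.fft.rfft(mic_r)))
    # Use the external microphone as a (rough) noise reference.
    mag_noise = np.abs(np.fft.rfft(mic_ext))
    # Spectral subtraction with a small floor to limit musical noise.
    mag_clean = np.maximum(mag_speech - alpha * mag_noise,
                           floor * mag_speech)
    return np.fft.irfft(mag_clean * np.exp(1j * np.angle(spec_l)),
                        n=len(mic_l))
```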
In further embodiments, a selection may be made between using audio input provided by the microphone(s) of the hearing assist device(s) and using audio input provided by the microphone(s) of the portable electronic device. Such selection may be made manually by the wearer of the hearing assist device(s) or may be made automatically by the hearing assist device(s) and/or the portable electronic device based on a variety of factors, including but not limited to the state of the battery of the hearing assist device(s), the quality of the audio signals being captured by each device, the environment in which the wearer is located, or the like.
In accordance with further embodiments, improving audio quality may also comprise selectively applying a boosting or amplification function to certain types of audio signals (for example, music or speech), to components of an audio signal emanating from a certain source, and/or to components of an audio signal emanating from a particular direction, while not amplifying or actively suppressing other audio signal types or components. Such processing may occur responsive to the user initiating a particular mode of operation or may occur automatically in response to detecting the existence of certain predefined conditions.
For example, in one embodiment, a user may activate a “forward only” mode in which audio signals emanating from in front of the user are boosted and signals emanating from other directions are not boosted or are actively attenuated. Such mode of operation may be desired when the user is engaging in conversation with a person that is directly in front of him/her. Additionally, such mode of operation may automatically be activated if it can be determined from sensor data obtained by the hearing assist device(s) worn by the user and/or by a portable electronic device carried by the user that the user is engaging in conversation with a person that is directly in front of him/her. In a like manner, a user may activate a “television” mode in which audio signals emanating from a television are boosted and signals emanating from other sources are not boosted or are actively attenuated. Additionally, such mode of operation may automatically be activated if it can be determined from sensor data obtained by the hearing assist device(s) worn by the user and/or by a portable electronic device carried by the user that the user is watching television.
In accordance with further embodiments, the audio processing functionality may be designed, programmed or otherwise configured such that certain sounds or noises should never be suppressed. For example, the audio processing functionality may be configured to always pass certain sounds such as extremely elevated sounds, a telephone or doorbell ringing, the honking of a car horn, an alarm or siren sounding, repeated sounds, or the like, to ensure that the wearer is made aware of important events. Likewise, the audio processing functionality may utilize speech recognition to ensure that certain uttered words are passed to the wearer, such as the wearer's name, the word “help” or other words.
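The always-pass behavior could be realized as a simple bypass check ahead of any suppression stage, as in the following Python sketch; the sound-class labels and detector outputs are assumptions for illustration.

```python
# Hypothetical classes of sounds that must never be suppressed.
ALWAYS_PASS_SOUNDS = {"doorbell", "car_horn", "siren", "alarm", "phone_ring"}
ALWAYS_PASS_WORDS = {"help"}  # plus, e.g., the wearer's own name

def must_pass(detected_class, recognized_words, wearer_name):
    """Return True if suppression must be bypassed for this audio event."""
    important_words = ALWAYS_PASS_WORDS | {wearer_name.lower()}
    return (detected_class in ALWAYS_PASS_SOUNDS
            or bool(set(recognized_words) & important_words))

# Usage: bypass suppression when the wearer's name is recognized.
assert must_pass("speech", {"alice", "hello"}, "Alice")
```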
In accordance with further embodiments, the types of audio that are boosted, passed or suppressed may be determined based on detecting prior and/or current activities of the user, inactivity of the user, time of day, or the like. For example, if it is determined from sensor data and from information derived therefrom that a user is sleeping, then all audio input may be suppressed with certain predefined exceptions. Likewise, certain sounds or verbal instructions may be injected at certain times, such as an alarm or morning wakeup music in the morning.
For each of the modes of operation described above, the required audio processing may be performed either by a hearing assist device, such as hearing assist device 1601, or by an external device, such as device 1603, with which hearing assist device 1601 is communicatively connected. By utilizing an external device, power, processing and storage resources of the hearing assist device may advantageously be conserved.
The foregoing describes the use of an external device or service to provide improved audio quality to a hearing assist device. It should be noted, however, that in a scenario in which a user is wearing two hearing assist devices that are capable of communicating with each other, one hearing assist device may be selected to perform any of the audio processing tasks described herein on behalf of the other. Such selection may be by design, in that one hearing assist device is equipped with more audio processing capabilities than the other. Alternatively, such selection may be performed dynamically based on a variety of factors including the comparative battery levels of each hearing assist device, a processing load currently assigned to each hearing assist device, or the like. Any audio that is processed by a first hearing assist device on behalf of a second hearing assist device may originate from one or more microphones of the first hearing assist device, from one or more microphones of the second hearing assist device, or from one or more microphones of a portable electronic device that is carried by or otherwise locally accessible to a wearer of the first and second hearing assist devices.
To help further illustrate the foregoing concepts, FIG. 20 depicts a flowchart 2000 of a method for providing external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 20, the method of flowchart 2000 begins at step 2002, in which a communication pathway is established to the hearing assist device. At step 2004, an audio signal obtained by the hearing assist device is received via the communication pathway. At step 2006, the audio signal is processed to obtain processing results. At step 2008, the processing results are transmitted to the hearing assist device via the communication pathway.
Depending upon the implementation, each of the establishing, receiving, processing and transmitting steps may be performed by one of a second hearing assist device worn by the user, a portable electronic device carried by or otherwise accessible to the user, or a device or service that is capable of communicating with the hearing assist device via a portable electronic device carried by or otherwise accessible to the user. As noted above, device 1603 may represent either a portable electronic device carried by or otherwise accessible to the user or a device or service that is capable of communicating with the hearing assist device via such a portable electronic device.
In accordance with certain embodiments, step 2002 of flowchart 2000 may comprise establishing a communication link with the hearing assist device using one of NFC, BTLE technology, WPT technology, telecoil, or skin-based communication technology.
In one embodiment, step 2006 of flowchart 2000 comprises processing the audio signal to generate an enhanced audio signal having a desired frequency response associated with the user and step 2008 comprises transmitting the enhanced audio signal to the hearing assist device via the communication pathway for playback thereby.
In another embodiment, step 2006 of flowchart 2000 comprises processing the audio signal to generate an enhanced audio signal having a desired spatial signaling characteristic associated with the user and step 2008 comprises transmitting the enhanced audio signal to the hearing assist device via the communication pathway for playback thereby.
In a further embodiment, step 2006 of flowchart 2000 comprises applying noise suppression to the audio signal to generate a noise-suppressed audio signal and step 2008 comprises transmitting the noise-suppressed audio signal to the hearing assist device via the communication pathway for playback thereby. In further accordance with such an embodiment, applying noise suppression to the audio signal may comprise processing the audio signal and at least one additional audio signal obtained by a portable electronic device carried by or otherwise accessible to the user.
In a still further embodiment, step 2006 of flowchart 2000 comprises applying speech recognition to the audio signal to identify one or more recognized words.
FIG. 21 depicts a flowchart 2100 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 21, the first additional step is step 2102, which comprises receiving a second audio signal obtained by a portable electronic device that is carried by or otherwise accessible to the user. At step 2104, the second audio signal is processed to obtain processing results. At step 2106, the processing results are transmitted to the portable electronic device. These additional steps encompass the scenario wherein at least the audio capturing and playback tasks are offloaded from the hearing assist device to the portable electronic device.
FIG. 22 depicts a flowchart 2200 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 22, the first additional step is step 2202, which comprises receiving a second audio signal obtained by a portable electronic device that is carried by or otherwise accessible to the user. At step 2204, the second audio signal is processed to obtain processing results. At step 2206, the processing results are transmitted to the hearing assist device. These additional steps encompass the scenario wherein audio capturing tasks are offloaded from the hearing assist device to the portable electronic device, but the audio playback task is retained by the hearing assist device.
FIG. 23 depicts a flowchart 2300 that illustrates steps that may be performed in addition to those shown in flowchart 2000 to provide external operational support to a hearing assist device worn by a user, such as hearing assist device 1601. As shown in FIG. 23, the first additional step is step 2302, which comprises receiving a second audio signal obtained by the hearing assist device. At step 2304, the second audio signal is processed to obtain processing results. At step 2306, the processing results are transmitted to a portable electronic device that is carried by or otherwise accessible to the user. These additional steps encompass the scenario wherein audio capturing tasks are retained by the hearing assist device while audio playback tasks are offloaded to the portable electronic device.
VII. Hearing Assist Device with Active Audio Filtering Supporting Substitute Audio Input
In accordance with further embodiments, an audio signal received by one or more microphones of a hearing assist device may be suppressed or blocked while a substitute audio input signal may be delivered to the wearer. For example, a language translation feature may be implemented in which an audio signal received by one or more microphones of a hearing assist device is transmitted to an external device or service. The external device or service applies a combination of speech recognition and translation thereto to synthesize a substitute audio signal. The substitute audio signal comprises a translated version of the speech included in the original audio signal. The substitute audio signal is then transmitted back to the hearing assist device for playback thereby. While this is occurring, the hearing assist device utilizes active filtering to suppress the original audio signal or blocks it entirely, so that the wearer can clearly hear the substitute audio signal being played back through a speaker of the hearing assist device.
As another example, an audio signal generated by a television, a DVD player, a compact disc (CD) player, a set top box, a portable media player, a handheld gaming device, or other entertainment device may be routed to a hearing assist device worn by a user for playback thereby. Such entertainment devices may also include smart phones, tablet computers, and other computing devices capable of running entertainment applications. While the user is listening to the audio being generated by the entertainment device, the hearing assist device may operate to suppress ambient background noise using an active filtering function, thereby providing the user with an improved listening experience. The delivery of the audio signal from the entertainment device to the hearing assist device and suppression of ambient background noise may occur in response to the establishment of a communication link between the hearing assist device and the entertainment device, or in response to other detectable factors, such as the hearing assist device being within a certain range of the entertainment device or the like. Conversely, the delivery of the audio signal from the entertainment device to the hearing assist device and suppression of ambient background noise may be discontinued in response to the breaking of a communication link between the hearing assist device and the entertainment device, or in response to other detectable factors, such as the hearing assist device passing outside of a certain range of the entertainment device or the like.
For safety reasons as well as certain practical reasons, there may be certain sounds or noises that should never be suppressed. Accordingly, the functionality described above for suppressing ambient audio in favor of a substitute audio stream could be configured to always pass certain sounds such as extremely elevated sounds, a telephone or doorbell ringing, the honking of a car horn, an alarm or siren sounding, repeated sounds, or the like, to ensure that the wearer is made aware of important events. Likewise, such functionality may utilize speech recognition to ensure that certain uttered words are always passed to the wearer, such as the wearer's name, the word "help" or other words. The functionality that monitors for such sounds and words may be present in the hearing assist device or in a portable electronic device that is communicatively connected thereto. When such sounds and words are passed to the hearing assist device, the substitute audio stream may be paused or discontinued (for example, a song the wearer was listening to may be paused or discontinued or a movie the wearer was viewing may be paused or discontinued). Furthermore, when such sounds and words are passed to the hearing assist device, the suppression of ambient noise may also be discontinued.
Generally speaking, a hearing assist device in accordance with an embodiment can receive any number of audio signals and selectively pass one or a mixture of some or all of the audio signals for playback to a wearer thereof. Additionally, a hearing assist device in accordance with such an embodiment can selectively amplify or suppress any one of the aforementioned audio signals. This is illustrated by the block diagram of FIG. 24, which shows an audio processing module 2400 that may be implemented in a hearing assist device in accordance with an embodiment.
As shown in FIG. 24, audio processing module 2400 is capable of receiving at least four different audio signals. These include an audio signal captured by a microphone of the hearing assist device (denoted MIC), an audio signal received via an NFC interface of the hearing assist device (denoted NFC), an audio signal received via a BLE interface of the hearing assist device (denoted BLE), and an audio signal received via a skin-based communication interface of the hearing assist device (denoted SKIN). Audio processing module 2400 is configured to process these audio signals to generate an output audio signal for playback via a speaker 2412.
As shown in FIG. 24, each of the MIC, NFC, BLE and SKIN signals is amplified by a corresponding amplifier 2402, 2412, 2422 and 2432. Each of these signals may also be converted from analog to digital form by a corresponding A/D converter (not shown in FIG. 24). The amplified signals are then passed to a corresponding multiplier 2404, 2414, 2424 and 2434, each of which applies a certain scaling function thereto, wherein such scaling function can be used to determine a relative degree to which each signal will contribute to a final output signal. Furthermore, switches 2406, 2416, 2426 and 2436 can be used to selectively remove the output of any of multipliers 2404, 2414, 2424, and 2434 from the final output signal. Any signals passed through switches 2406, 2416, 2426 and 2436 are received by a mixer 2408, which combines such signals to produce a combined audio signal. The combined audio signal is then passed to an amplifier 2410, which amplifies it to produce the output audio signal that will be played back by speaker 2412. The output audio signal may also be converted from digital to analog form by a D/A converter (not shown in FIG. 24) prior to playback.
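The following Python sketch restates the FIG. 24 signal flow (per-input gain, scaling, enable switch, mixing, and output gain); the numeric values are placeholders, not values from this specification.

```python
import numpy as np

def mix_sources(sources, gains, scales, switches, out_gain=1.0):
    """sources: dict of equal-length arrays keyed 'MIC', 'NFC', 'BLE', 'SKIN'."""
    mixed = np.zeros_like(next(iter(sources.values())))
    for name, signal in sources.items():
        if switches[name]:                        # switches 2406/2416/2426/2436
            mixed += gains[name] * scales[name] * signal  # amplifier + multiplier
    return out_gain * mixed                       # mixer 2408 and amplifier 2410

# Usage: favor an incoming BLE stream while keeping some ambient audio.
n = 160
sources  = {k: np.random.randn(n) for k in ("MIC", "NFC", "BLE", "SKIN")}
gains    = {k: 1.0 for k in sources}
scales   = {"MIC": 0.2, "NFC": 0.0, "BLE": 0.8, "SKIN": 0.0}
switches = {"MIC": True, "NFC": False, "BLE": True, "SKIN": False}
output = mix_sources(sources, gains, scales, switches)
```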
In further embodiments, audio processing module 2400 may include additional logic that can apply active filtering, noise suppression, speech intelligibility enhancement, or any of a variety of audio signal processing functions to any of the audio signals received by the hearing assist device. Such functionality can be used to emphasize certain sounds, for example. Additionally, audio processing module 2400 may also include an output path by which the MIC signal can be passed to an external device for remote processing thereof. Such remotely-processed signal may then be returned via any of the NFC, BLE or skin-based communication interfaces discussed above.
FIG. 24 thus illustrates that different audio streams may be picked up by the same hearing assist device. Whether one audio stream is exposed or not may depend on the circumstances, which can change from time to time. Consequently, each audio stream is delivered or filtered in varying dB intensities with prescribed equalization as managed by the hearing assist device or any one or more of the devices or services to which the hearing assist device may be communicatively connected.
VIII. Hearing Assist System with a Backup Hearing Assist Terminal
In an embodiment, a portable electronic device (such as portable electronic device 1505) carried by or otherwise locally accessible to a wearer of a hearing assist device (such as hearing assist device 1501 or 1503) is configured to detect when the hearing assist device is missing from the wearer's ear or discharged. In such a scenario, the portable electronic device responds by entering a hearing assist mode in which it captures ambient audio and processes it in accordance with a prescription associated with the wearer. As discussed above, such prescription may specify, for example, a desired frequency response or other desired characteristics of audio to be played back to the wearer. Such hearing assist mode may also be manually triggered by the wearer through interaction with a user interface of the portable electronic device. In an embodiment in which the portable electronic device comprises a telephone, the foregoing hearing assist mode may also be used to equalize and amplify incoming telephone audio. The functionality of the hearing assist mode may be included in an application that can be downloaded or otherwise installed on the portable electronic device.
In accordance with certain embodiments, the activation and use of the hearing assist mode of the portable electronic device may be carried out in a way that is not immediately discernible to others who may be observing the user. For example, in an embodiment in which the portable electronic device comprises a telephone, the telephone may be programmed to enter the hearing assist mode when the user raises the telephone to his/her ear and utters a particular activation word or words. Such a feature enables a user to make it look as if he or she is simply using his/her phone.
In an embodiment, the portable electronic device may be configured to use one or more sensors (for example, a camera and/or microphone) to determine who the current user of the portable electronic device is and to automatically select the appropriate prescription for that user when entering hearing assist mode. Alternatively, the user may interact with a user interface of the portable electronic device to select an appropriate volume level and prescription.
In accordance with further embodiments, the hearing assist device may be capable of issuing a warning message to the wearer thereof when it appears that the battery level of the hearing assist device is low. In response to receiving such warning message, the wearer may utilize the portable electronic device to perform a recharging operation by bringing the portable electronic device within a range of the hearing assist device that is suitable for wirelessly transferring power thereto as was previously described. Additionally or alternatively, the wearer may activate a mode of operation in which certain operations normally performed by the hearing assist device are performed instead by the portable electronic device or by a device or service that is communicatively connected to the portable electronic device.
IX. Hearing Assist Device Configuration Using Hand-Held Terminal Support
In an embodiment, a personal electronic device (such as personal electronic device 1505) may be used to perform a hearing test on a wearer of a hearing assist device (such as hearing assist device 1501 or 1503). The hearing test may involve causing the hearing assist device to play back sounds having certain frequencies at certain volumes and soliciting feedback from the wearer regarding whether such sounds were heard or not. Still other types of hearing tests may be performed. For example, a hearing test designed to determine a head transfer function useful in achieving desired spatial signaling for a particular user may also be administered. The test results may be analyzed to generate a personalized prescription for the wearer. Sensors within the hearing assist device may be used to measure distance to the ear drum or other factors that may influence test results so that such factors can be accounted for in the analysis. The personalized prescription may then be downloaded or otherwise transmitted to the hearing assist device for implementation thereby. Such personalized prescription may be formatted in a standardized manner such that it may be used by a variety of hearing assist devices or audio reproduction systems.
In certain embodiments, the test results are processed locally by the portable electronic device to generate a prescription. In alternate embodiments, the test results are transmitted from the portable electronic device to a remote system for automated analysis and/or analysis by a clinician or other qualified party and a prescription is generated via such remote analysis.
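As a rough illustration of the analysis step, the following Python sketch converts per-band hearing thresholds into per-band gains using a half-gain rule; the normal-hearing threshold, the gain rule, and the data format are assumptions, not requirements of this specification.

```python
NORMAL_THRESHOLD_DB = 20  # hearing level treated as unimpaired (assumed)

def build_prescription(test_results):
    """test_results: {band_hz: quietest audible level in dB HL}."""
    return {
        band: max(0, measured - NORMAL_THRESHOLD_DB) * 0.5  # half-gain rule
        for band, measured in test_results.items()
    }

# Usage: thresholds from a tone test become per-band gains in dB.
rx = build_prescription({250: 15, 1000: 35, 4000: 60})
# -> {250: 0.0, 1000: 7.5, 4000: 20.0}
```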
X. Hearing Assist Device Types
The hearing assist devices described herein may comprise devices such as those shown in FIGS. 2-6 and 15. However, it is noted that the hearing assist devices described herein may comprise a part of any structure or article that may cover an ear of a user or that may be proximally located to an ear of a user. For example, the hearing assist devices described herein may comprise a part of a headset, a pair of glasses, a visor, or a helmet worn by a user or may be designed to be connected or tethered to such headset, pair of glasses, visor, or helmet.
XI. Conclusion
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.