REFERENCE TO PRIOR APPLICATION
This application claims the benefit of U.S. Provisional Application No. 60/705,219, filed Aug. 3, 2005, which is incorporated by reference in its entirety.
BACKGROUND
1. Field of the Invention
The invention relates to somatic, auditory or cochlear communication to a user, and more particularly, to somatic, auditory or cochlear communication using phonemes.
2. Description of the Related Art
Phonemes are the speech sounds that form the words of a language, when used alone or when combined. More precisely, a phoneme is the smallest phonetic unit, or part of speech, that distinguishes one word from another. Various nomenclatures have been developed to describe words in terms of their constituent phonemes. The nomenclature of the International Phonetic Association (IPA) will be used here. Unless otherwise noted, examples of speech, speech sounds, phonetic symbols, phonetic spellings, and conventional spellings will be with respect to an American dialect of English, henceforth referred to simply as English. The principles can be extended to other languages.
FIG. 1 illustrates several exemplary plots 100 to introduce several spectral and temporal features of human speech through the examination of the English word, "fake", 105, and its component phonemes. The phonetic spelling (per the IPA) of the English word, "fake", 105, is "faik", 110. In English, the word comprises three separate phonemes: the consonant, "f", 142; the diphthong vowel, "ai", 191; and the consonant, "k", 107. Because phonemes are language and dialect dependent, an English speaker will hear "ai" as a single sound, "long A", 191, a diphthong (a sound combining two vowel sounds), while speakers of other languages may hear two different vowels, "a", 113, and "i", 114, each a monophthong (a single vowel sound). The phoneme, "k", 107, also comprises two parts: a short period of relative silence, 117, followed by the abrupt appearance of sound frequencies in a range of about 2500 to 7000 Hz, 118.
Spectral and temporal features of the individual phonemes are partially observable when viewing a plot of the waveform 140 of the spoken word. Here, pressure is shown on the vertical axis and time is shown on the horizontal axis. A spectrogram 120 reveals greater detail and structure. Here, frequency is shown on the vertical axis, time on the horizontal axis, and power is represented as a grey scale, with darker shades corresponding to higher power (sound intensity) levels. The consonants "f", 142, and "k", 107, primarily consist of sound frequencies above approximately 3000 Hz, while the vowel "ai", 191, primarily consists of sound frequencies below approximately 3500 Hz. The highlighted areas of the spectrogram 132, 134, 138 reveal additional features of human speech.
An early portion of the phoneme "f", 132, magnified in panel (A), 133, comprises sound frequencies predominantly above 3000 Hz. The distribution of power is irregular over time and frequency, giving rise to a sound quality resembling rushing air and creating the granular pattern on the spectrogram 132, 133.
The highlighted portion of the phoneme "ai", 134, magnified in panel (B), 135, shows a bimodal distribution of relatively low sound frequencies. Characteristic of diphthongs, one or more dominant frequencies, called "formants", shift in frequency over time. A portion 136 of panel (B), 135, magnified further in panel (D), 137, reveals a waxing and waning of power in all frequencies, a characteristic of the human voice. Unvoiced phonemes such as "f", 142, 132, 133, and "k", 107, 118, 138, 139, do not exhibit these cyclical amplitude fluctuations.
Some phonemes increase or decrease in power or intensity over their duration. This is evident in the highlighted portion of the phoneme “k”,138, magnified in panel (C),139. Here, sound energy decreases continually during a period of about 70 milliseconds.
Another important feature of human speech is the period of relative silence preceding some consonants. In the current example, the phoneme "k", 107, comprises approximately 70 milliseconds of quiet, 117, followed by the audible portion 118 of the phoneme "k", 107. Without this period of relative silence, some phonemes, including "k", would be unintelligible. Also, intervals of relative silence or power shifts are important for syllabification.
FIG. 2 is a table 200 of American English phonemes 225 shown in three nomenclatures: the International Phonetic Association (IPA), s{mpA (a phonetic spelling of SAMPA, the abbreviation for Speech Assessment Methods Phonetic Alphabet, a computer readable phonetic alphabet), and the Merriam-Webster Online Dictionary (m-w). Examples 226 of each phoneme (bold underlined letters) as used in an American English word are provided, along with the manner 237 and place 247 of articulation 227.
The manner of articulation 237 refers primarily to the way in which the speech organs, such as the vocal cords, tongue, teeth, lips, nasal cavity, etc., are used. Plosives 201, 204, 207, 211, 214, 217 are consonants pronounced by completely closing the breath passage and then releasing air. Fricatives 242, 243, 244, 245, 250, 252, 253, 254, 255 are consonants pronounced by forcing the breath through a narrow opening. Between the plosives and the fricatives are two affricates 224, 234, composite speech sounds that begin as a plosive and end as a fricative. Nasals 261, 264, 267 are consonants pronounced with breath escaping mainly through the nose rather than the mouth. Approximants 274, 275, 276, 271 are sounds produced while the airstream is barely disturbed by the tongue, lips, or other vocal organs. Vowels are speech sounds produced by the passage of air through the vocal tract with relatively little obstruction, including the monophthong vowels 280, 281, 282, 283, 284, 285, 286, 287, 288, 289 and the diphthong vowels 291, 292, 293, 294, 295.
The place of articulation 247 refers largely to the position of the tongue, teeth, and lips. Bilabials are pronounced by bringing both lips into contact with each other or by rounding them. Labiodentals are pronounced with the upper teeth resting on the inside of the lower lip. Dentals are formed by placing the tongue against the back of the top front teeth. Alveolars are sounded with the tongue touching or close to the ridge behind the teeth of the upper jaw. Palato-alveolars are produced by raising the tongue to or near the forward-most portion of the hard palate. Palatals are produced by raising the tongue to or near the hard palate. Velars are spoken with the back of the tongue close to, or in contact with, the soft palate (velum).
Other speech characteristics 228 include voice, dominant sound frequencies above about 3000 Hz (3 kHz+), and stops. In English, eight phonemes comprise a period of relative silence followed by a period of relatively high sound energy. These phonemes, called stops 228, are the plosives and the affricates 201, 204, 207, 211, 214, 217, 224, 234. Stops are not recognizable from their audible portion alone; recognition of these phonemes requires that they begin with silence. Phonemes may be voiced or unvoiced. For example, "b", 211, is the voiced version of "p", 201, and "z", 254, is the voiced version of "s", 244. Most English consonants, the plosives, affricates, and fricatives 201, 204, 207, 211, 214, 217, 224, 234, 242, 243, 244, 245, 250, 252, 253, 254, 255, comprise sound frequencies above 3000 Hz. In order for an individual to discriminate between these phonemes, he or she must be able to hear their higher frequencies. Unvoiced phonemes 201, 204, 207, 224, 242, 243, 244, 245, 250 in particular tend to be dominated by the higher sound frequencies.
SUMMARY OF CERTAIN EMBODIMENTS
In another embodiment there is a method of transforming a sequence of symbols representing phonemes into a sequence of arrays of nerve stimuli, the method comprising establishing a correlation between each member of a phoneme symbol set and an assignment of one or more channels of a multi-electrode array, accessing a sequence of phonetic symbols corresponding to a message, and activating a sequence of one or more electrodes corresponding to each phonetic symbol of the message identified by the correlation. The phonetic symbols may belong to one of the SAMPA, Kirshenbaum, or IPA Unicode digital character sets. The symbols may belong to the cmudict phoneme set. The correlation may be a one-to-one correlation. Activating a sequence of one or more electrodes may include an energizing period for each electrode, wherein the energizing period comprises a begin time parameter and an end time parameter. The begin time parameter may be representative of a time from an end of components of a previous energizing period of a particular electrode. The electrodes may be associated with a hearing prosthesis. The hearing prosthesis may comprise a cochlear implant.
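By way of a non-limiting sketch, the following snippet shows one possible form of such a correlation and of the resulting activation sequence. The channel assignments, the array size, and the SAMPA-style symbols are illustrative assumptions, not values prescribed by the method.

```python
# Illustrative sketch only: a correlation between SAMPA-style phoneme
# symbols and channels of a hypothetical 16-channel electrode array.
# The assignments below are invented, not taken from an actual fitting.
PHONEME_TO_CHANNELS = {
    "p": [0],
    "b": [0, 1],   # a symbol may be assigned more than one channel
    "s": [7],
    "z": [7, 8],
    "u": [12, 14],
}

def activation_sequence(phonetic_symbols):
    """Return, in message order, the channels to energize for each
    phonetic symbol of the message, per the correlation above."""
    return [PHONEME_TO_CHANNELS[symbol] for symbol in phonetic_symbols]

print(activation_sequence(["b", "u"]))  # [[0, 1], [12, 14]]
```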
In one embodiment there is a method of processing a sequence of spoken words into a sequence of sounds, the method comprising converting a sequence of spoken words into electrical signals, digitizing the electrical signals representative of the speech sounds, transforming the speech sounds into digital symbols representing corresponding phonemes, transforming the symbols representing the corresponding phonemes into sound representations, and transforming the sound representations into sounds.
Transforming the symbols representing the phonemes into sound representations may comprise accessing a data structure configured to map phonemes to sound representations, locating the symbols representing the corresponding phonemes in the data structure, and mapping the phonemes to sound representations. The method additionally may comprise creating the data structure, comprising identifying phonemes corresponding to a language used by a user of the method, establishing a set of allowed sound frequencies, generating a correspondence mapping the identified phonemes to the set of allowed sound frequencies such that each constituent phoneme of the identified phonemes is assigned a subset of one or more frequencies from the set of allowed sound frequencies, and mapping each constituent phoneme of the identified phonemes to a set of one or more sounds. Establishing a set of allowed sound frequencies may comprise selecting a set of sound frequencies that are in a hearing range of the user. Each sound of the set of one or more sounds may comprise an initial frequency parameter. Each sound of the set of one or more sounds may comprise a begin time parameter. The begin time parameter may be representative of a time from an end of components of a previous sound representation. Each sound of the set of one or more sounds may comprise an end time parameter. Each sound of the set of one or more sounds may comprise a power parameter. Each sound of the set of one or more sounds may comprise a power shift parameter. Each sound of the set of one or more sounds may comprise a frequency shift parameter. Each sound of the set of one or more sounds may comprise a pulse rate parameter. Each sound of the set of one or more sounds may comprise a duty cycle parameter.
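For illustration, a minimal sketch of a record carrying the sound parameters enumerated above, together with a hypothetical mapping of one phoneme to a two-tone sound set. The field names, units, and values are assumptions chosen for readability.

```python
from dataclasses import dataclass

@dataclass
class SoundRepresentation:
    """One sound of a phoneme's sound set, carrying the parameters
    enumerated above. Field names and units are illustrative choices."""
    initial_frequency_hz: float
    begin_time_ms: float   # offset from the end of the previous sound representation
    end_time_ms: float
    power_db: float
    power_shift_db: float = 0.0      # rise or fall in power over the duration
    frequency_shift_hz: float = 0.0  # glide applied to the initial frequency
    pulse_rate_hz: float = 0.0       # 0.0 means continuous (unpulsed)
    duty_cycle: float = 1.0          # on-fraction of each pulse period

# A hypothetical mapping of one phoneme to a two-tone sound set:
PHONEME_TO_SOUNDS = {
    "u": [
        SoundRepresentation(400.0, 0.0, 120.0, 40.0, pulse_rate_hz=12.0, duty_cycle=0.5),
        SoundRepresentation(600.0, 0.0, 120.0, 40.0, pulse_rate_hz=12.0, duty_cycle=0.5),
    ],
}
```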
In another embodiment there is a method of processing a sequence of spoken words into a sequence of nerve stimuli, the method comprising converting a sequence of spoken words into electrical signals, digitizing the electrical signals representative of the speech sounds, transforming the speech sounds into digital symbols representing corresponding phonemes, transforming the symbols representing the corresponding phonemes into stimulus definitions, and transforming the stimulus definitions into a sequence of nerve stimuli.
The nerve stimuli may be associated with a hearing prosthesis. The hearing prosthesis may comprise a cochlear implant. The nerve stimuli may be associated with a skin interface. The skin interface may be located on the wrist and/or hand of the user. Alternatively, the skin interface may be located on the ankle and/or foot of the user. The nerve stimuli may be mechanical and/or electrical. Transforming the symbols representing the phonemes into stimulus definitions may comprise accessing a data structure configured to map phonemes to stimulus definitions, locating the symbols representing the corresponding phonemes in the data structure, and mapping the phonemes to stimulus definitions. The stimulus definitions may comprise sets of one or more stimuli. The sets of one or more stimuli may correspond to one or more locations on the skin or one or more locations in the cochlea. Each stimulus of the sets of one or more stimuli may comprise a begin time parameter. The begin time parameter may be representative of a time from an end of components of a previous stimulus definition. Each stimulus of the sets of one or more stimuli may comprise an end time parameter.
In another embodiment there is a method of transforming a sequence of symbols representing phonemes into a sequence of arrays of nerve stimuli, the method comprising establishing a correlation between each member of a phoneme symbol set and an assignment of one or more channels of a multi-stimulator array, accessing a sequence of phonetic symbols corresponding to a message, and activating a sequence of one or more stimulators corresponding to each phonetic symbol of the message identified by the correlation. The stimulators may be vibrators affixed to the user's skin. The phonetic symbols may belong to one of the SAMPA, Kirshenbaum, or IPA Unicode digital character sets. The symbols may belong to the cmudict phoneme set. The correlation may be a one-to-one correlation. Activating a sequence of one or more stimulators may include an energizing period for each stimulator, wherein the energizing period comprises a begin time parameter and an end time parameter. The begin time parameter may be representative of a time from an end of components of a previous energizing period of a particular stimulator.
In another embodiment there is a method of training a user, the method comprising providing a set of somatic stimulations to a user, wherein the set of somatic stimulations is indicative of a plurality of phonemes, and wherein the phonemes are based at least in part on an audio communication; providing the audio communication concurrently to the user with the plurality of phonemes; and selectively modifying at least portions of the audio communication to the user during the providing of the set of somatic stimulations to the user.
Selectively modifying at least portions of the audio communication may comprise reducing an audio property of the audio communication. The audio property may comprise a volume of the audio. The audio property may comprise omitting selected words from the audio. The audio property may comprise attenuating a volume of selected words from the audio. The audio property may comprise omitting selected phonemes from the audio. The audio property may comprise attenuating a volume of selected phonemes from the audio. The audio property may comprise omitting selected sound frequencies from the audio. The audio property may comprise attenuating a volume of selected sound frequencies from the audio.
In another embodiment there is a method of training a user, the method comprising providing a set of somatic stimulations to a user, wherein the set of somatic stimulations is indicative of a plurality of phonemes, and wherein the phonemes are based at least in part on an audiovisual communication; providing the audiovisual communication concurrently to the user with the plurality of phonemes; and selectively modifying at least portions of the audiovisual communication to the user during the providing of the set of somatic stimulations to the user.
Selectively modifying at least portions of the audiovisual communication may comprise reducing an audio or video property of the audiovisual communication. The audio property may comprise a volume of the audio. The audio property may comprise omitting selected words from the audio. The audio property may comprise attenuating a volume of selected words from the audio. The audio property may comprise omitting selected phonemes from the audio. The audio property may comprise attenuating a volume of selected phonemes from the audio. The audio property may comprise omitting selected sound frequencies from the audio. The audio property may comprise attenuating a volume of selected sound frequencies from the audio. The video property may comprise a presence or brightness of the video.
In another embodiment there is a system for processing a sequence of spoken words into a sequence of sounds, the system comprising a first converter configured to digitize electrical signals representative of a sequence of spoken words, a speech recognizer configured to receive the digitized electrical signals and generate a sequence of phonemes representative of the sequence of spoken words, a mapper configured to assign sound sets to phonemes utilizing an audiogram so as to generate a map, a transformer configured to receive the sequence of phonemes representative of the sequence of spoken words and the map and to generate a sequence of sound representations corresponding to the sequence of phonemes, and a second converter configured to convert the sequence of sound representations into a sequence of audible sounds. The map may be a user-specific map based on a particular user's audiogram.
In another embodiment there is a system for processing a sequence of spoken words into a sequence of sounds, the system comprising a first converter configured to digitize electrical signals representative of a sequence of spoken words, a speech recognizer configured to receive the digitized electrical signals and generate a sequence of phonemes representative of the sequence of spoken words, a data structure comprising sound sets mapped to phonemes, a transformer configured to receive the sequence of phonemes representative of the sequence of spoken words and the data structure and to generate a sequence of sound representations corresponding to the sequence of phonemes, and a second converter configured to convert the sequence of sound representations into a sequence of audible sounds. The data structure may be generated utilizing a user's audiogram.
In another embodiment there is a system for processing a sequence of spoken words into a sequence of nerve stimuli, the system comprising a converter configured to digitize electrical signals representative of a sequence of spoken words, a speech recognizer configured to receive the digitized electrical signals and generate a sequence of phonemes representative of the sequence of spoken words, a mapper configured to assign nerve stimuli arrays to phonemes utilizing an audiogram so as to generate a map, and a transformer configured to receive the sequence of phonemes representative of the sequence of spoken words and the map and to generate a sequence of stimulus definitions corresponding to the sequence of phonemes. The system may additionally comprise a receiver configured to convert the sequence of stimulus definitions into electrical waveforms and an electrode array configured to receive the electrical waveforms. The electrode array may be surgically placed in the user's cochlea. The sequence of stimulus definitions may comprise digital representations of nerve stimulation patterns.
In another embodiment there is a system for processing a sequence of spoken words into a sequence of nerve stimuli, the system comprising a converter configured to digitize electrical signals representative of a sequence of spoken words, a speech recognizer configured to receive the digitized electrical signals and generate a sequence of phonemes representative of the sequence of spoken words, a data structure comprising nerve stimuli arrays mapped to phonemes, and a transformer configured to receive the sequence of phonemes representative of the sequence of spoken words and the data structure and to generate a sequence of stimulus definitions corresponding to the sequence of phonemes. The data structure may be generated utilizing a user's audiogram. The system may additionally comprise a receiver configured to convert the sequence of stimulus definitions into electrical waveforms and an electrode array configured to receive the electrical waveforms. The electrode array may be surgically placed in the user's cochlea. The sequence of stimulus definitions may comprise digital representations of nerve stimulation patterns.
In another embodiment there is a system for processing a sequence of spoken words into a sequence of nerve stimuli, the system comprising a processor configured to generate a sequence of phonemes representative of a sequence of spoken words and to transform the sequence of phonemes using a data structure comprising nerve stimuli arrays mapped to phonemes to produce a sequence of stimulus definitions corresponding to the sequence of phonemes, and an electrode array configured to play the sequence of stimulus definitions. The data structure may be generated utilizing a user's audiogram. The electrode array may comprise a converter configured to convert the sequence of stimulus definitions into electrical waveforms. The electrode array may be surgically placed in the user's cochlea. The electrode array may comprise a plurality of mechanical stimulators or a plurality of electrodes. The sequence of stimulus definitions may comprise digital representations of nerve stimulation patterns.
In another embodiment there is a system for processing a sequence of spoken words into a sequence of sounds, the system comprising a processor configured to generate a sequence of phonemes representative of the sequence of spoken words and to transform the sequence of phonemes using a data structure comprising sound sets mapped to phonemes to produce sound representations corresponding to the sequence of phonemes, and a converter configured to convert the sound representations into audible sounds. The data structure may be generated utilizing a user's audiogram.
In another embodiment there is a system for processing a sequence of text into a sequence of sounds, the system comprising a first converter configured to receive a sequence of text and generate a sequence of phonemes representative of the sequence of text, a mapper configured to assign sound sets to phonemes utilizing a hearing audiogram so as to generate a map, a transformer configured to receive the sequence of phonemes representative of the sequence of text and the map and to generate sound representations corresponding to the sequence of phonemes, and a second converter configured to convert the sound representations into audible sounds. The hearing audiogram may be representative of a normal human hearing range. The hearing audiogram may be representative of a hearing range for a specific individual.
In another embodiment there is a system for processing a sequence of text into a sequence of sounds, the system comprising a text converter configured to receive a sequence of text and generate a sequence of phonemes representative of the sequence of text, a data structure comprising sound sets mapped to phonemes, a transformer configured to receive the sequence of phonemes representative of the sequence of text and the data structure and to generate sound representations corresponding to the sequence of phonemes, and a second converter configured to convert the sound representations into audible sounds. The data structure may be generated utilizing a user's audiogram.
In another embodiment there is a system for processing a sequence of text into a sequence of nerve stimuli, the system comprising a converter configured to receive a sequence of text and generate a sequence of phonemes representative of the sequence of text, a data structure comprising nerve stimuli arrays mapped to phonemes, and a transformer configured to receive the sequence of phonemes representative of the sequence of text and the data structure and to generate a sequence of stimulus definitions corresponding to the sequence of phonemes. The data structure may be generated utilizing a user's abilities. The user's abilities may comprise useable channels of a cochlear implant of the user. The user's abilities may comprise the ability to distinguish between two or more unique stimuli.
In another embodiment there is a method of processing a sequence of text into a sequence of sounds, the method comprising transforming the sequence of text into digital symbols representing corresponding phonemes, transforming the symbols representing the corresponding phonemes into sound representations, and transforming the sound representations into a sequence of sounds.
In another embodiment there is a method of processing a sequence of text into a sequence of nerve stimuli, the method comprising transforming the sequence of text into digital symbols representing corresponding phonemes, transforming the symbols representing the corresponding phonemes into stimulus definitions, and transforming the stimulus definitions into a sequence of nerve stimuli. The nerve stimuli may be associated with a cochlear implant. The nerve stimuli may be associated with a skin interface, where the skin interface may be located on the wrist and/or hand of the user. Transforming the symbols representing the phonemes into stimulus definitions may comprise accessing a data structure configured to map phonemes to stimulus definitions, locating the symbols representing the corresponding phonemes in the data structure, and mapping the phonemes to stimulus definitions.
In yet another embodiment there is a method of creating a data structure configured to transform symbols representing phonemes into sound representations, the method comprising identifying phonemes corresponding to a language utilized by a user, establishing a set of allowed sound frequencies, generating a correspondence mapping the identified phonemes to the set of allowed sound frequencies such that each constituent phoneme of the identified phonemes is assigned a subset of one or more frequencies from the set of allowed sound frequencies, and mapping each constituent phoneme of the identified phonemes to a set of one or more sounds. Establishing a set of allowed sound frequencies may comprise selecting a set of sound frequencies that are in a hearing range of the user. Each sound of the set of one or more sounds may comprise an initial frequency parameter. Each sound of the set of one or more sounds may comprise a begin time parameter. The begin time parameter may be representative of a time from an end of components of a previous sound representation. Each sound of the set of one or more sounds may comprise an end time parameter. Each sound of the set of one or more sounds may comprise a power parameter. Each sound of the set of one or more sounds may comprise a power shift parameter. Each sound of the set of one or more sounds may comprise a frequency shift parameter. Each sound of the set of one or more sounds may comprise a pulse rate parameter. Each sound of the set of one or more sounds may comprise a duty cycle parameter.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram showing a spectrogram, waveform and phonemes for an English word.
FIG. 2 is a table of English phonemes shown in three nomenclatures.
FIG. 3A is a plot of sound intensity and sound frequency showing normal human hearing.
FIG. 3B is a plot of sound intensity and sound frequency showing hearing loss such as caused by chronic exposure to loud noise.
FIG. 3C is a plot of hearing level and sound frequency, as would appear in the form of a clinical audiogram, showing normal human hearing, and is analogous to the plot of FIG. 3A.
FIG. 3D is a plot of hearing level and sound frequency, as would appear in the form of a clinical audiogram, showing hearing loss such as caused by chronic exposure to loud noise, and is analogous to the plot of FIG. 3B.
FIGS. 4A and 4B are diagrams showing conventional physical configurations of body-worn and in-the-ear hearing aids, respectively.
FIGS. 4C and 4D are diagrams showing functional components of low-complexity and medium-complexity hearing aids, respectively.
FIG. 4E is a diagram of a phoneme substitution based hearing aid.
FIG. 5A is a diagram showing a spectrogram, waveform and phonemes for the English word "chew".
FIG. 5B is a diagram similar to that of FIG. 5A but showing use of amplification in the spectrogram and waveform.
FIG. 5C is a diagram similar to that of FIG. 5A but showing use of speech processing in the spectrogram and waveform.
FIG. 5D is a diagram similar to that of FIG. 5A but showing use of phoneme substitution in the spectrogram and waveform.
FIG. 6 is a diagram of an embodiment of the components associated with a hearing aid using phoneme substitution.
FIG. 7 is a flowchart of an embodiment of an assignment of sound sets to phonemes process shown in FIG. 6.
FIG. 8 is a diagram of an example of a phoneme substitution data structure such as resulting from the assignment of sound sets to phonemes process shown in FIG. 7.
FIG. 9 is a plot of a spectrogram for the English word "jousting" as a result of phoneme substitution such as performed using the data structures shown in FIG. 8.
FIG. 10A is a diagram of physical components of an example of a cochlear implant hearing device.
FIG. 10B is a diagram of a functional configuration of the example cochlear implant hearing device shown in FIG. 10A.
FIG. 11A is a diagram showing a spectrogram, waveform and phonemes for the English word "chew".
FIG. 11B is a diagram similar to that of FIG. 11A but showing use of conventional sound processing in the spectrogram.
FIG. 11C is a diagram similar to that of FIG. 11A but showing use of phoneme substitution in the spectrogram.
FIG. 12 is a diagram of an embodiment of the components associated with a hearing implant using phoneme substitution.
FIG. 13 is a diagram showing an embodiment of an implanted electrode array and an example structure of potential electrode assignments, such as stored in the database of nerve stimuli arrays to phonemes shown in FIG. 12.
FIG. 14A is a diagram of an embodiment of a skin interface, used with phoneme substitution, having mechanical or electrical stimulators fitted about a person's hand and wrist.
FIG. 14B is a diagram of an embodiment of a skin interface, used with phoneme substitution, having mechanical or electrical stimulators fitted about a person's wrist.
FIG. 15 is a table providing examples of mapping English phonemes to tactile symbols, such as for the skin interfaces shown in FIGS. 14A and 14B.
FIG. 16A is a diagram of various ways of representing the English word “chew”.
FIG. 16B is a diagram showing embodiments of transmitters and receivers for implementing phoneme substitution communication, such as shown in FIGS. 6, 12, 14A, and 14B.
DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
The following detailed description of certain embodiments presents various descriptions of specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways as defined and covered by the claims. In this description, reference is made to the drawings, wherein like parts are designated with like numerals throughout.
The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments of the invention. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the inventions herein described.
The system comprises various modules, tools, and applications, as discussed in detail below. As can be appreciated by one of ordinary skill in the art, each of the modules may comprise various sub-routines, procedures, definitional statements, and macros. Each of the modules is typically separately compiled and linked into a single executable program. Therefore, the following description of each of the modules is used for convenience to describe the functionality of the preferred system. Thus, the processes that are undergone by each of the modules may be arbitrarily redistributed to one of the other modules, combined together in a single module, or made available in, for example, a shareable dynamic link library.
The system modules, tools, and applications may be written in any programming language such as, for example, C, C++, BASIC, Visual Basic, Pascal, Ada, Java, HTML, XML, or FORTRAN, and executed on an operating system, such as variants of Windows, Macintosh, UNIX, Linux, VxWorks, or other operating system. C, C++, BASIC, Visual Basic, Pascal, Ada, Java, HTML, XML and FORTRAN are industry standard programming languages for which many commercial compilers can be used to create executable code.
A computer or computing device may be any processor controlled device, which may permit access to the Internet, including terminal devices, such as personal computers, workstations, servers, clients, mini-computers, main-frame computers, laptop computers, a network of individual computers, mobile computers, palm-top computers, hand-held computers, set top boxes for a television, other types of web-enabled televisions, interactive kiosks, personal digital assistants, interactive or web-enabled wireless communications devices, mobile web browsers, or a combination thereof. The computers may further possess one or more input devices such as a keyboard, mouse, touch pad, joystick, pen-input-pad, and the like. The computers may also possess an output device, such as a visual display and an audio output. One or more of these computing devices may form a computing environment.
These computers may be uni-processor or multi-processor machines. Additionally, these computers may include an addressable storage medium or computer accessible medium, such as random access memory (RAM), an electronically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), hard disks, floppy disks, laser disk players, digital video devices, compact disks, video tapes, audio tapes, magnetic recording tracks, electronic networks, and other techniques to transmit or store electronic content such as, by way of example, programs and data. In one embodiment, the computers are equipped with a network communication device such as a network interface card, a modem, or other network connection device suitable for connecting to the communication network. Furthermore, the computers execute an appropriate operating system such as Linux, UNIX, any of the versions of Microsoft Windows, Apple MacOS, IBM OS/2 or other operating system. The appropriate operating system may include a communications protocol implementation that handles all incoming and outgoing message traffic passed over the Internet. In other embodiments, while the operating system may differ depending on the type of computer, the operating system will continue to provide the appropriate communications protocols to establish communication links with the Internet.
The computers may contain program logic, or other substrate configuration representing data and instructions, which cause the computer to operate in a specific and predefined manner, as described herein. A computer readable medium can store the data and instructions for the processes and methods described hereinbelow. In one embodiment, the program logic may be implemented as one or more object frameworks or modules. These modules may be configured to reside on the addressable storage medium and configured to execute on one or more processors. The modules include, but are not limited to, software or hardware components that perform certain tasks. Thus, a module may include, by way of example, components, such as, software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
The various components of the system may communicate with each other and other components comprising the respective computers through mechanisms such as, by way of example, interprocess communication, remote procedure call, distributed object interfaces, and other various program interfaces. Furthermore, the functionality provided for in the components, modules, and databases may be combined into fewer components, modules, or databases or further separated into additional components, modules, or databases. Additionally, the components, modules, and databases may be implemented to execute on one or more computers. In another embodiment, some of the components, modules, and databases may be implemented to execute on one or more computers external to a website. In this instance, the website may include program logic, which enables the website to communicate with the externally implemented components, modules, and databases to perform the functions as disclosed herein.
The plots 100 of FIG. 1 illustrate one word in one language. Each language and dialect has its own set or sets of phonemes (different classification systems may define different sets of phonemes for the same language or dialect). The scope of this description encompasses all phonemes, both those currently defined and those not yet defined, for all languages.
As previously described, FIG. 2 is a table 200 of American English phonemes 225 shown in three nomenclatures: the International Phonetic Association (IPA), s{mpA (a phonetic spelling of SAMPA, the abbreviation for Speech Assessment Methods Phonetic Alphabet), and the Merriam-Webster Online Dictionary (m-w). Other nomenclatures, such as the Carnegie Mellon University pronouncing dictionary (cmudict), can be used in certain embodiments. Examples 226 of each phoneme as used in an American English word are provided, along with the manner 237 and place 247 of articulation 227.
Some embodiments relate to recoding phonemes to sets of sound frequencies that can be perceived by a user who lacks the ability to hear the full range of human speech sounds.
FIG. 3A, plot 300a, shows a range of human hearing that is considered normal, region 310a, on a plot of the sound frequency in Hertz (horizontal axis) versus the sound intensity in watts/m² (vertical axis). The threshold of perception is the bottom (low intensity) boundary 312a, 314a, which varies as a function of frequency. Human hearing is most sensitive to sound frequencies around 3000 Hz. At these frequencies, the threshold of perception 314a can be less than 10⁻¹² watts/m² (0 dB), 341a, for some individuals. The threshold of discomfort is the top (high intensity) boundary, 316a. The low frequency limit of human hearing is defined as the frequency at which the threshold of perception meets the threshold of discomfort, 318a. The high frequency limit of human hearing, 319a, is defined in the same manner. For reference, the OSHA limit for safe long term exposure to noise in the work environment, 90 dB, is equivalent to 10⁻³ watts/m², 343a. Sound frequencies and intensities required for speech perception are generally between about 300 Hz and 9000 Hz and about 10⁻¹⁰ to 10⁻⁷ watts/m² (20 dB to 50 dB), region 320a. Lower frequencies, area 323a, are most important for the recognition of vowel sounds, while higher sound frequencies, area 326a, are more important for the recognition of consonants (also see FIGS. 1 and 2).
Five to ten percent of people have a more limited hearing range, such as region 310b of FIG. 3B, than that shown in FIG. 3A. Many different types of hearing impairments exist. For example, one or both ears may be affected in their sensitivities to different sound frequencies. Hearing impairments may be congenital or acquired later in life, and may result from, or be influenced by, genetic factors, disease processes, medical treatments, and/or physical trauma.
Exposure to loud noise causes irreversible damage to the human hearing apparatus. FIG. 3B, plot 300b, illustrates a reduced range of hearing, region 310b, as might result from chronic exposure to noise levels above 90 dB, 343b. Although the threshold of perception for low frequency sounds 312b is only slightly affected, the ability to hear higher frequency sounds 314b is significantly impaired. A person with a hearing range as shown in FIG. 3B at region 310b would be able to hear and recognize most low frequency vowel sounds at region 320b, but would find it difficult or impossible to hear and recognize many high frequency consonant sounds 330b. As a result, this person would be able to hear when people are speaking, but would be unable to understand what they are saying. For reference, the normal threshold of perception, 0 dB or 10⁻¹² watts/m², is indicated by the arrow 341b, and the OSHA limit for safe long term exposure to noise in the work environment, 90 dB or 10⁻³ watts/m², is indicated by the arrow 343b. Often, the threshold of discomfort 316b is relatively unaffected by a rise in the threshold of perception.
Often, hearing aids can improve speech recognition by amplifying speech sounds above the threshold of perception for hearing impaired persons. One embodiment is a device that recodes speech sounds to frequencies in a range of sensitive hearing rather than amplifying them at the frequencies where hearing is impaired. For example, an individual with a hearing range similar to that shown in FIG. 3B, plot 300b, would not hear most speech sounds at frequencies above around 1500 Hz in region 330b, but could hear sounds recoded to sound frequencies around 400 Hz in area 350b.
Audiometry provides a practical and clinically useful measurement of hearing by having the subject wear earphones attached to the audiometer. Pure tones of controlled intensity are delivered to one ear at a time. The subject is asked to indicate when he or she hears a sound. The minimum intensity (volume) required to hear each tone is graphed versus frequency. The objective of audiometry is to plot an audiogram, a chart of the weakest intensity of sound that a subject can detect at various frequencies.
Although an audiogram presents similar information to the graphs in FIGS. 3A and 3B, it differs in several aspects. Although the human ear can detect frequencies from 20 to 20,000 Hz, hearing threshold sensitivity is usually measured only for the frequencies needed to hear the sounds of speech, 250 to 8,000 Hz. The sound intensity scale of an audiogram is inverted compared with the graphs in FIGS. 3A and 3B, and is measured in decibels (dB), a log scale on which zero has been arbitrarily defined as 10⁻¹² watts/m². Also, the audiogram provides an individual assessment of each ear.
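For readers converting between the two scales, the relationship is dB = 10·log10(I/10⁻¹²). A short sketch follows; the function names are illustrative, not taken from any audiometry library.

```python
import math

REFERENCE = 1e-12  # watts/m^2; the 0 dB reference used by audiograms

def intensity_to_db(intensity):
    """Convert a sound intensity in watts/m^2 to decibels."""
    return 10.0 * math.log10(intensity / REFERENCE)

def db_to_intensity(level_db):
    """Convert a decibel level back to an intensity in watts/m^2."""
    return REFERENCE * 10.0 ** (level_db / 10.0)

# The OSHA limit cited above: 90 dB corresponds to 10^-3 watts/m^2.
print(intensity_to_db(1e-3))   # 90.0
print(db_to_intensity(90.0))   # ~1e-3
```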
FIG. 3C, plot 300c, shows an audiogram for an individual with normal hearing, similar to that shown in FIG. 3A, plot 300a. A shaded area 320c represents the decibel levels and frequencies where speech sounds are generally perceived (the so-called "speech banana", similar to the shaded area of FIG. 3A, region 320a, but inverted). Hearing in the right ear is represented by circles connected by a line 362c and in the left ear by crosses connected by a line 364c. The symbols (circle for the right ear and cross for the left) indicate the person's hearing threshold at particular frequencies, i.e., the loudness (intensity) point where sound is just audible. Thresholds of perception from zero dB, shown by arrow 341c, to 15 dB (1.0×10⁻¹² to 3.2×10⁻¹¹ watts/m²) are considered to be within the normal hearing range. The OSHA limit for safe long-term exposure to noise, 90 dB, or 10⁻³ watts/m², shown by arrow 343c, is also provided for reference. An area designated 323c indicates the range most important for hearing vowel sounds, and an area designated 326c indicates the range most important for hearing consonants.
FIG. 3D, plot 300d, shows an exemplary audiogram of an individual with bilaterally symmetrical hearing loss (similar hearing losses in both ears), similar to that shown in FIG. 3B, plot 300b. Hearing in the right ear is represented by circles connected by a line 362d and in the left ear by crosses connected by a line 364d. At the lower frequencies (250 to 500 Hz), little hearing loss has occurred, area 350d. However, at the mid-range of frequencies (500 to 1000 Hz) hearing loss is moderate, area 320d, and at the higher frequencies (>2000 Hz), hearing loss is severe, area 330d. A person with this degree of hearing loss would be able to hear and recognize most low frequency vowel sounds, area 320d, but would find it difficult or impossible to hear and recognize many high frequency consonant sounds, area 330d. As a result, this person would be able to hear when people are speaking, but would be unable to understand what they are saying. Again, the normal threshold of perception, 0 dB or 10⁻¹² watts/m², shown by arrow 341d, and the OSHA limit for safe long term exposure to noise in the work environment, 90 dB or 10⁻³ watts/m², shown by arrow 343d, are provided for reference.
Often, hearing aids can improve speech recognition by amplifying speech sounds above the threshold of perception for hearing impaired persons. An embodiment is a device that recodes speech sounds to frequencies in a range of sensitive hearing rather than amplifying them at the frequencies where hearing is impaired. For example, an individual with an audiogram similar to that shown in FIG. 3D, plot 300d, would not hear most speech sounds at frequencies above around 1500 Hz, area 330d, but could hear sounds recoded to sound frequencies around 400 Hz, area 350d.
There are many types of hearing aids, which vary in physical configuration, power, circuitry, and performance. They all aid sound and speech perception by amplifying sounds that would otherwise be imperceptible to the user; however, their effectiveness is often limited by distortion and the narrow range in which the amplified sound is audible, but not uncomfortable. Certain embodiments described herein overcome these limitations.
FIGS. 4A and 4B, diagrams 400a, 400b, illustrate some of the basic physical configurations found in hearing aid designs. A body-worn aid 420a may comprise a case 412a containing a power supply and components of amplification; and an ear mold 416a containing an electronic speaker, connected to the case by a cord 414a. Behind-the-ear aids 410b, 420b may consist of a small case 412b containing a power supply, components of amplification and an electronic speaker, which fits behind an ear 404b; an ear mold 416b; and a connector 414b, which conducts sound to the ear 404b through the ear mold 416b. In-the-ear aids 430b comprise a power supply, components of amplification, and an electronic speaker, fit entirely within an outer ear 406b.
Operational principles of hearing aids may vary among devices, even if they share the same physical configuration. FIGS. 4C and 4D, diagrams 400c and 400d, illustrate some of the functional components found in hearing aid designs. The least complex device 420c comprises a microphone 413c, which converts sounds such as speech from another person 408c into an electronic signal. The electronic signal is then amplified by an amplifier 415c and converted back into sound by an electronic speaker 417c in proximity to the user's ear 404c.
More sophisticated devices 420d comprise a microphone 413d and a speaker 417d, which perform the same functions as their counterparts 413c, 417c, respectively. However, sound and speech processing circuitry 415d can function differently from simple amplification circuitry 415c. Sound and speech processing circuitry 415d may be either digital or analog in nature. Unlike the simple amplifier 415c, sound and speech processing circuitry 415d can amplify different portions of the sound spectrum to different degrees. These devices might incorporate electronic filters that reduce distracting noise and might be programmed with different settings corresponding to the user's needs in different environments (e.g., noisy office or quiet room).
An embodiment is shown in FIG. 4E, diagram 400e. A device 420e differs in its principle of operation from the hearing aids 420c and 420d in that its circuitry 415e can substitute the phonemes of speech sounds with unique sets of sounds (acoustic symbols). By substituting some or all of the phonemes in a given language with simple acoustic symbols, it is possible to utilize portions of the sound spectrum where a user may have relatively unimpaired hearing. The symbols themselves may represent phonemes, sets of phonemes, portions of phonemes, or types of phonemes. For an individual with an audiogram similar to that shown in FIG. 3D, the acoustic symbols could, for example, comprise sound frequencies between 200 Hz and 600 Hz, which would be audible to that person.
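As a non-limiting sketch of this idea, the snippet below assigns each phoneme a unique combination of pure tones within an assumed 200 to 600 Hz band of unimpaired hearing; the SAMPA-style symbols and tone frequencies are invented for illustration.

```python
# A sketch of phoneme substitution for a user who hears well only below
# about 600 Hz: each phoneme is assigned a unique combination of pure
# tones in the 200-600 Hz band. All frequencies here are invented.
ACOUSTIC_SYMBOLS = {
    "tS": {250, 450},   # SAMPA "tS" (the "ch" of "chew")
    "u":  {350, 550},
    "f":  {250, 550},
    "k":  {450, 550},
}

def substitute(phonemes):
    """Replace each phoneme with the tone set of its acoustic symbol."""
    return [sorted(ACOUSTIC_SYMBOLS[p]) for p in phonemes]

print(substitute(["tS", "u"]))  # [[250, 450], [350, 550]]
```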
In FIG. 5, the English word, "chew", 505a, is used to compare and contrast certain embodiments described herein to conventional technologies. FIG. 5A, plots 500a, provides a spectrogram 520a and waveform 540a for the word, "chew", 505a. When spoken, "chew" comprises two phonemes, tʃ, 524a, and u, 586a, which are visible as two distinctive regions 542a and 544a of the waveform 540a. However, as with the example for the English word, "fake", FIG. 1, plots 100, the waveform is too complex to expose much informative detail via visual inspection. The spectrogram 520a reveals a greater level of relevant detail. Here it is seen that the phoneme, tʃ, 524a, comprises a complex set of sound frequencies 521a broadly distributed largely above 3000 Hz. Most of the power for the phoneme, u, 586a, is contained in relatively tight frequency ranges around 500 Hz, 523a, and 2500 Hz, 522a. Additionally, u, 586a, is a voiced phoneme, exhibiting characteristic waxing and waning of power over many frequencies, observable as faint vertical stripes within the bands labeled 522a and 523a. The waxing and waning itself has a frequency of approximately 250 Hz (≈25 stripes per 100 milliseconds on the time axis).
An individual with an audiogram similar to that shown in FIG. 3D, plots 300d, might be able to hear the phoneme, u, 586a, reasonably well because its frequencies are in the lower range of speech. However, this individual would not hear tʃ, 524a, because this person's hearing is impaired at higher frequencies. A hearing aid using simple amplification can help to some extent by increasing the sound pressure (a.k.a. volume, a.k.a. power) at all frequencies as illustrated in FIG. 5B, plots 500b. As seen in the waveform, 540b, sound pressure has been increased for the phonemes, tʃ, 542b, and u, 544b, relative to corresponding portions of the waveform 540a, 542a and 544a, FIG. 5A. The spectrogram reveals that low frequency sounds, 523b, have been amplified even though there is little or no need for amplification at these frequencies. This can result in distorted perception and discomfort for the user. Extraneous ambient noise is also amplified, as seen in area 528b, interfering with speech recognition and comfort.
FIG. 5C, plots 500c, illustrates a spectrogram 520c and waveform 540c obtained when the word, "chew", is spoken into a hearing aid with speech/sound processing capability. Increased amplitude is observed in the waveform area 542c but less so in the area 544c relative to corresponding portions of the waveform 540a, 542a and 544a, FIG. 5A. The spectrogram 520c reveals that most amplification occurs at the higher frequencies 521c and 522c but less so at the lower frequencies 523c. Therefore the low frequency components 523c of the phoneme, u, 586c, are not too loud. Noise problems are also reduced. However, the sound at 521c and 522c may be so loud that it is uncomfortable and could damage remaining hearing.
FIG. 5D, plots 500d, provides an example of a waveform, 540d, and spectrogram, 520d, as might result from recoding the word "chew" using the phoneme substitution method described herein. The waveform 540d and spectrogram 520d have been simplified relative to those in FIGS. 5A, 5B, and 5C (540a, 520a, 540b, 520b, 540c, 520c), and all sound energy has been redirected to frequencies easily audible for an individual having an audiogram, plots 300d, similar to that shown in FIG. 3D. The portion of the waveform, 540d, corresponding to the phoneme, tʃ, 524a, is shown in waveform portion 542d, and that of the phoneme, u, 586a, is shown in waveform portion 544d. The spectrogram 520d shows a simple frequency distribution in a narrow range. All frequencies 531d, 532d, 533d, 536d and 537d are below 1000 Hz. Power at frequencies 536d and 537d, representing the phoneme, u, 586a, is pulsed at a frequency of approximately 12 Hz.
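A minimal sketch of how such a recoded waveform could be synthesized is given below. The exact frequencies labeled 531d through 537d are not stated in the text, so the tone sets, durations, and sample rate here are invented stand-ins; only the pulsing of the u tones at approximately 12 Hz follows the description above.

```python
import math

SAMPLE_RATE = 16000  # samples/second; an arbitrary choice for this sketch

def tone_burst(freqs_hz, duration_s, pulse_rate_hz=0.0):
    """Sum of pure tones, optionally gated on and off at pulse_rate_hz
    with a 50% duty cycle, as with the pulsed tones representing u."""
    count = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(count):
        t = i / SAMPLE_RATE
        value = sum(math.sin(2 * math.pi * f * t) for f in freqs_hz)
        if pulse_rate_hz and (t * pulse_rate_hz) % 1.0 >= 0.5:
            value = 0.0  # the "off" half of each pulse period
        samples.append(value / len(freqs_hz))
    return samples

# "chew" recoded: invented sub-1000 Hz tone sets standing in for the
# frequencies 531d-533d (the affricate) and 536d-537d (u, pulsed ~12 Hz).
chew = (tone_burst([300.0, 500.0, 700.0], 0.12)
        + tone_burst([400.0, 600.0], 0.25, pulse_rate_hz=12.0))
```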
FIG. 6, diagram 600, provides an overview of how one embodiment transforms speech 609 (exemplified by the waveform illustrated in FIG. 5A, plots 500a) from a person speaking 608 into simple acoustic symbols 605 (exemplified in the waveform, 540d, illustrated in FIG. 5D, plots 500d) for a user 604 by use of a hearing aid 620. The components of the hearing aid 620 are described below.
The hearing aid 620 includes a microphone 613 to transform speech sound 609 into electronic analog signals, which are then digitized by an analog-to-digital converter 622. The embodiment illustrated here provides a user interface 619 that allows the selection of one of two operating modes depending upon whether or not speech recognition is of primary interest to the user 604 in any given setting. Other embodiments need not provide this option.
When speech recognition is of primary interest to the user 604, the value at decision state 624 will be true. A speech recognition process 630 transforms digitized speech sounds into digital symbols representing phonemes of the speech 609 produced by the person speaking 608. Characters representing phonemes are then exchanged for digital sound representations by a transformation process 650. The transformation process of transformer 650 can be performed by software, hardware, or by combinations of software and hardware.
The transformation process 650 comprises a correspondence from a set of phonemes to a set of sound representations held in a database or other data structure 652 and a way 654 of generating sound representations corresponding to phonemes from the speech recognizer 630. The sound representations held in the database 652 may be wav files, mp3 files, aac files, aiff files, MIDI files, characters representing sounds, characters representing sound qualities, and the like.
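By way of illustration only, a minimal sketch of this lookup, with invented SAMPA-style symbols and file names standing in for the contents of database 652:

```python
# Illustrative sketch of transformation process 650: each phoneme symbol
# produced by the speech recognizer is looked up in a structure standing
# in for database 652. Symbols and file names here are invented.
PHONIC_SYMBOL_DB = {
    "tS": "acoustic_symbol_tS.wav",
    "u": "acoustic_symbol_u.wav",
}

def to_sound_representations(phoneme_symbols):
    """Stand-in for the generating step 654: map each recognized phoneme
    to its stored sound representation, ignoring symbols with no entry."""
    return [PHONIC_SYMBOL_DB[p] for p in phoneme_symbols if p in PHONIC_SYMBOL_DB]

print(to_sound_representations(["tS", "u"]))
# ['acoustic_symbol_tS.wav', 'acoustic_symbol_u.wav']
```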
The sound files are then converted to analog signals by a digital-to-analog process 626, amplified by an amplification process 628, and converted into audible sounds by a speaker 617.
When speech recognition is not of primary interest to the user 604, the value at decision state 624 will be false. The device will function as a digital hearing aid with conventional speech/sound processing functions 615, digital-to-analog signal conversion 626, amplification 628, and sound generation 617.
Although certain embodiments do not relate to the field of speech recognition technology, some embodiments utilize speech recognition. A number of strategies and techniques for building devices capable of recognizing and translating human speech into text are known to those skilled in such arts. For reference and background, a generic diagram of the inner workings of the speech recognizer 630, as might be employed by some embodiments, is provided in FIG. 6.
Within the speech recognizer 630, the digitized acoustic signal may be processed by a digital filter 632 in order to reduce the complexity of the data. Next, a segmentation process 634 parses the data into overlapping temporal intervals called frames. Feature extraction 636 involves computing a spectral representation (somewhat like a spectrogram) of the incoming speech data, followed by identification of acoustically relevant parameters such as energy, spectral features, and pitch information. A decoder 638 can be a search algorithm that may use phone models 644, lexicons 647, and grammatical rules 648 for computing a match between a spoken utterance 609 and a corresponding word string. While phonemes are the smallest phonetic units of speech, more fundamental units, phones, are the basic sounds of speech. Unlike phonemes, phones vary widely from individual to individual, depending on gender, age, accent, etc., and even over time for a single individual depending on sentence structure, word structure, mood, social context, etc. Therefore, phone models 644 may use a database 642 comprising tens of thousands of samples of speech from different individuals. A lexicon 647 contains the phonetic spellings for the words that are expected to be observed by the speech recognizer 630. The lexicon 647 serves as a reference for converting the phone sequences determined by the search algorithm into words. The grammar network or rules 648 defines the recognition task in terms of legitimate word combinations at the level of phrases and sentences. Some speech recognizers employ more sophisticated language models (not shown) that predict the most likely continuation of an utterance on the basis of statistical information about the frequency with which word sequences occur on average in the language. The lexicon 647 and grammar network 648 use a task database 646 comprising words and their various pronunciations, common phrases, grammar, and usage.
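The following toy sketch mirrors the data flow of these stages. It is not a working recognizer: real systems use statistical phone models 644 and a genuine search, and every name and value below is invented.

```python
# A toy stand-in for the recognizer stages described above (632-648),
# showing only the shape of the data flow.
def segment(samples, width=400, step=160):
    """Segmentation 634: parse the signal into overlapping frames."""
    return [samples[i:i + width] for i in range(0, len(samples) - width + 1, step)]

def extract_features(frame):
    """Feature extraction 636 stand-in: one energy value per frame."""
    return sum(s * s for s in frame) / len(frame)

LEXICON = {("tS", "u"): "chew"}  # lexicon 647: phone sequences to words

def decode(phones):
    """Decoder 638 stand-in: exact lexicon match instead of a search."""
    return LEXICON.get(tuple(phones), "<unknown>")

print(decode(["tS", "u"]))  # chew
```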
Referring again to the transformation process 650, because different users 604 may have different hearing requirements and abilities, the phonic symbol database 652 can be created and customized in consideration of each individual user 604. In some embodiments, a computer 660 can be used to aid in the creation of user-specific phonic symbol databases, which are then downloaded to the database 652 of the hearing aid 620. The computer 660 comprises software allowing the input of data (e.g., audiogram) 664 from a user's hearing tests, a user interface 662, and a process or mapper 670 for creating a map (for database 652) to transform symbols representing phonemes into sets of sounds. In one embodiment, the mapper 670 can be performed by hardware circuits.
For some embodiments, each unique phoneme maps to a unique acoustic symbol. Each acoustic symbol comprises a unique set of sounds, each sound being audible to the user, and each acoustic symbol, or sound set, having a distinctive perceived sound. The function of the Assignment Of Sound Sets to Phonemes process 670 in FIG. 6 is to build such a map. Process 670, further described in conjunction with FIG. 7, outlines one method for constructing the map. This and other methods can be performed manually or in an automated fashion using a computer or other computational device such as a tablet.
Acoustic symbols or sound sets may comprise one or more sounds. Sounds may differ in a number of qualities including but not limited to frequency, intensity, duration, overtones (harmonics and partials), attack, decay, sustain, release, tremolo, and vibrato. Although any or all of these differences can be employed, the example process 670 shown in FIG. 7 places a primary emphasis on variations in frequency. Therefore, the example process 670 provides acoustic symbols (sound sets) that are unique with respect to the sound frequencies they comprise. For simplicity, this example will employ only combinations of pure tones (no overtones). Sounds having harmonic content could be employed in a similar fashion.
Referring to FIG. 7, following the start state 705 of process 670, state 710 calls for a value, i, the input intensity limit. The input intensity limit, i, is an intensity or power density level, above which the user should be able to perceive each and every sound present in the set of acoustic symbols. As the value for i is increased, the range of available sounds from which to construct acoustic symbols will increase.
Based upon data 716 from the user's hearing tests, state 715 determines a range of sound frequencies, [fl, fh], such that each sound frequency in the range [fl, fh] is perceptible to the user at power densities at or below i.
Human hearing is receptive to sound frequency changes in an approximately logarithmic fashion. Therefore, for some embodiments, it may be desirable to establish rules constraining the choices of sound frequencies used to construct phonic symbols. An example of such a rule could be that the set of allowed sound frequencies must not contain any two frequencies f1 and f2 such that |(f2 − f1)/(f2 + f1)| ≤ j, where j is a constant between 0.02 and 0.1. To illustrate, if [fl, fh] = [1000 Hz, 2500 Hz] and j = 0.038, there would be 13 allowed frequencies. The closest any two frequencies could be at the low frequency end of the range would be 79 Hz, and the closest any two frequencies could be at the high frequency end of the range would be 183 Hz. More sophisticated rules can be used to factor in non-logarithmic and other components of the human hearing response to sound frequency.
Mathematical functions can be used to generate lists of allowed frequencies. For example, an equation, f(z), where f(z)/f(z+1) = f(z+1)/f(z+2) for all integers, z (z ∈ Z), would generate a set of values evenly separated on a log scale. An example of such an equation is f(z) = (x·y^(z/v))/sec, where v, x, and y are real numbers greater than one. For illustration purposes, if x = 2, y = 10, v = 2, and z ∈ Z, the equation f(z) = (x·y^(z/v))/sec would generate the set { . . . 63 Hz, 200 Hz, 632 Hz, 2 kHz, . . . }. It may be noted that for f(z) = (x·y^(z/v))/sec, values for y that are powers of 2, such as 2, 4, 8, etc., and values for v such as 3, 4, 6, 12, and 24 would yield frequencies separated by intervals approximating naturally occurring overtones and partials. Such sets of frequencies may give rise to sets of acoustic symbols more pleasing and perhaps more discernible to the human ear.
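A short numeric check of this formula, written here in Python purely for illustration:

    # f(z) = x * y**(z / v) Hz: frequencies evenly spaced on a log scale.
    def f(z, x=2.0, y=10.0, v=2.0):
        return x * y ** (z / v)

    # With x=2, y=10, v=2 this reproduces { ... 63 Hz, 200 Hz, 632 Hz, 2 kHz ... }:
    print([round(f(z)) for z in range(3, 7)])   # [63, 200, 632, 2000]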
Proceeding to state 720, process 670 calls for values of v, x, and y. Using the values of v, x, and y from state 720 and integer values for z, state 725 finds all sound frequencies that satisfy the equation and are greater than fl but less than fh. Stated symbolically, state 725 returns the set, F = {f(z) ∈ [fl, fh] : f(z) = (x·y^(z/v))/sec, z ∈ Z}. This equation is provided only as an example.
A database or data structure 731 comprises a list of phonemes that the user is likely to require. A person who uses only the English language might need approximately 39 phonemes, as listed in FIG. 2, table 200. Someone who uses only the Hawaiian language would require approximately 13 phonemes, while a person using two European and two Asian languages might require approximately 200 phonemes.
In this example, each symbol comprises a unique set of sound frequencies. Therefore, the composition of a given symbol either contains a particular sound frequency, or it doesn't. Thus the maximum number of acoustic symbols that can be constructed from n frequencies is 2^n − 1. For example, three different frequencies could yield up to seven unique symbols, while eleven frequencies could yield up to 2047 unique symbols. Conversely, the minimum number, m, of frequencies, f, needed to create a unique symbol for each phoneme, p, of a set of phonemes, P, is at least log2 |P|, where |P| is the number of phonemes, p, in the set of phonemes, P.
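The counting argument can be verified directly; the sketch below, offered only as an illustration, also computes the smallest whole number of frequencies actually sufficient, i.e., the least m with 2^m − 1 ≥ |P|:

    import math

    # Up to 2**n - 1 nonempty frequency subsets (unique symbols) from n frequencies.
    def max_symbols(n):
        return 2 ** n - 1

    print(max_symbols(3), max_symbols(11))   # 7 2047

    # Smallest m with 2**m - 1 >= |P|; log2|P| is the lower bound cited above.
    def min_frequencies(num_phonemes):
        return math.ceil(math.log2(num_phonemes + 1))

    print(min_frequencies(39))   # 6 frequencies suffice for 39 phonemes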
State 730 determines the value of |P| from the user's phoneme database 731, and returns a solution, m, for the above equation. Proceeding to a decision state 735, process 670 determines if the number of solutions, |F|, from state 725 is sufficient to create a unique acoustic symbol, or set of frequencies, for each element, p, in the user's phoneme set, P, from database 731. A value of false at decision state 735 returns the process 670 to the state 710. From there, the value for i may be increased, thereby expanding the interval [fl, fh] determined by state 715. Additionally, or alternatively, values for v, x, and y may be changed at state 720 to increase the number of solutions to the equation, f(z) = (x·y^(z/v))/sec, that are within the range [fl, fh] determined by state 715. Decreasing the value for y, and/or increasing the value for v, will tend to increase the number of solutions to f(z) = (x·y^(z/v))/sec within [fl, fh]. Adjusting the value for x in either direction may or may not alter the number of solutions to f(z) = (x·y^(z/v))/sec within [fl, fh]. When a change in the value of x does result in a change to the number of solutions to f(z) = (x·y^(z/v))/sec within [fl, fh], that number will increase or decrease by one solution (one allowed frequency).
A value of true at the decision state 735 moves process 670 to state 740. State 740 is the first of two states, 740 and 745, that assign acoustic symbols (sets of sounds) to phonemes.
In the first state 740, process 670 assigns to each phoneme a set of one or more allowed sound frequencies. More precisely, each phoneme, p, of the set of phonemes, P, is assigned a set, Q, of frequencies, f, each frequency, f, being an element of the set of allowed frequencies, F. Stated symbolically, state 740 returns a set, M = {(p, Q) : p ∈ P, Q ⊆ F}.
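One way state 740 could be realized, sketched here in Python under the assumption (not stated in the specification) that distinct subsets are handed out smallest first:

    from itertools import chain, combinations

    # Sketch of state 740: give each phoneme a distinct nonempty subset of
    # the allowed frequencies F. Assumes len(phonemes) <= 2**len(F) - 1.
    def assign_symbols(phonemes, F):
        subsets = chain.from_iterable(
            combinations(F, k) for k in range(1, len(F) + 1))
        return {p: set(q) for p, q in zip(phonemes, subsets)}

    M = assign_symbols(["f", "ai", "k"], [300, 317, 336])
    print(M)   # {'f': {300}, 'ai': {317}, 'k': {336}}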
In the second state 745, process 670 assigns additional qualities to be associated with each frequency element, f, of each frequency set, Q, of each element (p, Q) of the set, M. Seven variables are assigned in this example. In other embodiments, a different number of variables can be assigned.
- b “begin” Sound at frequency, f, will start being produced b milliseconds after the end of the preceding acoustic symbol. If there is no preceding acoustic symbol, zero will be used in place of b. The variable, b, may have a value that is positive, negative, or zero.
- e “end” Sound at frequency, f, will stop being produced e milliseconds after the end of the preceding acoustic symbol. If there is no preceding acoustic symbol, sound at frequency, f, will stop being produced e milliseconds after it starts being produced.
- w “power” Power at sound frequency, f, will be w decibels (dB) upon its initiation. 0 dB ≡ 10^−12 watts/m^2.
- d “Δw” Power at sound frequency, f, will smoothly transition toward w + d decibels (dB) and will be w + d dB at the end of its duration. The variable, d, may have a value that is positive, negative, or zero.
- h “Δf” Cycles per second at frequency, f, will smoothly transition from f Hertz (Hz) at its initiation to h·f Hz at the end of its duration. The variable, h, may have any value that is greater than zero; however, values between 0.1 and 10 are most practical.
- r “pulse rate” Power at sound frequency, f, will be reduced by at least 20 dB and restored to w dB r times each second.
- c “duty cycle” The duty cycle variable, c, is the fraction of each pulse cycle during which the power is equal to w; for the remainder of each cycle the power is reduced by at least 20 dB. A c value of 50% would produce a square wave.
At the conclusion of state 745, a data structure 752 is constructed mapping each phoneme to a set of sounds, each sound having eight parameters, f, b, e, w, d, h, r, c, as described above. The completion of the data structure 752 allows progression to the end state 755.
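For concreteness, one possible in-memory form of such an entry, sketched in Python; the field readings follow the parameter descriptions above and are not a prescribed format:

    from dataclasses import dataclass

    # One sound of an acoustic symbol in data structure 752.
    @dataclass
    class Sound:
        f: float   # frequency, Hz
        b: float   # begin delay after the previous symbol ends, ms
        e: float   # end time after the previous symbol ends, ms
        w: float   # initial power, dB
        d: float   # power change over the duration, dB (final power = w + d)
        h: float   # frequency multiplier (final frequency = h * f)
        r: float   # pulse rate, Hz
        c: float   # duty cycle, percent (100 = not pulsed)

    # A phoneme maps to a tuple of Sounds, e.g. the "t" entry used below:
    symbol_t = (Sound(317, 20, 90, 50, -30, 1, 100, 100),
                Sound(566, 20, 90, 50, -30, 1, 100, 100))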
In the above example, the various elements of the acoustic symbols were assembled about each phoneme. The order of these steps is not critical to the practice of certain embodiments described herein, and acoustic symbols may be predefined and later assigned to phonemes. The parameters, f, b, e, w, d, h, r, c are given only as examples.
To illustrate how the process 670 can operate, providing an intensity limit, i, value of 30 dB (10^−9 watts/m^2), and an audiogram 716 similar to that shown in FIG. 3D, plots 300d, would result in state 715 returning an interval of [80 Hz, 800 Hz]. If the values to state 720 are v = 12, x = 200, and y = 2, state 725 would return the set of allowed frequencies, F, {84, 89, 94, 100, 106, 112, 119, 126, 133, 141, 150, 159, 168, 178, 189, 200, 212, 224, 238, 252, 267, 283, 300, 317, 336, 356, 378, 400, 424, 449, 476, 504, 534, 566, 599, 635, 673, 713, 755, 800}. If the user's phoneme set, P, comprises a minimal set of phonemes needed for American English, the number of elements, |P|, in the set, P, will be 39. State 730 would return the value, log2 39, which is approximately 5.3. The number of elements, |F|, in the set, F, is 40. Because 40 ≥ 5.3, the Boolean value at decision state 735 is true, and process 670 would proceed to state 740. To simplify this example, the choice of frequencies will be further restricted to just nine of the 40 allowed frequencies, {300, 317, 336, 400, 424, 449, 504, 534, 566}.
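The 40-element set can be reproduced numerically (a sketch; the interval endpoints are treated as inclusive so that the count matches the list above):

    # f(z) = 200 * 2**(z/12) Hz restricted to [80 Hz, 800 Hz].
    x, y, v = 200.0, 2.0, 12.0
    F = [round(x * y ** (z / v)) for z in range(-200, 201)
         if 80 <= x * y ** (z / v) <= 800]
    print(len(F))          # 40
    print(F[:5], F[-1])    # [84, 89, 94, 100, 106] ... 800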
In one embodiment, the symbols are unique combinations of one or more sound frequencies.
In another embodiment, the symbols are unique frequency intervals. A frequency interval is the absolute value of the log difference of two frequencies. Constructing acoustic symbols as frequency intervals has advantages because most people, including trained musicians, lack the ability to recognize individual sound frequencies but are able to recognize intervals.
In another embodiment, the combination of frequencies and their temporal modifications is unique for each symbol.
In another embodiment, the combination of frequency intervals and the temporal modifications for each frequency is unique for each symbol.
In another embodiment, the combination of frequencies and their timbre, which may comprise overtones (harmonics and partials), tremolo, and vibrato, is unique for each symbol.
In another embodiment, the combination of frequency intervals and the timbre of each frequency is unique for each symbol.
In another embodiment, phonemes are placed into groups of like phonemes (e.g., plosive, fricative, diphthong, monophthong, etc.). Such a placement of phonemes into groups of like phonemes is known to linguists and others skilled in such arts. All phonemes are then assigned a sound frequency (the root), all phonemes being given the same root. Each member of each group of like phonemes is given a second frequency unique to that group. Once all phonemes have been assigned a second sound frequency, the most frequently used phoneme of each group is not assigned additional sound frequencies. Therefore, the most frequently used phonemes are represented by single frequency intervals. One or more additional sound frequencies are then assigned to the remaining phonemes to create a unique combination of frequencies for each phoneme.
In another embodiment, phonemes are placed into groups of like phonemes (e.g., plosive, fricative, diphthong, monophthong, etc.). Such a placement of phonemes into groups of like phonemes is known to linguists and others skilled in such arts. All phonemes are then assigned a sound frequency (the root), all phonemes being given the same root. Each member of each group of like phonemes is given a second frequency unique to that group. Once all phonemes have been assigned a second sound frequency, the most frequently used phoneme of each group is not assigned additional sound frequencies. Therefore, the most frequently used phonemes are represented by single frequency intervals. One or more additional sound frequencies are then assigned to the remaining phonemes to create a unique combination of frequencies for each phoneme. Next, every frequency of every phoneme in one group of like phonemes is shifted up or down by multiplying every frequency of every phoneme in that group by a constant. Additional groups of like phonemes may or may not be adjusted in a similar fashion using the same constant or a different constant.
In another embodiment, the acoustic symbol's frequencies, intervals, temporal modifiers, and/or timbre may be selected to resemble features of the phoneme from which it was derived. For example, the fricative, s, might be assigned a higher frequency or frequencies than the vowel, ɜ; plosives might all have the modifier, g=2; voiced phonemes might have the modifier, b=2; and unvoiced phonemes might have the modifier, b=1. Frequencies, intervals, temporal modifiers, timbre, and other qualities may be applied methodically, arbitrarily, or randomly.
FIG. 8 illustrates an example data structure 752 as might be returned by state 745, FIG. 7. The data structure 752 contains examples of the use of the sound qualities listed above. Not all of the sound qualities in the example are required to practice certain embodiments described herein, and other qualities not listed here may be employed.
In this example, the data structure comprises ordered sets, each ordered set matching a phoneme, p, to one or more sounds. Each sound is defined by an ordered set comprising values for the variables f, b, e, w, d, h, r, c. To facilitate cross-referencing, the last two digits of each callout or reference label in FIG. 8 are the same as the last two digits of corresponding phonemes in FIGS. 1, 2, 5, 9, 11, 15, and 16. The time scale, as well as the nature of the symbols, does, however, vary from figure to figure.
Referring to FIGS. 8 and 9, the word “jousting” will be used in the next example. The IPA representation of the word, “jousting”, is dʒaʊstiŋ and comprises seven phonetic symbols, dʒ, a, ʊ, s, t, i, and ŋ. However, the monophthong, “a”, 996 (FIG. 9), is not used as a sole vowel sound in American English words or syllables, but exists only as part of the diphthongs, ai and aʊ. Therefore, in English, dʒaʊstiŋ, 920, actually comprises just six phonemes, dʒ, aʊ, s, t, i, and ŋ.
When state 654, FIG. 6, searches the data structure 652 or 752, FIG. 8, it finds the ordered sets;
(dʒ,(449,20,90,50,0,1,100,100),(504,20,90,50,0,1,90,50))
(aʊ,(317,0,150,50,0,1,84,67),(400,0,150,50,0,0.75,84,67))
(s,(534,0,100,50,0,1,100,100),(566,0,100,50,0,1,100,100))
(t,(317,20,90,50,−30,1,100,100),(566,20,90,50,−30,1,100,100))
(i,(336,0,100,50,0,1,100,67),(566,0,100,50,0,1,100,67))
(ŋ,(336,0,100,50,0,1,100,100),(449,0,100,50,0,1,100,100),(534,0,100,60,0,1,126,80))
(FIG. 8 callouts 834, 894, 844, 804, 880, 867, respectively)
and returns the sets of sound definitions;
[(449,20,90,50,0,1,100,100),(504,20,90,50,0,1,90,50)]
[(317,0,150,50,0,1,84,67),(400,0,150,50,0,0.75,84,67)]
[(534,0,100,50,0,1,100,100),(566,0,100,50,0,1,100,100)]
[(317,20,90,50,−30,1,100,100),(566,20,90,50,−30,1,100,100)]
[(336,0,100,50,0,1,100,67),(566,0,100,50,0,1,100,67)]
[(336,0,100,50,0,1,100,100),(449,0,100,50,0,1,100,100),(534,0,100,60,0,1,126,80)]
which are converted into 630 milliseconds of analog signal by the digital-to-analog state 626, amplified by the analog amplifier 628, and converted into sound 605 by the speaker 617.
FIG. 9 provides a schematic representation 900 of a spectrogram 999 of the sound 605 emitted by the speaker 617 (FIG. 6), after transformation 650 of the word “jousting” via the assignment state 654, drawing upon the data structure 652 or 752 (FIG. 8). To show detail, the vertical axis spans 300 Hz to 600 Hz rather than 0 Hz to 5000 Hz as in FIGS. 1 and 5. Also, power is depicted through line thickness rather than color intensity, thicker lines representing greater power.
As stated above, the IPA representation of the English word, “jousting”, 910, is dʒaʊstiŋ, 920, and comprises seven phonetic symbols, dʒ, 934, a, 996, ʊ, 997, s, 944, t, 904, i, 980, and ŋ, 967. In American English the phonemes are dʒ, 934, aʊ, 994, s, 944, t, 904, i, 980, and ŋ, 967.
The first phoneme, dʒ, 934, is represented by an acoustic symbol defined by an ordered set of two ordered sets of eight elements, each defining a sound component of the acoustic symbol, [(449,20,90,50,0,1,100,100),(504,20,90,50,0,1,90,50)]. This definition calls for two sounds, 925 and 923. The first sound, 925, defined by the ordered set (449,20,90,50,0,1,100,100), has a constant frequency, h=1, of 449 Hz, f=449, a constant power, d=0, of 50 dB, w=50, starting after a 20 ms, b=20, delay, 902 and 922, from the end of the previous acoustic symbol, ending 90 ms, e=90, after the end of the previous acoustic symbol, and not pulsed, c=100. The value for r, pulse rate, is 100, but may be any positive value in this instance because a 100% duty cycle, c=100, obviates pulse rate. Read in the same manner, the second ordered set, (504,20,90,50,0,1,90,50), defines a sound, 923, having a constant frequency of 504 Hz, a constant power of 50 dB, starting 20 ms after the end of the previous acoustic symbol, ending 90 ms after the end of the previous acoustic symbol, and pulsed at a frequency of 90 Hz, r=90, with a 50% duty cycle, c=50.
The next ordered set of ordered sets, [(317,0,150,50,0,1,84,67),(400,0,150,50,0,0.75,84,67)], defines an acoustic symbol comprising two sounds, 929 and 928, representing aʊ, 994. The first sound, 929, defined by the ordered set, (317,0,150,50,0,1,84,67), has a constant frequency of 317 Hz, a constant power of 50 dB, starting immediately, b=0, after the end of the previous acoustic symbol, 923 and 925, ending 150 ms after the end of the previous acoustic symbol, 923 and 925, and pulsed at a frequency of 84 Hz, r=84, with a 67% duty cycle, c=67. The second ordered set, (400,0,150,50,0,0.75,84,67), defines a sound, 928, having an initial frequency of 400 Hz, f=400, a final frequency of 300 Hz, h=0.75, 400·0.75=300, a constant power of 50 dB, starting 0 ms after the end of the previous acoustic symbol, ending 150 ms after the end of the previous acoustic symbol, and pulsed at a frequency of 84 Hz, r=84, with a 67% duty cycle, c=67.
The next phoneme, s, 944, is represented by two un-pulsed sounds, one at 534 Hz, 927, and the other at 566 Hz, 926, each having a constant power of 50 dB and lasting 100 ms.
The phoneme, t, 904, is represented by two un-pulsed sounds, 933 and 932, starting 20 ms, 908 and 931, after the acoustic symbol representing the phoneme, s. Initial power for each is 50 dB, w=50, and final power for each is 20 dB, d=−30, 50−30=20.
The phoneme, i, 980, is represented by two pulsed sounds, 937 and 936.
The final acoustic symbol, defined by the ordered set of ordered sets, [(336,0,100,50,0,1,100,100),(449,0,100,50,0,1,100,100),(534,0,100,60,0,1,126,80)], comprises three sounds. One sound, 948, is pulsed, and two sounds, 947 and 946, are not. Also, the sound at 534 Hz, 948, is 10 dB louder than the other two sounds, 947 and 946.
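To make the walk-through concrete, the following Python sketch renders a sequence of such ordered sets into a waveform under the parameter readings used above (final power w + d dB, final frequency h·f Hz). The 44.1 kHz sample rate and the dB-to-amplitude scaling are assumptions for illustration; the specification prescribes neither.

    import numpy as np

    SR = 44100  # sample rate, Hz (an assumption; none is specified)

    def render(symbols):
        """symbols: list of acoustic symbols, each a list of
        (f, b, e, w, d, h, r, c) tuples as defined above."""
        tracks, t0 = [], 0.0            # t0 = end of the previous symbol, s
        for symbol in symbols:
            symbol_end = t0
            for (f, b, e, w, d, h, r, c) in symbol:
                start, stop = t0 + b / 1000.0, t0 + e / 1000.0
                n = int(round((stop - start) * SR))
                u = np.arange(n) / max(n, 1)        # 0..1 through the sound
                freq = f * (1 + (h - 1) * u)        # sweep from f to h*f
                amp = 10 ** ((w + d * u) / 20.0) * 1e-4
                tone = amp * np.sin(2 * np.pi * np.cumsum(freq) / SR)
                if c < 100:                         # pulse at r Hz, c% duty
                    t = np.arange(n) / SR
                    tone = tone * (((t * r) % 1.0) < c / 100.0)
                pad = np.zeros(int(round(start * SR)))
                tracks.append(np.concatenate([pad, tone]))
                symbol_end = max(symbol_end, stop)
            t0 = symbol_end
        mix = np.zeros(max(len(tr) for tr in tracks))
        for tr in tracks:
            mix[:len(tr)] += tr
        return mix

    # The six acoustic symbols for "jousting" from the data structure above:
    jousting = [
        [(449,20,90,50,0,1,100,100), (504,20,90,50,0,1,90,50)],
        [(317,0,150,50,0,1,84,67), (400,0,150,50,0,0.75,84,67)],
        [(534,0,100,50,0,1,100,100), (566,0,100,50,0,1,100,100)],
        [(317,20,90,50,-30,1,100,100), (566,20,90,50,-30,1,100,100)],
        [(336,0,100,50,0,1,100,67), (566,0,100,50,0,1,100,67)],
        [(336,0,100,50,0,1,100,100), (449,0,100,50,0,1,100,100),
         (534,0,100,60,0,1,126,80)],
    ]
    print(round(len(render(jousting)) / SR, 2))   # 0.63 seconds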
FIG. 10a illustrates a configuration 1000a of a cochlear implant hearing aid device and FIG. 10b shows a schematic representation 1000b of this device. A microphone 1013a, 1013b transforms speech and other sounds into electrical signals that are conveyed to a sound and speech processor 1020a, 1020b via an electrical cable 1023a, 1023b. The sound and speech processor unit 1020a, 1020b also houses a power supply for external components 1013a, 1013b, 1031a, 1031b and implanted components 1045a, 1045b of the cochlear implant hearing aid device. The sound and speech processor 1020a, 1020b can contain bandpass filters to divide the acoustic waveforms into channels and convert the sounds into electrical signals. These signals go back through a cable 1024a, 1024b to a transmitter 1031a, 1031b attached to the head by a magnet, not shown, within a surgically implanted receiver 1045a, 1045b.
The transmitter 1031a, 1031b sends the signals and power from the sound and speech processing unit 1020a, 1020b via a combined signal and power transmission 1033b (and similarly for 1000a) across the skin 1036a, 1036b to the implanted receiver 1045a, 1045b. Using the power from the combined signal and power transmission 1033b, the receiver 1045a, 1045b decodes the signal component of the transmission 1033b and sends corresponding electrical waveforms through a cable 1049a, 1049b to an electrode array 1088a, 1088b surgically placed in the user's cochlea 1082a, 1082b. The electrical waveforms stimulate local nerve tissue, creating the perception of sound. Individual electrodes, not shown, are positioned at different locations along the array 1088a, 1088b, allowing the device to deliver different stimuli representing sounds having different pitches, and importantly, having the sensation of different pitch to the user.
The effectiveness of a cochlear prosthesis depends to a large extent on the stimulation algorithm used to generate the waveforms sent to the individual electrodes of the electrode array 1088a, 1088b. Stimulation algorithms are generally based on two approaches. The first places an emphasis on temporal aspects of speech and involves transforming the speech signal into different signals that are transmitted directly to the concerned regions of the cochlea. The second places an emphasis on spectral speech qualities and involves extracting features, such as formants, and formatting them according to the cochlea's tonotopy (the spatial arrangement of where sound is perceived).
Certain embodiments apply to novel stimulation algorithms for a cochlear prosthesis. These algorithms substitute some or all temporal and spectral features of natural speech for a small number (such as in a range of 10 to 500) of symbols comprising the waveforms to be sent to the electrode array 1088a, 1088b.
In FIG. 11, the English word, “chew”, 1105a, is used to compare and contrast certain embodiments described herein with conventional stimulation algorithms. In FIG. 11A, plots 1100a provide a spectrogram 1120a and waveform 1140a for the word “chew”, 1105a.
For a person with normal hearing, the cochlea provides the brain with detailed information about the speech signal shown by waveform 1140a. Within the cochlea, the original sound waveform 1140a is lost in the process of being transformed into nerve impulses. These nerve impulses actually contain little information describing the actual waveform 1140a, but instead convey detailed information about power as a function of time and frequency. Therefore, a spectrogram such as spectrogram 1120a, but not a waveform, is a convenient representation of the information conveyed through the auditory nerve to the auditory cortex of the brain.
A cochlear prosthesis (see FIG. 10) can restore a level of hearing to a person whose cochlea is not functional, but who still has a functional auditory cortex and auditory nerve innervating the cochlea. The cochlear prosthesis electrically stimulates nervous tissue in the cochlea, resulting in nerve impulses traveling along the auditory nerve to the auditory cortex of the brain. Although hearing can often be successfully restored to deafened individuals, speech recognition often remains challenging.
Limitations in speech perception arise from limitations of the implanted portion of the prosthesis. Normally, the cochlea divides the speech signal into several thousand overlapping frequency bands that the auditory cortex uses to extract speech information. Prior cochlear implants are able to provide a speech signal divided into just a dozen or so frequency bands. As a result, much of the fine spectral detail is lost as many frequency bands are blended into a few frequency bands. The auditory cortex is thereby deprived of much of the speech information it normally uses to identify features of spoken language.
In FIG. 11B, plots 1100b schematically illustrate the spectral resolution and detail of a speech signal shown by a spectrogram 1120b generated by a conventional cochlear prosthesis. Gross temporal and spectral features are similar to those of natural speech shown by the spectrogram 1120a. However, spectrally important portions 1121b, 1122b, 1123b of the phonemes tʃ, 1124a, and u, 1186a, lack the fine detail seen in the natural speech example shown at portions 1121a, 1122a, 1123a. To ameliorate this problem, stimulation algorithms are used to help convey speech information through the limited number of frequency bands or channels. Stimulation algorithms are generally based on two approaches. The first places an emphasis on temporal aspects of speech and involves transforming the speech signal into different signals that are transmitted directly to the concerned regions of the cochlea. The second places an emphasis on spectral speech qualities and involves extracting features, such as formants, and formatting them according to the cochlea's tonotopy (the spatial arrangement of where sound is perceived). Current stimulation algorithms do help, but are unable to provide most users with speech recognition comparable to that of those with normal hearing.
Certain embodiments apply to novel stimulation algorithms for cochlear prostheses. These algorithms substitute some or all temporal and spectral features of natural speech for a small number (approximately 20 to 100) of symbols comprising the waveforms to be sent to the electrode array 1088a, 1088b as shown in FIG. 10. The symbols themselves may represent phonemes, sets of phonemes, or types of phonemes.
In FIG. 11C, plots 1100c schematically illustrate a speech signal shown by spectrogram 1120c as might result from recoding the word “chew”, 1105a, using a phoneme substitution method of certain embodiments described herein. The symbols may, but do not need to, preserve some spectral and temporal features of the natural speech signal shown by the spectrogram 1120a. The conventional stimulation algorithm shown by plots 1100b approximates spectral features 1121a of the phoneme, tʃ, 1124a, and spectral features 1122a, 1123a of the phoneme, u, 1186a, in corresponding areas 1121b, 1122b, 1123b. In contrast, a speech signal generated using a stimulation algorithm employing phoneme substitution does not approximate spectral features 1121a of the phoneme, tʃ, 1124a, and spectral features 1122a, 1123a of the phoneme, u, 1186a, in its corresponding areas 1172c, 1174c, 1176c, 1178c. An advantage of certain embodiments described herein is that, in principle, the speech signal will not vary from speaker to speaker and location to location. Another advantage is that the speech signal is no longer more complicated than the language-based information it contains. Both features result in speech signals that are easier to learn and recognize than those generated using current state-of-the-art stimulation algorithms.
FIG. 12 provides an overview diagram 1200 of how one embodiment transforms speech 1209 (exemplified by the waveform 1140a and spectrogram 1120a of FIG. 11A) from a person speaking 1208 into simple symbols (exemplified by the speech signal illustrated in FIG. 11C by spectrogram 1120c) that are delivered to an electrode array of a user's cochlear implant 1288. The transformation is performed by external components of a cochlear implant system, such as sound and speech processing unit 1220.
The sound and speech processing unit or processor 1220 includes a microphone 1213 to transform speech sounds 1209 into electronic analog signals that are then digitized by an analog-to-digital converter 1222. The embodiment illustrated here provides a user interface 1219 that allows the selection of one of at least two operating modes, depending upon whether or not speech recognition is of primary interest to the user in any given setting. Other embodiments need not provide this option.
When speech recognition is of primary interest to the user, the value at decision state 1224 will be true. A speech recognition process 1230 transforms digitized speech sounds into digital characters representing phonemes of the speech 1209 produced by the person speaking 1208. Characters representing phonemes are then exchanged for digital representations of stimulation patterns by a transformation process 1250. The transformation process or transformer 1250 can be performed by software, by hardware, or by combinations of software and hardware.
The transformation process 1250 comprises a correspondence from a set of phonemes to stimulation patterns held in a database or other data structure 1252 and a process 1254 for generating a sequence of representations of stimulation patterns corresponding to a sequence of phonemes from the speech recognizer 1230.
The digital representations are sent to a data and power transmitter 1231 and 1232 attached to the user's head by a magnet, not shown, within a surgically implanted receiver 1245.
The transmitter 1231 and 1232 sends the signals and power from the sound and speech processing unit 1220 via a combined signal and power transmission 1233 across the skin 1236 to the implanted receiver 1245. Using the power from the combined signal and power transmission 1233, the receiver 1245 decodes the signal component of the transmission 1233 and sends corresponding electrical waveforms through a cable 1249 to the electrode array 1288 surgically placed in the user's cochlea 1282.
When speech recognition is not of primary interest to the user, the value at decision state 1224 will be false, and the device will function using other stimulation algorithms 1215.
Although certain embodiments do not relate to the field of speech recognition technology, some embodiments utilize speech recognition. A number of strategies and techniques for building devices capable of recognizing and translating human speech into text are known to those skilled in such arts. For reference, FIG. 6 provides a generic diagram 600 of the inner workings of a speech recognizer 630 as might be employed by some embodiments.
Because different users may have different requirements and abilities, the database 1252 of representations of stimulation patterns can be created and customized in consideration of each individual user. In some embodiments, a computer 1260 can be used to aid in the creation of user databases, which are then downloaded to the database memory 1252 of the sound and speech processing unit 1220.
The computer 1260 comprises software allowing the input of data 1264 from a user's hearing tests, a user interface 1262, and a process or mapper 1270 for creating a map to be stored in the database 1252 to transform symbols representing phonemes into digital representations of stimulation patterns.
The process 1270 for creating the map to transform symbols representing phonemes into digital representations of stimulation patterns is similar to the process 670 shown in FIG. 6 and defined in FIG. 7. The process 1270 can be considered a modified version of process 670 in which the interval [fl, fh] is replaced with a set, G, of functional electrodes, {gn, gn+1, gn+2, . . . }, of the electrode array 1288. The set, F, then becomes a subset of G, its elements representing electrodes rather than frequencies.
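By analogy with the earlier sketch of state 740, the electrode variant might look like the following; this is an illustrative sketch only, and the electrode numbers are arbitrary:

    from itertools import chain, combinations

    # Sketch of process 1270: symbols become distinct nonempty subsets of
    # the functional electrodes G rather than of allowed frequencies.
    def assign_electrode_symbols(phonemes, G):
        subsets = chain.from_iterable(
            combinations(G, k) for k in range(1, len(G) + 1))
        return {p: set(q) for p, q in zip(phonemes, subsets)}

    print(assign_electrode_symbols(["t", "n", "s"], [3, 4, 5, 6]))
    # {'t': {3}, 'n': {4}, 's': {5}}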
FIG. 13 is a diagram 1300 showing an example structure of potential electrode assignments 1352, such as stored in database 1252, for one embodiment in which the user wishes to comprehend American English speech. The upper portion of the figure shows the middle and inner ear 1360, including the cochlea 1365. Within the cochlea 1365 is an implanted electrode array 1320 of a cochlear prosthesis.
For illustration purposes, it is assumed that the electrode array 1320 comprises 16 electrodes, nine of which, 1303, 1304, 1305, 1306, 1307, 1308, 1309, 1310, 1311, are functional and able to produce unique sound sensations for the user. In this example, 39 American English phonemes are mapped using the example data structure 1352 (stored in 1252, FIG. 12) to stimulation patterns (symbols) comprising electrical waveforms sent to different combinations of one, two, or three electrodes.
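Nine functional electrodes comfortably cover 39 phonemes under this scheme; a short check, offered only as an illustration:

    from math import comb

    # Symbols built from one, two, or three of nine electrodes:
    print(comb(9, 1) + comb(9, 2) + comb(9, 3))   # 129, ample for 39 phonemes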
For simplicity, other qualities used in the preceding examples of hearing aids are not contained in the structure 1352. However, analogs of each are envisioned for embodiments relating to hearing prostheses, including cochlear implants. These analogs and others include, but are not limited to, pauses between some phonemes, duration, intensity, low frequency pulsations or higher frequency signals, stimulus rates, and shifts in the values of such parameters as a function of time or context.
The symbols themselves may represent phonemes, sets of phonemes, portions of phonemes, or types of phonemes.
In one embodiment, the symbols are unique combinations of stimuli at one or more electrodes. In another embodiment, the symbols are unique physical spacings of stimuli. In another embodiment, the combination of electrodes used and other qualities, including, but not limited to, pauses between some phonemes, duration, intensity, low frequency pulsations or higher frequency signals, stimulus rates, and shifts in the values of such parameters as a function of time, is unique for each symbol.
In another embodiment, phonemes are placed into groups of like phonemes (e.g., plosive, fricative, diphthong, monophthong, etc.). Such a placement of phonemes into groups of like phonemes is known to linguists and others skilled in such arts. All phonemes are then assigned a common electrode or channel (the root), all phonemes being given the same root. Each member of each group of like phonemes is assigned a second channel unique to that group. Once all phonemes have been assigned a second channel, the most frequently used phoneme of each group is not assigned additional channels. Therefore, the most frequently used phonemes are represented by unique combinations of two channels. One or more additional channels are then assigned to the remaining phonemes to create a unique combination of channels for each phoneme.
In another embodiment, phonemes are placed into groups of like phonemes (e.g., plosive, fricative, diphthong, monophthong, etc.). Such a placement of phonemes into groups of like phonemes is known to linguists and others skilled in such arts. All phonemes are then assigned a common electrode or channel (the root), all phonemes being given the same root. Each member of each group of like phonemes is assigned a second channel unique to that group. Once all phonemes have been assigned a second channel, the most frequently used phoneme of each group is not assigned additional channels. Therefore, the most frequently used phonemes are represented by unique combinations of two channels. One or more additional channels are then assigned to the remaining phonemes to create a unique combination of channels for each phoneme. Next, every channel assignment for every phoneme in one group of like phonemes is shifted up or down along the electrode array. Additional groups of like phonemes may or may not be adjusted in a similar fashion.
The concept of phoneme substitution can be applied to sensory tissues other than the cochlea. These can include, but are not limited to, pressure, pain, stretch, temperature, photo, and olfactory receptor tissue, as well as innervating nerve tissue and corresponding central nervous system tissue.
For example, phonic symbols may be delivered to sensory tissue of the skin by a number of means, including electrical and mechanical means. FIGS. 14A and 14B provide schematic examples 1400a and 1400b of skin interfaces 1410a and 1410b of some embodiments.
FIG. 14A, example 1400a, shows an interface 1410a fitted about the hand and wrist of a person's left arm 1450a, for example. The interface 1410a comprises six stimulators 1401a, 1402a, 1403a, 1404a, 1405a, 1406a positioned against the person's skin 1440a. In this example, the stimulators have been placed so as to ensure that no two are close to being positioned over the same receptive field, the smallest area of skin capable of allowing the recognition of two different but similar stimuli. In one embodiment, the stimulators 1405a and 1406a are located under the wrist of the user.
FIG. 14B, example 1400b, shows an interface 1410b fitted about the wrist of a person's left arm 1450b, for example. The interface 1410b comprises six stimulators 1401b, 1402b, 1403b, 1404b, 1405b, 1406b positioned against the person's skin 1440b, some close enough to each other to be on the outer threshold of occupying the same receptive field. In one embodiment, the stimulators 1405b and 1406b are located under the wrist of the user.
Creating a correspondence mapping phonemes to sets of tactile stimuli, or symbols, is not fundamentally different from mapping phonemes to acoustic symbols of hearing aid embodiments or to electrical stimulation patterns of cochlear prosthesis embodiments. FIG. 15, table 1500, provides three examples for mapping English phonemes to tactile symbols suitable for use with the tactile interfaces 1410a and 1410b presented in FIG. 14. To better illustrate concepts not yet described, each of the three maps uses the same channel assignments, and each stimulator generates a vibratory motion perpendicular to the skin.
These maps were created using methods previously described but not illustrated. The first step for all three examples is to place phonemes into groups of like phonemes (e.g., plosive, fricative, diphthong, monophthong, etc.). These groups are known to linguists and others skilled in such arts.
For example 1, each group is then assigned a channel, for example, plosive=1, nasal=2, fricative=3, approximant=4, monophthong=5, diphthong=6. Affricates, being both plosive and fricative like, are assigned both channels 1 and 3. No further channel assignments are made to the most frequently used member of each set: t, n, s, and the most frequently used approximant, monophthong, and diphthong. These assignments can be made by linguists and others skilled in such arts. Additional channels are assigned to other phonemes, creating a unique combination of channel assignments corresponding to each (see the sketch following this example). An advantage of this approach is that training can begin with the use of only six symbols, each comprising a vibration at a single location on the skin.
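A sketch of this group-then-distinguish assignment, in Python; the groups, frequency ranking, and channel numbers below are illustrative placeholders, not the actual FIG. 15 assignments:

    from itertools import chain, combinations

    def assign_tactile(groups):
        """groups: {group channel (root): members, most frequent first}."""
        mapping, used = {}, set()
        channels = sorted(groups)
        for root, members in groups.items():
            extras = [c for c in channels if c != root]
            candidates = (frozenset({root}) | frozenset(q)
                          for q in chain.from_iterable(
                              combinations(extras, k)
                              for k in range(len(extras) + 1)))
            for p in members:   # the most frequent member keeps {root} alone
                symbol = next(s for s in candidates if s not in used)
                used.add(symbol)
                mapping[p] = symbol
        return mapping

    groups = {1: ["t", "d", "k"], 3: ["s", "z"], 5: ["i", "e"]}
    print({p: sorted(s) for p, s in assign_tactile(groups).items()})
    # {'t': [1], 'd': [1, 3], 'k': [1, 5], 's': [3], 'z': [3, 5],
    #  'i': [5], 'e': [1, 3, 5]}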
In example 2, the channel assignments for each phoneme are the same as in example 1. However, for each tactile symbol representing a phoneme, the channel common to all members of its group of related phonemes is vibrated at a different frequency than the other channels comprising that symbol. These stimulators are indicated by boxes in the column for example 2. The advantage of this approach is that phonemes that sound most alike will feel most alike, thereby enhancing the learning process and reducing errors.
In example 3, even numbered stimulators vibrate at one frequency, and odd numbered stimulators vibrate at a different frequency. Odd numbered channels are highlighted with a box for better visualization of the figure. The advantage of this approach is that adjacent stimulators have a different feel, and therefore may be placed in closer proximity to one another while maintaining the ability to create a sensation unique to each channel. A logical extension of this approach is to use only three stimulators, each having three states: off, on at frequency 1, and on at frequency 2.
For simplicity, other qualities used in the preceding examples of hearing aids and implants are not contained in the three data structures shown in FIG. 15. However, analogs of each quality are envisioned for embodiments relating to skin interfaces. These analogs and others include, but are not limited to, pauses between some phonemes, duration, intensity, low frequency pulsations or higher frequency signals, stimulus rates, and shifts in the values of such parameters as a function of time or context.
In one embodiment, the symbols are unique combinations of stimuli at one or more electrodes. In another embodiment, the symbols are unique physical spacings of stimuli. In another embodiment, the combination of electrodes used and other qualities, including, but not limited to, pauses between some phonemes, duration, intensity, low frequency pulsations or higher frequency signals, stimulus rates, and shifts in the values of such parameters as a function of time, is unique for each symbol.
In another embodiment, phonemes are placed into groups of like phonemes (e.g., plosive, fricative, diphthong, monophthong, etc.). Such a placement of phonemes into groups of like phonemes is known to linguists and others skilled in such arts. All phonemes are then assigned a common electrode or channel (the root), all phonemes being given the same root. Each member of each group of like phonemes is assigned a second channel unique to that group. Once all phonemes have been assigned a second channel, the most frequently used phoneme of each group is not assigned additional channels. Therefore, the most frequently used phonemes are represented by unique combinations of two channels. One or more additional channels are then assigned to the remaining phonemes to create a unique combination of channels for each phoneme.
In another embodiment, phonemes are placed into groups of like phonemes (e.g., plosive, fricative, diphthong, monophthong, etc.). Such a placement of phonemes into groups of like phonemes is known to linguists and others skilled in such arts. All phonemes are then assigned a common electrode or channel (the root), all phonemes being given the same root. Each member of each group of like phonemes is assigned a second channel unique to that group. Once all phonemes have been assigned a second channel, the most frequently used phoneme of each group is not assigned additional channels. Therefore, the most frequently used phonemes are represented by unique combinations of two channels. One or more additional channels are then assigned to the remaining phonemes to create a unique combination of channels for each phoneme. Next, every channel assignment for every phoneme in one group of like phonemes is shifted up or down along the electrode array. Additional groups of like phonemes may or may not be adjusted in a similar fashion.
FIG. 16A, via plots 1600a, shows the word “chew”, 1605a; its component phonemes, tʃ, 1624a, and u, 1686a; a waveform 1635a obtained when “chew” is spoken; “chew” written in machine shorthand, 1645a; “chew” as it appears in acoustic symbols generated by the phoneme substitution method described herein, 1655a; “chew” as it might be encoded by phoneme substitution and then transmitted to electrodes in a cochlear implant, 1665a; “chew” as it might be transmitted to electrodes on a skin interface, 1675a; and “chew” as it might be perceived in the form of its component phonemes by the user.
FIG. 16B, diagram 1600b, illustrates embodiments as transmitters 1605b, 1635b, 1645b and receivers 1655b, 1665b, 1675b. A computer 1605b is shown transmitting the typed word “chew” to a hearing aid 1655b, cochlear implant 1665b, or skin interface 1675b. The waveform produced by a person speaking, 1635b, is shown being transmitted to 1655b, 1665b, and 1675b. The shorthand machine 1645b is shown transmitting a signal to 1655b, 1665b, and 1675b.
There are embodiments that do not require mapping of phonemes to unique symbols or sets of stimuli. Simply mapping each phoneme to a symbol or set of stimuli unique to it and similar phonemes may be helpful to hearing impaired individuals. For example, many people with hearing impairments have some proficiency in lip reading, or speech reading. Others may be relatively proficient in vowel recognition, but have a difficult time with the recognition of consonants. The phonetic structure of the five words, two, do, sue, zoo, and new, is tu, du, su, zu, and nu, respectively. These five words differ appreciably only in their first phoneme, a consonant. However, all five words appear the same on a speaker's lips. Simply knowing which type of phoneme the initial consonant is would be enough information to disambiguate these words for an individual with relatively good low frequency hearing or proficiency in speech reading. In fact, simply knowing if the initial consonant is a plosive, fricative, and/or voiced is sufficient to discriminate between each word in the list.
CONCLUSION
While specific blocks, sections, devices, functions and modules may have been set forth above, a skilled technologist will realize that there are many ways to partition the system, and that there are many parts, components, modules or functions that may be substituted for those listed above.
While the above detailed description has shown, described, and pointed out the fundamental novel features of the invention as applied to various embodiments, it will be understood that various omissions and substitutions and changes in the form and details of the system illustrated may be made by those skilled in the art, without departing from the intent of the invention.