US20090024183A1 - Somatic, auditory and cochlear communication system and method - Google Patents

Somatic, auditory and cochlear communication system and method

Info

Publication number
US20090024183A1
Authority
US
United States
Prior art keywords
sequence
phonemes
sound
sounds
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/997,902
Inventor
Mark I. Fitchmun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Somatek Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US 11/997,902 (US20090024183A1, en)
Publication of US20090024183A1
Assigned to Somatek. Assignment of assignors interest (see document for details). Assignors: FITCHMUN, MARK I.
Current legal status: Abandoned

Abstract

Methods and devices (620, 1220, 1410) to deliver a tactile speech analog to a person's skin (404, 604, 1082, 1440), providing a silent, invisible, hands-free, eyes-free, and ears-free way to receive and directly comprehend electronic communications (1600b). Embodiments include an alternative to hearing aids that will enable people with hearing loss to better understand speech. A device (1410), worn like a watch or bracelet, supplements a person's remaining hearing to help identify and disambiguate those sounds he or she cannot hear properly. Embodiments for hearing aids (620) and hearing prosthetics (1220) are also described.

Claims (110)

1. A method of transforming a sequence of symbols representing phonemes into a sequence of arrays of nerve stimuli, the method comprising:
establishing a correlation between each member of a phoneme symbol set and an assignment of one or more channels of a multi-electrode array;
accessing a sequence of phonetic symbols corresponding to a message; and
activating a sequence of one or more electrodes corresponding to each phonetic symbol of the message identified by the correlation.
2. The method of claim 1, wherein the phonetic symbols belong to one of SAMPA, Kirshenbaum, or IPA Unicode digital character sets.
3. The method of claim 1, wherein the symbols belong to the cmudict phoneme set.
4. The method of claim 1, wherein the correlation is a one-to-one correlation.
5. The method of claim 1, wherein activating a sequence of one or more electrodes includes an energizing period for each electrode, wherein the energizing period comprises a begin time parameter and an end time parameter.
6. The method of claim 5, wherein the begin time parameter is representative of a time from an end of components of a previous energizing period of a particular electrode.
7. The method of claim 1, wherein the electrodes are associated with a hearing prosthesis.
8. The method of claim 1, wherein the hearing prosthesis comprises a cochlear implant.
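Claims 1-8 describe transforming a phonetic transcription into per-phoneme activations of a multi-electrode array. The following is a minimal Python sketch of that mapping; the symbol set, channel assignments, and timing values are hypothetical examples chosen for illustration, not values taken from the specification.

```python
# A minimal, illustrative sketch of the phoneme-to-electrode mapping of claims 1-8.
# The symbol set, channel assignments, and timing values below are hypothetical.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ChannelActivation:
    channel: int        # electrode channel of the multi-electrode array
    begin_ms: float     # begin time parameter (offset from end of previous period)
    end_ms: float       # end time parameter

# Correlation between phoneme symbols and one or more channels (claim 1),
# using cmudict-style symbols (claim 3); assignments are made up for the example.
PHONEME_TO_CHANNELS: Dict[str, List[int]] = {
    "AA": [1],
    "IY": [2],
    "S":  [3, 4],   # a phoneme may map to more than one channel
    "T":  [5],
}

def activate_sequence(phonemes: List[str],
                      duration_ms: float = 40.0,
                      gap_ms: float = 5.0) -> List[List[ChannelActivation]]:
    """Turn a phonetic transcription of a message into per-phoneme channel activations."""
    arrays: List[List[ChannelActivation]] = []
    for symbol in phonemes:
        channels = PHONEME_TO_CHANNELS.get(symbol, [])
        arrays.append([ChannelActivation(ch, gap_ms, gap_ms + duration_ms)
                       for ch in channels])
    return arrays

# Example: the message "see" transcribed as S IY
print(activate_sequence(["S", "IY"]))
```

In practice such a correlation table would presumably be tailored to the usable channels of a particular implant or stimulator array.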
9. A method of processing a sequence of spoken words into a sequence of sounds, the method comprising:
converting a sequence of spoken words into electrical signals;
digitizing the electrical signals representative of the speech sounds;
transforming the speech sounds into digital symbols representing corresponding phonemes;
transforming the symbols representing the corresponding phonemes into sound representations; and
transforming the sound representations into sounds.
10. The method of claim 9, wherein transforming the symbols representing the phonemes into sound representations comprises:
accessing a data structure configured to map phonemes to sound representations;
locating the symbols representing the corresponding phonemes in the data structure; and
mapping the phonemes to sound representations.
11. The method of claim 10, additionally comprising creating the data structure, comprising:
identifying phonemes corresponding to a language used by a user of the method;
establishing a set of allowed sound frequencies;
generating a correspondence mapping the identified phonemes to the set of allowed sound frequencies such that each constituent phoneme of the identified phonemes is assigned a subset of one or more frequencies from the set of allowed sound frequencies; and
mapping each constituent phoneme of the identified phonemes to a set of one or more sounds.
12. The method of claim 11, wherein establishing a set of allowed sound frequencies comprises selecting a set of sound frequencies that are in a hearing range of the user.
13. The method of claim 11, wherein each sound of the set of one or more sounds comprises an initial frequency parameter.
14. The method of claim 11, wherein each sound of the set of one or more sounds comprises a begin time parameter.
15. The method of claim 14, wherein the begin time parameter is representative of a time from an end of components of a previous sound representation.
16. The method of claim 11, wherein each sound of the set of one or more sounds comprises an end time parameter.
17. The method of claim 11, wherein each sound of the set of one or more sounds comprises a power parameter.
18. The method of claim 11, wherein each sound of the set of one or more sounds comprises a power shift parameter.
19. The method of claim 11, wherein each sound of the set of one or more sounds comprises a frequency shift parameter.
20. The method of claim 11, wherein each sound of the set of one or more sounds comprises a pulse rate parameter.
21. The method of claim 11, wherein each sound of the set of one or more sounds comprises a duty cycle parameter.
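Claims 9-21 recite a data structure that maps phonemes to sound representations carrying the listed parameters (initial frequency, begin and end times, power, power shift, frequency shift, pulse rate, duty cycle). A hedged sketch of such a structure, with assumed field names and placeholder values, might look like this:

```python
# Hedged sketch of the phoneme-to-sound-representation mapping in claims 9-21.
# Field names and numeric values are illustrative assumptions only.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SoundRep:
    initial_freq_hz: float        # initial frequency parameter (claim 13)
    begin_ms: float               # begin time, relative to end of previous sound (claims 14-15)
    end_ms: float                 # end time parameter (claim 16)
    power_db: float               # power parameter (claim 17)
    power_shift_db: float = 0.0   # power shift parameter (claim 18)
    freq_shift_hz: float = 0.0    # frequency shift parameter (claim 19)
    pulse_rate_hz: float = 0.0    # pulse rate parameter (claim 20)
    duty_cycle: float = 1.0       # duty cycle parameter (claim 21)

# Data structure mapping phonemes to sets of one or more sound representations,
# restricted to frequencies inside the user's hearing range (claims 11-12).
PHONEME_TO_SOUNDS: Dict[str, List[SoundRep]] = {
    "S":  [SoundRep(1000.0, 0.0, 60.0, 65.0, pulse_rate_hz=20.0, duty_cycle=0.5)],
    "IY": [SoundRep(500.0, 0.0, 80.0, 60.0, freq_shift_hz=50.0)],
}

def transform(phonemes: List[str]) -> List[SoundRep]:
    """Map a recognized phoneme sequence to the sound representations to be played."""
    out: List[SoundRep] = []
    for p in phonemes:
        out.extend(PHONEME_TO_SOUNDS.get(p, []))
    return out

print(transform(["S", "IY"]))
```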
22. A method of processing a sequence of spoken words into a sequence of nerve stimuli, the method comprising:
converting a sequence of spoken words into electrical signals;
digitizing the electrical signals representative of the speech sounds;
transforming the speech sounds into digital symbols representing corresponding phonemes;
transforming the symbols representing the corresponding phonemes into stimulus definitions; and
transforming the stimulus definitions into a sequence of nerve stimuli.
23. The method of claim 22, wherein the nerve stimuli are associated with a hearing prosthesis.
24. The method of claim 23, wherein the hearing prosthesis comprises a cochlear implant.
25. The method of claim 22, wherein the nerve stimuli are associated with a skin interface.
26. The method of claim 25, wherein the skin interface is located on the wrist and/or hand of the user.
27. The method of claim 25, wherein the skin interface is located on the ankle and/or foot of the user.
28. The method of claim 22, wherein the nerve stimuli are mechanical.
29. The method of claim 22, wherein the nerve stimuli are electrical.
30. The method of claim 22, wherein transforming the symbols representing the phonemes into stimulus definitions comprises:
accessing a data structure configured to map phonemes to stimulus definitions;
locating the symbols representing the corresponding phonemes in the data structure; and
mapping the phonemes to stimulus definitions.
31. The method of claim 22, wherein the stimulus definitions comprise sets of one or more stimuli.
32. The method of claim 31, wherein the sets of one or more stimuli correspond to one or more locations on the skin.
33. The method of claim 31, wherein the sets of one or more stimuli correspond to one or more locations in the cochlea.
34. The method of claim 31, wherein each stimulus of the sets of one or more stimuli comprises a begin time parameter.
35. The method of claim 34, wherein the begin time parameter is representative of a time from an end of components of a previous stimulus definition.
36. The method of claim 31, wherein each stimulus of the sets of one or more stimuli comprises an end time parameter.
37. A method of transforming a sequence of symbols representing phonemes into a sequence of arrays of nerve stimuli, the method comprising:
establishing a correlation between each member of a phoneme symbol set and an assignment of one or more channels of a multi-stimulator array;
accessing a sequence of phonetic symbols corresponding to a message; and
activating a sequence of one or more stimulators corresponding to each phonetic symbol of the message identified by the correlation.
38. The method of claim 37, wherein the stimulators are vibrators affixed to the user's skin.
39. The method of claim 37, wherein the phonetic symbols belong to one of SAMPA, Kirshenbaum, or IPA Unicode digital character sets.
40. The method of claim 37, wherein the symbols belong to the cmudict phoneme set.
41. The method of claim 37, wherein the correlation is a one-to-one correlation.
42. The method of claim 37, wherein activating a sequence of one or more stimulators includes an energizing period for each stimulator, wherein the energizing period comprises a begin time parameter and an end time parameter.
43. The method of claim 42, wherein the begin time parameter is representative of a time from an end of components of a previous energizing period of a particular stimulator.
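Claims 22-43 apply the same phoneme-driven mapping to stimulus definitions delivered through a skin interface or a cochlear implant. The sketch below assumes a small wrist-worn vibrotactile array; the stimulator locations and timings are illustrative assumptions only.

```python
# Illustrative sketch of the phoneme-to-stimulus-definition mapping of claims 22-43,
# here for a wrist-worn array of vibrotactile stimulators (claims 25-26, 28, 38).
# Locations, counts, and timings are assumptions for the example only.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Stimulus:
    location: int      # stimulator index on the skin interface (or cochlear channel)
    begin_ms: float    # begin time, measured from the end of the previous definition
    end_ms: float      # end time parameter

# Stimulus definitions: each phoneme maps to a set of one or more stimuli (claim 31).
PHONEME_TO_STIMULI: Dict[str, List[Stimulus]] = {
    "S":  [Stimulus(0, 5.0, 45.0)],
    "IY": [Stimulus(1, 5.0, 45.0), Stimulus(2, 5.0, 45.0)],
    "T":  [Stimulus(3, 5.0, 25.0)],
}

def to_stimuli(phonemes: List[str]) -> List[List[Stimulus]]:
    """Transform recognized phonemes into the sequence of stimulus sets to deliver."""
    return [PHONEME_TO_STIMULI.get(p, []) for p in phonemes]

# A hypothetical driver would then energize vibrator `location` for each stimulus.
print(to_stimuli(["S", "IY", "T"]))
```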
44. A method of training a user, the method comprising:
providing a set of somatic stimulations to a user, wherein the set of somatic stimulations is indicative of a plurality of phonemes, and wherein the phonemes are based at least in part on an audio communication;
providing the audio communication concurrently to the user with the plurality of phonemes; and
selectively modifying at least portions of the audio communication to the user during the providing of the set of somatic stimulations to the user.
45. The method of claim 44, wherein selectively modifying at least portions of the audio communication comprises reducing an audio property of the audio communication.
46. The method of claim 45, wherein the audio property comprises a volume of the audio.
47. The method of claim 45, wherein the audio property comprises omitting selected words from the audio.
48. The method of claim 45, wherein the audio property comprises attenuating a volume of selected words from the audio.
49. The method of claim 45, wherein the audio property comprises omitting selected phonemes from the audio.
50. The method of claim 45, wherein the audio property comprises attenuating a volume of selected phonemes from the audio.
51. The method of claim 45, wherein the audio property comprises omitting selected sound frequencies from the audio.
52. The method of claim 45, wherein the audio property comprises attenuating a volume of selected sound frequencies from the audio.
53. A method of training a user, the method comprising:
providing a set of somatic stimulations to a user, wherein the set of somatic stimulations is indicative of a plurality of phonemes, and wherein the phonemes are based at least in part on an audiovisual communication;
providing the audiovisual communication concurrently to the user with the plurality of phonemes; and
selectively modifying at least portions of the audiovisual communication to the user during the providing of the set of somatic stimulations to the user.
54. The method of claim 53, wherein selectively modifying at least portions of the audiovisual communication comprises reducing an audio or video property of the audiovisual communication.
55. The method of claim 54, wherein the audio property comprises a volume of the audio.
56. The method of claim 54, wherein the audio property comprises omitting selected words from the audio.
57. The method of claim 54, wherein the audio property comprises attenuating a volume of selected words from the audio.
58. The method of claim 54, wherein the audio property comprises omitting selected phonemes from the audio.
59. The method of claim 54, wherein the audio property comprises attenuating a volume of selected phonemes from the audio.
60. The method of claim 54, wherein the audio property comprises omitting selected sound frequencies from the audio.
61. The method of claim 54, wherein the audio property comprises attenuating a volume of selected sound frequencies from the audio.
62. The method of claim 54, wherein the video property comprises a presence or brightness of the video.
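Claims 44-62 describe training by presenting somatic stimulation together with an audio or audiovisual communication while selectively reducing parts of the audio. A rough sketch of one possible attenuation schedule follows; the gain values and the choice of which phonemes to attenuate are assumptions made for the example.

```python
# A rough sketch of the training idea in claims 44-62: somatic stimulation is delivered
# together with the audio, and selected portions of the audio are reduced so the user
# comes to rely on the tactile channel. The gain schedule and the notion of "selected
# phonemes" below are assumptions for illustration.

from typing import Iterable, List, Tuple

def train_step(phoneme_audio: Iterable[Tuple[str, bytes]],
               attenuated_phonemes: set,
               gain: float) -> List[Tuple[str, float]]:
    """
    For each (phoneme, audio chunk) pair, return the playback gain to apply while the
    matching somatic stimulation is presented. Phonemes in `attenuated_phonemes`
    are played at the reduced `gain`; everything else passes through unmodified.
    """
    plan = []
    for phoneme, _chunk in phoneme_audio:
        plan.append((phoneme, gain if phoneme in attenuated_phonemes else 1.0))
    return plan

# Example: attenuate the volume of selected phonemes (claim 50) to 20%.
print(train_step([("S", b""), ("IY", b""), ("T", b"")], {"S", "T"}, gain=0.2))
```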
63. A system for processing a sequence of spoken words into a sequence of sounds, the system comprising:
a first converter configured to digitize electrical signals representative of a sequence of spoken words;
a speech recognizer configured to receive the digitized electrical signals and generate a sequence of phonemes representative of the sequence of spoken words;
a mapper configured to assign sound sets to phonemes utilizing an audiogram so as to generate a map;
a transformer configured to receive the sequence of phonemes representative of the sequence of spoken words and the map and to generate a sequence of sound representations corresponding to the sequence of phonemes; and
a second converter configured to convert the sequence of sound representations into a sequence of audible sounds.
64. The system of claim 63, wherein the map is a user-specific map based on a particular user's audiogram.
65. A system for processing a sequence of spoken words into a sequence of sounds, the system comprising:
a first converter configured to digitize electrical signals representative of a sequence of spoken words;
a speech recognizer configured to receive the digitized electrical signals and generate a sequence of phonemes representative of the sequence of spoken words;
a data structure comprising sound sets mapped to phonemes;
a transformer configured to receive the sequence of phonemes representative of the sequence of spoken words and the data structure and to generate a sequence of sound representations corresponding to the sequence of phonemes; and
a second converter configured to convert the sequence of sound representations into a sequence of audible sounds.
66. The system of claim 65, wherein the data structure is generated utilizing a user's audiogram.
67. A system for processing a sequence of spoken words into a sequence of nerve stimuli, the system comprising:
a converter configured to digitize electrical signals representative of a sequence of spoken words;
a speech recognizer configured to receive the digitized electrical signals and generate a sequence of phonemes representative of the sequence of spoken words;
a mapper configured to assign nerve stimuli arrays to phonemes utilizing an audiogram so as to generate a map; and
a transformer configured to receive the sequence of phonemes representative of the sequence of spoken words and the map and to generate a sequence of stimulus definitions corresponding to the sequence of phonemes.
68. The system of claim 67, additionally comprising:
a receiver configured to convert the sequence of stimulus definitions into electrical waveforms; and
an electrode array configured to receive the electrical waveforms.
69. The system of claim 68, wherein the electrode array is surgically placed in the user's cochlea.
70. The system of claim 67, wherein the sequence of stimulus definitions comprises digital representations of nerve stimulation patterns.
71. A system for processing a sequence of spoken words into a sequence of nerve stimuli, the system comprising:
a converter configured to digitize electrical signals representative of a sequence of spoken words;
a speech recognizer configured to receive the digitized electrical signals and generate a sequence of phonemes representative of the sequence of spoken words;
a data structure comprising nerve stimuli arrays mapped to phonemes; and
a transformer configured to receive the sequence of phonemes representative of the sequence of spoken words and the data structure and to generate a sequence of stimulus definitions corresponding to the sequence of phonemes.
72. The system of claim 71, wherein the data structure is generated utilizing a user's audiogram.
73. The system of claim 71, additionally comprising:
a receiver configured to convert the sequence of stimulus definitions into electrical waveforms; and
an electrode array configured to receive the electrical waveforms.
74. The system of claim 73, wherein the electrode array is surgically placed in the user's cochlea.
75. The system of claim 71, wherein the sequence of stimulus definitions comprises digital representations of nerve stimulation patterns.
76. A system for processing a sequence of spoken words into a sequence of nerve stimuli, the system comprising:
a processor configured to generate a sequence of phonemes representative of a sequence of spoken words and to transform the sequence of phonemes using a data structure comprising nerve stimuli arrays mapped to phonemes to produce a sequence of stimulus definitions corresponding to the sequence of phonemes; and
an electrode array configured to play the sequence of stimulus definitions.
77. The system of claim 76, wherein the data structure is generated utilizing a user's audiogram.
78. The system of claim 76, wherein the electrode array comprises a converter configured to convert the sequence of stimulus definitions into electrical waveforms.
79. The system of claim 76, wherein the electrode array is surgically placed in the user's cochlea.
80. The system of claim 76, wherein the electrode array comprises a plurality of mechanical stimulators.
81. The system of claim 76, wherein the electrode array comprises a plurality of electrodes.
82. The system of claim 76, wherein the sequence of stimulus definitions comprises digital representations of nerve stimulation patterns.
83. A system for processing a sequence of spoken words into a sequence of sounds, the system comprising:
a processor configured to generate a sequence of phonemes representative of the sequence of spoken words and to transform the sequence of phonemes using a data structure comprising sound sets mapped to phonemes to produce sound representations corresponding to the sequence of phonemes; and
a converter configured to convert the sound representations into audible sounds.
84. The system of claim 83, wherein the data structure is generated utilizing a user's audiogram.
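The system claims (for example claims 63-66 and 83-84) recite a pipeline of converter, speech recognizer, phoneme-to-sound data structure, transformer, and output converter. A skeleton of such a pipeline, with placeholder components standing in for each stage, might be wired as follows:

```python
# Skeleton of the processing pipeline recited in the system claims (e.g., claims 63-66
# and 83-84): digitize speech, recognize phonemes, map them through a user-specific
# data structure, and convert the result to audible sounds. All component
# implementations here are placeholders, not the patent's own algorithms.

from typing import Callable, Dict, List

class Pipeline:
    def __init__(self,
                 recognizer: Callable[[bytes], List[str]],
                 phoneme_map: Dict[str, List[dict]],
                 synthesizer: Callable[[List[dict]], bytes]):
        self.recognizer = recognizer          # speech recognizer: samples -> phonemes
        self.phoneme_map = phoneme_map        # data structure built from the audiogram
        self.synthesizer = synthesizer        # converts sound representations to audio

    def process(self, digitized_speech: bytes) -> bytes:
        phonemes = self.recognizer(digitized_speech)
        sound_reps: List[dict] = []
        for p in phonemes:
            sound_reps.extend(self.phoneme_map.get(p, []))
        return self.synthesizer(sound_reps)

# Example wiring with trivial stand-ins for each stage.
pipeline = Pipeline(
    recognizer=lambda samples: ["S", "IY"],
    phoneme_map={"S": [{"freq": 1000.0}], "IY": [{"freq": 500.0}]},
    synthesizer=lambda reps: bytes(len(reps)),
)
print(pipeline.process(b"\x00\x01"))
```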
85. A system for processing a sequence of text into a sequence of sounds, the system comprising:
a first converter configured to receive a sequence of text and generate a sequence of phonemes representative of the sequence of text;
a mapper configured to assign sound sets to phonemes utilizing a hearing audiogram so as to generate a map;
a transformer configured to receive the sequence of phonemes representative of the sequence of text and the map and to generate sound representations corresponding to the sequence of phonemes; and
a second converter configured to convert the sound representations into audible sounds.
86. The system of claim 85, wherein the hearing audiogram is representative of a normal human hearing range.
87. The system of claim 85, wherein the hearing audiogram is representative of a hearing range for a specific individual.
88. A system for processing a sequence of text into a sequence of sounds, the system comprising:
a text converter configured to receive a sequence of text and generate a sequence of phonemes representative of the sequence of text;
a data structure comprising sound sets mapped to phonemes;
a transformer configured to receive the sequence of phonemes representative of the sequence of text and the data structure and to generate sound representations corresponding to the sequence of phonemes; and
a second converter configured to convert the sound representations into audible sounds.
89. The system of claim 88, wherein the data structure is generated utilizing a user's audiogram.
90. A system for processing a sequence of text into a sequence of nerve stimuli, the system comprising:
a converter configured to receive a sequence of text and generate a sequence of phonemes representative of the sequence of text;
a data structure comprising nerve stimuli arrays mapped to phonemes; and
a transformer configured to receive the sequence of phonemes representative of the sequence of text and the data structure and to generate a sequence of stimulus definitions corresponding to the sequence of phonemes.
91. The system of claim 90, wherein the data structure is generated utilizing a user's abilities.
92. The system of claim 91, wherein the user's abilities comprise useable channels of a cochlear implant of the user.
93. The system of claim 91, wherein the user's abilities comprise the ability to distinguish between two or more unique stimuli.
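Claims 85-95 replace the speech front end with a text converter that produces the phoneme sequence. A toy grapheme-to-phoneme lookup is sketched below; the tiny pronunciation dictionary is a stand-in (a full system might draw on a resource such as cmudict, which the earlier claims mention as a phoneme set).

```python
# Hedged sketch of the text front end in claims 85-95: a text converter produces a
# phoneme sequence, which is then mapped exactly as in the speech-driven systems above.
# The pronunciation entries are hypothetical stand-ins for a real dictionary.

from typing import Dict, List

# Hypothetical cmudict-style pronunciation entries.
PRONUNCIATIONS: Dict[str, List[str]] = {
    "see": ["S", "IY"],
    "tea": ["T", "IY"],
}

def text_to_phonemes(text: str) -> List[str]:
    """Convert a sequence of text into digital symbols representing phonemes."""
    phonemes: List[str] = []
    for word in text.lower().split():
        phonemes.extend(PRONUNCIATIONS.get(word, []))  # unknown words skipped here
    return phonemes

print(text_to_phonemes("see tea"))  # ['S', 'IY', 'T', 'IY']
```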
94. A method of processing a sequence of text into a sequence of sounds, the method comprising:
transforming the sequence of text into digital symbols representing corresponding phonemes;
transforming the symbols representing the corresponding phonemes into sound representations; and
transforming the sound representations into a sequence of sounds.
95. A method of processing a sequence of text into a sequence of nerve stimuli, the method comprising:
transforming the sequence of text into digital symbols representing corresponding phonemes;
transforming the symbols representing the corresponding phonemes into stimulus definitions; and
transforming the stimulus definitions into a sequence of nerve stimuli.
96. The method of claim 95, wherein the nerve stimuli are associated with a cochlear implant.
97. The method of claim 95, wherein the nerve stimuli are associated with a skin interface.
98. The method of claim 97, wherein the skin interface is located on the wrist and/or hand of the user.
99. The method of claim 95, wherein transforming the symbols representing the phonemes into stimulus definitions comprises:
accessing a data structure configured to map phonemes to stimulus definitions;
locating the symbols representing the corresponding phonemes in the data structure; and
mapping the phonemes to stimulus definitions.
100. A method of creating a data structure configured to transform symbols representing phonemes into sound representations, the method comprising:
identifying phonemes corresponding to a language utilized by a user;
establishing a set of allowed sound frequencies;
generating a correspondence mapping the identified phonemes to the set of allowed sound frequencies such that each constituent phoneme of the identified phonemes is assigned a subset of one or more frequencies from the set of allowed sound frequencies; and
mapping each constituent phoneme of the identified phonemes to a set of one or more sounds.
101. The method of claim 100, wherein establishing a set of allowed sound frequencies comprises selecting a set of sound frequencies that are in a hearing range of the user.
102. The method of claim 100, wherein each sound of the set of one or more sounds comprises an initial frequency parameter.
103. The method of claim 100, wherein each sound of the set of one or more sounds comprises a begin time parameter.
104. The method of claim 103, wherein the begin time parameter is representative of a time from an end of components of a previous sound representation.
105. The method of claim 100, wherein each sound of the set of one or more sounds comprises an end time parameter.
106. The method of claim 100, wherein each sound of the set of one or more sounds comprises a power parameter.
107. The method of claim 100, wherein each sound of the set of one or more sounds comprises a power shift parameter.
108. The method of claim 100, wherein each sound of the set of one or more sounds comprises a frequency shift parameter.
109. The method of claim 100, wherein each sound of the set of one or more sounds comprises a pulse rate parameter.
110. The method of claim 100, wherein each sound of the set of one or more sounds comprises a duty cycle parameter.
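Claims 100-110 cover building the phoneme-to-sound data structure itself: establish a set of allowed frequencies (for example, frequencies the user can still hear) and assign each phoneme of the user's language a subset of them. The sketch below uses an assumed audiogram threshold rule and a simple round-robin assignment purely for illustration.

```python
# Illustrative construction of the data structure in claims 100-110: pick a set of
# allowed frequencies from the user's hearing range (e.g., derived from an audiogram)
# and assign each phoneme of the user's language a subset of them. The threshold rule
# and round-robin assignment are assumptions made for this sketch.

from typing import Dict, List

def build_phoneme_map(phonemes: List[str],
                      audiogram: Dict[float, float],
                      max_loss_db: float = 40.0) -> Dict[str, List[float]]:
    """
    audiogram maps test frequency (Hz) -> hearing loss (dB HL).
    Frequencies with loss at or below `max_loss_db` form the allowed set; each phoneme
    is then assigned one allowed frequency in round-robin fashion.
    """
    allowed = sorted(f for f, loss in audiogram.items() if loss <= max_loss_db)
    if not allowed:
        raise ValueError("no usable frequencies in the user's hearing range")
    return {p: [allowed[i % len(allowed)]] for i, p in enumerate(phonemes)}

# Example with a hypothetical audiogram showing high-frequency loss.
audiogram = {250.0: 10, 500.0: 15, 1000.0: 20, 2000.0: 45, 4000.0: 70}
print(build_phoneme_map(["AA", "IY", "S", "T"], audiogram))
```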
US 11/997,902 | Priority date: 2005-08-03 | Filing date: 2006-08-03 | Somatic, auditory and cochlear communication system and method | Abandoned | US20090024183A1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US 11/997,902 (US20090024183A1, en) | 2005-08-03 | 2006-08-03 | Somatic, auditory and cochlear communication system and method

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
US 70521905 P | 2005-08-03 | 2005-08-03
PCT/US2006/030437 (WO2007019307A2, en) | 2005-08-03 | 2006-08-03 | Somatic, auditory and cochlear communication system and method
US 11/997,902 (US20090024183A1, en) | 2005-08-03 | 2006-08-03 | Somatic, auditory and cochlear communication system and method

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/US2006/030437 (A-371-Of-International; WO2007019307A2, en) | Somatic, auditory and cochlear communication system and method | 2005-08-03 | 2006-08-03

Related Child Applications (1)

Application Number | Title | Priority Date | Filing Date
US 14/489,406 (Continuation; US10540989B2, en) | Somatic, auditory and cochlear communication system and method | 2005-08-03 | 2014-09-17

Publications (1)

Publication Number | Publication Date
US20090024183A1 (en) | 2009-01-22

Family

ID=37714372

Family Applications (5)

Application Number | Status | Publication | Priority Date | Filing Date | Title
US 11/997,902 | Abandoned | US20090024183A1 (en) | 2005-08-03 | 2006-08-03 | Somatic, auditory and cochlear communication system and method
US 14/489,406 | Active 2028-12-19 | US10540989B2 (en) | 2005-08-03 | 2014-09-17 | Somatic, auditory and cochlear communication system and method
US 16/746,592 | Abandoned | US20200152223A1 (en) | 2005-08-03 | 2020-01-17 | Somatic, auditory and cochlear communication system and method
US 17/657,581 | Active | US11878169B2 (en) | 2005-08-03 | 2022-03-31 | Somatic, auditory and cochlear communication system and method
US 18/418,747 | Abandoned | US20240157143A1 (en) | 2005-08-03 | 2024-01-22 | Somatic, auditory and cochlear communication system and method

Family Applications After (4)

Application Number | Status | Publication | Priority Date | Filing Date | Title
US 14/489,406 | Active 2028-12-19 | US10540989B2 (en) | 2005-08-03 | 2014-09-17 | Somatic, auditory and cochlear communication system and method
US 16/746,592 | Abandoned | US20200152223A1 (en) | 2005-08-03 | 2020-01-17 | Somatic, auditory and cochlear communication system and method
US 17/657,581 | Active | US11878169B2 (en) | 2005-08-03 | 2022-03-31 | Somatic, auditory and cochlear communication system and method
US 18/418,747 | Abandoned | US20240157143A1 (en) | 2005-08-03 | 2024-01-22 | Somatic, auditory and cochlear communication system and method

Country Status (2)

Country | Link
US (5) | US20090024183A1 (en)
WO (1) | WO2007019307A2 (en)

Also Published As

Publication Number | Publication Date
WO2007019307A9 (en) | 2007-04-05
US10540989B2 (en) | 2020-01-21
US20220370803A1 (en) | 2022-11-24
US20200152223A1 (en) | 2020-05-14
WO2007019307A2 (en) | 2007-02-15
US20240157143A1 (en) | 2024-05-16
US20150194166A1 (en) | 2015-07-09
WO2007019307A3 (en) | 2007-08-02
US11878169B2 (en) | 2024-01-23

Legal Events

Date | Code | Title | Description

AS | Assignment

Owner name: SOMATEK, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: FITCHMUN, MARK I.; REEL/FRAME: 025612/0589

Effective date: 20101218

STCB | Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

