
Auto focus, auto focus within regions, and auto placement of beamformed microphone lobes with inhibition functionality

Info

Publication number
WO2020191380A1
Authority
WO
WIPO (PCT)
Prior art keywords
lobe
coordinates
sound activity
activity
new sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2020/024063
Other languages
French (fr)
Inventor
Dusan Veselinovic
Mathew T. ABRAHAM
Michael Ryan LESTER
Avinash K. VAIDYA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shure Acquisition Holdings Inc
Original Assignee
Shure Acquisition Holdings Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shure Acquisition Holdings Inc
Priority to CN202080036963.0A (CN113841421B)
Priority to CN202410766380.3A (CN118803494B)
Priority to JP2021556732A (JP7572964B2)
Priority to EP20719861.5A (EP3942845A1)
Publication of WO2020191380A1
Anticipated expiration
Legal status: Ceased


Abstract

Array microphone systems and methods that can automatically focus and/or place beamformed lobes in response to detected sound activity are provided. The automatic focus and/or placement of the beamformed lobes can be inhibited based on a remote far end audio signal. The quality of the coverage of audio sources in an environment may be improved by ensuring that beamformed lobes are optimally picking up the audio sources even if those sources have moved and changed locations.

Description

AUTO FOCUS, AUTO FOCUS WITHIN REGIONS, AND AUTO PLACEMENT OF BEAMFORMED MICROPHONE LOBES WITH INHIBITION
FUNCTIONALITY
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application No. 62/821,800, filed March 21, 2019, U.S. Provisional Patent Application No. 62/855,187, filed May 31, 2019, and U.S. Provisional Patent Application No. 62/971,648, filed February 7, 2020. The contents of each application are fully incorporated by reference in their entirety herein.
TECHNICAL FIELD
[0002] This application generally relates to an array microphone having automatic focus and placement of beamformed microphone lobes. In particular, this application relates to an array microphone that adjusts the focus and placement of beamformed microphone lobes based on the detection of sound activity after the lobes have been initially placed, and allows inhibition of the adjustment of the focus and placement of the beamformed microphone lobes based on a remote far end audio signal.
BACKGROUND
[0003] Conferencing environments, such as conference rooms, boardrooms, video conferencing applications, and the like, can involve the use of microphones for capturing sound from various audio sources active in such environments. Such audio sources may include humans speaking, for example. The captured sound may be disseminated to a local audience in the environment through amplified speakers (for sound reinforcement), and/or to others remote from the environment (such as via a telecast and/or a webcast). The types of microphones and their placement in a particular environment may depend on the locations of the audio sources, physical space requirements, aesthetics, room layout, and/or other considerations. For example, in some environments, the microphones may be placed on a table or lectern near the audio sources. In other environments, the microphones may be mounted overhead to capture the sound from the entire room, for example. Accordingly, microphones are available in a variety of sizes, form factors, mounting options, and wiring options to suit the needs of particular environments.
[0004] Traditional microphones typically have fixed polar patterns and few manually selectable settings. To capture sound in a conferencing environment, many traditional microphones can be used at once to capture the audio sources within the environment. However, traditional microphones tend to capture unwanted audio as well, such as room noise, echoes, and other undesirable audio elements. The capturing of these unwanted noises is exacerbated by the use of many microphones.
[0005] Array microphones having multiple microphone elements can provide benefits such as steerable coverage or pick up patterns (having one or more lobes), which allow the microphones to focus on the desired audio sources and reject unwanted sounds such as room noise. The ability to steer audio pick up patterns provides the benefit of being able to be less precise in microphone placement, and in this way, array microphones are more forgiving. Moreover, array microphones provide the ability to pick up multiple audio sources with one array microphone or unit, again due to the ability to steer the pickup patterns.
[0006] However, the position of lobes of a pickup pattern of an array microphone may not be optimal in certain environments and situations. For example, an audio source that is initially detected by a lobe may move and change locations. In this situation, the lobe may not optimally pick up the audio source at its new location.
[0007] Accordingly, there is an opportunity for an array microphone that addresses these concerns. More particularly, there is an opportunity for an array microphone that automatically focuses and/or places beamformed microphone lobes based on the detection of sound activity after the lobes have been initially placed, while also being able to inhibit the focus and/or placement of the beamformed microphone lobes based on a remote far end audio signal, which can result in higher quality sound capture and more optimal coverage of environments.
SUMMARY
[0008] The invention is intended to solve the above-noted problems by providing array microphone systems and methods that are designed to, among other things: (1) enable automatic focusing of beamformed lobes of an array microphone in response to the detection of sound activity, after the lobes have been initially placed; (2) enable automatic placement of beamformed lobes of an array microphone in response to the detection of sound activity; (3) enable automatic focusing of beamformed lobes of an array microphone within lobe regions in response to the detection of sound activity, after the lobes have been initially placed; and (4) inhibit or restrict the automatic focusing or automatic placement of beamformed lobes of an array microphone, based on activity of a remote far end audio signal.
[0009] In an embodiment, beamformed lobes that have been positioned at initial coordinates may be focused by moving the lobes to new coordinates in the general vicinity of the initial coordinates, when new sound activity is detected at the new coordinates.
[0010] In another embodiment, beamformed lobes may be placed or moved to new coordinates, when new sound activity is detected at the new coordinates.
[0011] In a further embodiment, beamformed lobes that have been positioned at initial coordinates may be focused by moving the lobes, but confined within lobe regions, when new sound activity is detected at the new coordinates.
[0012] In another embodiment, the movement or placement of beamformed lobes may be inhibited or restricted, when the activity of a remote far end audio signal exceeds a predetermined threshold.
[0013] These and other embodiments, and various permutations and aspects, will become apparent and be more fully understood from the following detailed description and accompanying drawings, which set forth illustrative embodiments that are indicative of the various ways in which the principles of the invention may be employed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] FIG. 1 is a schematic diagram of an array microphone with automatic focusing of beamformed lobes in response to the detection of sound activity, in accordance with some embodiments.
[0015] FIG. 2 is a flowchart illustrating operations for automatic focusing of beamformed lobes, in accordance with some embodiments.
[0016] FIG. 3 is a flowchart illustrating operations for automatic focusing of beamformed lobes that utilizes a cost functional, in accordance with some embodiments.
[0017] FIG. 4 is a schematic diagram of an array microphone with automatic placement of beamformed lobes of an array microphone in response to the detection of sound activity, in accordance with some embodiments.
[0018] FIG. 5 is a flowchart illustrating operations for automatic placement of beamformed lobes, in accordance with some embodiments.
[0019] FIG. 6 is a flowchart illustrating operations for finding lobes near detected sound activity, in accordance with some embodiments.
[0020] FIG. 7 is an exemplary depiction of an array microphone with beamformed lobes within lobe regions, in accordance with some embodiments.
[0021] FIG. 8 is a flowchart illustrating operations for automatic focusing of beamformed lobes within lobe regions, in accordance with some embodiments.
[0022] FIG. 9 is a flowchart illustrating operations for determining whether detected sound activity is within a look radius of a lobe, in accordance with some embodiments.
[0023] FIG. 10 is an exemplary depiction of an array microphone with beamformed lobes within lobe regions and showing a look radius of a lobe, in accordance with some embodiments.
[0024] FIG. 11 is a flowchart illustrating operations for determining movement of a lobe within a move radius of a lobe, in accordance with some embodiments.
[0025] FIG. 12 is an exemplary depiction of an array microphone with beamformed lobes within lobe regions and showing a move radius of a lobe, in accordance with some embodiments.
[0026] FIG. 13 is an exemplary depiction of an array microphone with beamformed lobes within lobe regions and showing boundary cushions between lobe regions, in accordance with some embodiments.
[0027] FIG. 14 is a flowchart illustrating operations for limiting movement of a lobe based on boundary cushions between lobe regions, in accordance with some embodiments.
[0028] FIG. 15 is an exemplary depiction of an array microphone with beamformed lobes within regions and showing the movement of a lobe based on boundary cushions between regions, in accordance with some embodiments.
[0029] FIG. 16 is a schematic diagram of an array microphone with automatic focusing of beamformed lobes in response to the detection of sound activity and inhibition of the automatic focusing based on a remote far end audio signal, in accordance with some embodiments.
[0030] FIG. 17 is a schematic diagram of an array microphone with automatic placement of beamformed lobes of an array microphone in response to the detection of sound activity and inhibition of the automatic placement based on a remote far end audio signal, in accordance with some embodiments.
[0031] FIG. 18 is a flowchart illustrating operations for inhibiting automatic adjustment of beamformed lobes of an array microphone based on a remote far end audio signal, in accordance with some embodiments.
[0032] FIG. 19 is a schematic diagram of an array microphone with automatic placement of beamformed lobes of an array microphone in response to the detection of sound activity and activity detection of the sound activity, in accordance with some embodiments.
[0033] FIG. 20 is a flowchart illustrating operations for automatic placement of beamformed lobes including activity detection of sound activity, in accordance with some embodiments.
DETAILED DESCRIPTION
[0034] The description that follows describes, illustrates and exemplifies one or more particular embodiments of the invention in accordance with its principles. This description is not provided to limit the invention to the embodiments described herein, but rather to explain and teach the principles of the invention in such a way to enable one of ordinary skill in the art to understand these principles and, with that understanding, be able to apply them to practice not only the embodiments described herein, but also other embodiments that may come to mind in accordance with these principles. The scope of the invention is intended to cover all such embodiments that may fall within the scope of the appended claims, either literally or under the doctrine of equivalents.
[0035] It should be noted that in the description and drawings, like or substantially similar elements may be labeled with the same reference numerals. However, sometimes these elements may be labeled with differing numbers, such as, for example, in cases where such labeling facilitates a more clear description. Additionally, the drawings set forth herein are not necessarily drawn to scale, and in some instances proportions may have been exaggerated to more clearly depict certain features. Such labeling and drawing practices do not necessarily implicate an underlying substantive purpose. As stated above, the specification is intended to be taken as a whole and interpreted in accordance with the principles of the invention as taught herein and understood to one of ordinary skill in the art.
[0036] The array microphone systems and methods described herein can enable the automatic focusing and placement of beamformed lobes in response to the detection of sound activity, as well as allow the focus and placement of the beamformed lobes to be inhibited based on a remote far end audio signal. In embodiments, the array microphone may include a plurality of microphone elements, an audio activity localizer, a lobe auto-focuser, a database, and a beamformer. The audio activity localizer may detect the coordinates and confidence score of new sound activity, and the lobe auto-focuser may determine whether there is a previously placed lobe nearby the new sound activity. If there is such a lobe and the confidence score of the new sound activity is greater than a confidence score of the lobe, then the lobe auto-focuser may transmit the new coordinates to the beamformer so that the lobe is moved to the new coordinates. In these embodiments, the location of a lobe may be improved and automatically focused on the latest location of audio sources inside and near the lobe, while also preventing the lobe from overlapping, pointing in an undesirable direction (e.g., towards unwanted noise), and/or moving too suddenly.
[0037] In other embodiments, the array microphone may include a plurality of microphone elements, an audio activity localizer, a lobe auto-placer, a database, and a beamformer. The audio activity localizer may detect the coordinates of new sound activity, and the lobe auto-placer may determine whether there is a lobe nearby the new sound activity. If there is not such a lobe, then the lobe auto-placer may transmit the new coordinates to the beamformer so that an inactive lobe is placed at the new coordinates or so that an existing lobe is moved to the new coordinates. In these embodiments, the set of active lobes of the array microphone may point to the most recent sound activity in the coverage area of the array microphone.
[0038] In other embodiments, the audio activity localizer may detect the coordinates and confidence score of new sound activity, and if the confidence score of the new sound activity is greater than a threshold, the lobe auto-focuser may identify a lobe region that the new sound activity belongs to. In the identified lobe region, a previously placed lobe may be moved if the coordinates are within a look radius of the current coordinates of the lobe, i.e., a three-dimensional region of space around the current coordinates of the lobe where new sound activity can be considered. The movement of the lobe in the lobe region may be limited to within a move radius of the current coordinates of the lobe, i.e., a maximum distance in three-dimensional space that the lobe is allowed to move, and/or limited to outside a boundary cushion between lobe regions, i.e., how close a lobe can move to the boundaries between lobe regions. In these embodiments, the location of a lobe may be improved and automatically focused on the latest location of audio sources inside the lobe region associated with the lobe, while also preventing the lobes from overlapping, pointing in an undesirable direction (e.g., towards unwanted noise), and/or moving too suddenly.
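By way of non-limiting illustration, the following Python sketch shows one way the look-radius and move-radius constraints described above could be applied to a candidate lobe position; the function name, parameter names, and the use of Euclidean distance are assumptions for the example (the boundary-cushion check is omitted), not details taken from the disclosure.

import math

def constrain_lobe_move(current, candidate, look_radius, move_radius):
    """Apply look-radius and move-radius limits to a candidate lobe position.

    current and candidate are (x, y, z) tuples; the radii are in the same
    distance units. Returns the coordinates the lobe should move to, or
    None if the candidate falls outside the look radius and is ignored.
    """
    dist = math.dist(current, candidate)
    # New sound activity outside the look radius is not considered.
    if dist > look_radius:
        return None
    # Within the move radius, the lobe may move all the way to the candidate.
    if dist <= move_radius:
        return candidate
    # Otherwise, move toward the candidate but cap the step at move_radius.
    scale = move_radius / dist
    return tuple(c + scale * (t - c) for c, t in zip(current, candidate))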
[0039] In further embodiments, an activity detector may receive a remote audio signal, such as from a far end. The sound of the remote audio signal may be played in the local environment, such as on a loudspeaker within a conference room. If the activity of the remote audio signal exceeds a predetermined threshold, then the automatic adjustment (i.e., focus and/or placement) of beamformed lobes may be inhibited from occurring. For example, the activity of the remote audio signal could be measured by the energy level of the remote audio signal. In this example, the energy level of the remote audio signal may exceed the predetermined threshold when there is a certain level of speech or voice contained in the remote audio signal. In this situation, it may be desirable to prevent automatic adjustment of the beamformed lobes so that lobes are not directed to pick up the sound from the remote audio signal, e.g., that is being played in local environment. However, if the energy level of the remote audio signal does not exceed the predetermined threshold, then the automatic adjustment of beamformed lobes may be performed. The automatic adjustment of the beamformed lobes may include, for example, the automatic focus and/or placement of the lobes as described herein. In these embodiments, the location of a lobe may be improved and automatically focused and/or placed when the activity of the remote audio signal does not exceed a predetermined threshold, and inhibited or restricted from being automatically focused and/or placed when the activity of the remote audio signal exceeds the predetermined threshold.
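As a non-limiting illustration of the inhibition logic described above, the following Python sketch gates automatic lobe adjustment on the energy of the far end signal; the RMS energy measure and the threshold value are assumptions for the example rather than values specified in the disclosure.

import math

def far_end_energy(samples):
    """RMS energy of a block of far end audio samples (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def should_inhibit_auto_adjust(far_end_samples, threshold=0.02):
    """Return True when far end activity exceeds the threshold, in which case
    automatic focus and placement of lobes would be inhibited so that lobes
    are not steered toward loudspeaker playback of the remote signal.
    The threshold value is illustrative only.
    """
    return far_end_energy(far_end_samples) > threshold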
[0040] Through the use of the systems and methods herein, the quality of the coverage of audio sources in an environment may be improved by, for example, ensuring that beamformed lobes are optimally picking up the audio sources even if the audio sources have moved and changed locations from an initial position. The quality of the coverage of audio source in an environment may also be improved by, for example, reducing the likelihood that beamformed lobes are deployed (e.g., focused or placed) to pick up unwanted sounds like voice, speech, or other noise from the far end.
[0041] FIGs. 1 and 4 are schematic diagrams of array microphones 100, 400 that can detect sounds from audio sources at various frequencies. The array microphone 100, 400 may be utilized in a conference room or boardroom, for example, where the audio sources may be one or more human speakers. Other sounds may be present in the environment which may be undesirable, such as noise from ventilation, other persons, audio/visual equipment, electronic devices, etc. In a typical situation, the audio sources may be seated in chairs at a table, although other configurations and placements of the audio sources are contemplated and possible.
[0042] The array microphone 100, 400 may be placed on or in a table, lectern, desktop, wall, ceiling, etc. so that the sound from the audio sources can be detected and captured, such as speech spoken by human speakers. The array microphone 100, 400 may include any number of microphone elements 102a, b, ..., zz, 402a, b, ..., zz, for example, and be able to form multiple pickup patterns with lobes so that the sound from the audio sources can be detected and captured. Any appropriate number of microphone elements 102, 402 are possible and contemplated.
[0043] Each of the microphone elements 102, 402 in the array microphone 100, 400 may detect sound and convert the sound to an analog audio signal. Components in the array microphone 100, 400, such as analog to digital converters, processors, and/or other components, may process the analog audio signals and ultimately generate one or more digital audio output signals. The digital audio output signals may conform to the Dante standard for transmitting audio over Ethernet, in some embodiments, or may conform to another standard and/or transmission protocol. In embodiments, each of the microphone elements 102, 402 in the array microphone 100, 400 may detect sound and convert the sound to a digital audio signal.
[0044] One or more pickup patterns may be formed by a beamformer 170, 470 in the array microphone 100, 400 from the audio signals of the microphone elements 102, 402. The beamformer 170, 470 may generate digital output signals 190a, b, c, ..., z, 490a, b, c, ..., z corresponding to each of the pickup patterns. The pickup patterns may be composed of one or more lobes, e.g., main, side, and back lobes. In other embodiments, the microphone elements 102, 402 in the array microphone 100, 400 may output analog audio signals so that other components and devices (e.g., processors, mixers, recorders, amplifiers, etc.) external to the array microphone 100, 400 may process the analog audio signals.
[0045] The array microphone 100 of FIG. 1 that automatically focuses beamformed lobes in response to the detection of sound activity may include the microphone elements 102; an audio activity localizer 150 in wired or wireless communication with the microphone elements 102; a lobe auto-focuser 160 in wired or wireless communication with the audio activity localizer 150; a beamformer 170 in wired or wireless communication with the microphone elements 102 and the lobe auto-focuser 160; and a database 180 in wired or wireless communication with the lobe auto-focuser 160. These components are described in more detail below.
[0046] The array microphone 400 of FIG. 4 that automatically places beamformed lobes in response to the detection of sound activity may include the microphone elements 402; an audio activity localizer 450 in wired or wireless communication with the microphone elements 402; a lobe auto-placer 460 in wired or wireless communication with the audio activity localizer 450; a beamformer 470 in wired or wireless communication with the microphone elements 402 and the lobe auto-placer 460; and a database 480 in wired or wireless communication with the lobe auto-placer 460. These components are described in more detail below.
[0047] In embodiments, the array microphone 100, 400 may include other components, such as an acoustic echo canceller or an automixer, that work with the audio activity localizer 150, 450 and/or the beamformer 170, 470. For example, when a lobe is moved to new coordinates in response to detecting new sound activity, as described herein, information from the movement of the lobe may be utilized by an acoustic echo canceller to minimize echo during the movement and/or by an automixer to improve its decision making capability. As another example, the movement of a lobe may be influenced by the decision of an automixer, such as allowing a lobe to be moved that the automixer has identified as having pertinent voice activity. The beamformer 170, 470 may be any suitable beamformer, such as a delay and sum beamformer or a minimum variance distortionless response (MVDR) beamformer.
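For illustration only, the following Python sketch shows a basic far-field delay-and-sum beamformer, one of the beamformer types named above; the far-field plane-wave assumption, whole-sample steering delays, and the fixed speed of sound are simplifications for the example and are not drawn from the disclosure.

import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second, approximate room-temperature value

def delay_and_sum(element_signals, element_positions, look_direction, fs):
    """Minimal far-field delay-and-sum beamformer sketch.

    element_signals: (num_elements, num_samples) array of time-domain audio.
    element_positions: (num_elements, 3) element coordinates in meters.
    look_direction: length-3 vector pointing from the array toward the lobe.
    fs: sample rate in Hz.
    """
    signals = np.asarray(element_signals, dtype=float)
    positions = np.asarray(element_positions, dtype=float)
    look = np.asarray(look_direction, dtype=float)
    look = look / np.linalg.norm(look)
    # Relative arrival-time advances for a plane wave from the look direction.
    delays = positions @ look / SPEED_OF_SOUND
    shifts = np.round((delays - delays.min()) * fs).astype(int)
    num_samples = signals.shape[1]
    out = np.zeros(num_samples)
    # Align each element by its steering delay (whole samples) and average.
    for sig, shift in zip(signals, shifts):
        out[shift:] += sig[: num_samples - shift]
    return out / len(signals)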
[0048] The various components included in the array microphone 100, 400 may be implemented using software executable by one or more servers or computers, such as a computing device with a processor and memory, graphics processing units (GPUs), and/or by hardware (e.g., discrete logic circuits, application specific integrated circuits (ASIC), programmable gate arrays (PGA), field programmable gate arrays (FPGA), etc.).
[0049] In some embodiments, the microphone elements 102, 402 may be arranged in concentric rings and/or harmonically nested. The microphone elements 102, 402 may be arranged to be generally symmetric, in some embodiments. In other embodiments, the microphone elements 102, 402 may be arranged asymmetrically or in another arrangement. In further embodiments, the microphone elements 102, 402 may be arranged on a substrate, placed in a frame, or individually suspended, for example. An embodiment of an array microphone is described in commonly assigned U.S. Patent No. 9,565,493, which is hereby incorporated by reference in its entirety herein. In embodiments, the microphone elements 102, 402 may be unidirectional microphones that are primarily sensitive in one direction. In other embodiments, the microphone elements 102, 402 may have other directionalities or polar patterns, such as cardioid, subcardioid, or omnidirectional, as desired. The microphone elements 102, 402 may be any suitable type of transducer that can detect the sound from an audio source and convert the sound to an electrical audio signal. In an embodiment, the microphone elements 102, 402 may be micro-electrical mechanical system (MEMS) microphones. In other embodiments, the microphone elements 102, 402 may be condenser microphones, balanced armature microphones, electret microphones, dynamic microphones, and/or other types of microphones. In embodiments, the microphone elements 102, 402 may be arrayed in one dimension or two dimensions. The array microphone 100, 400 may be placed or mounted on a table, a wall, a ceiling, etc., and may be next to, under, or above a video monitor, for example.
[0050] An embodiment of a process 200 for automatic focusing of previously placed beamformed lobes of the array microphone 100 is shown in FIG. 2. The process 200 may be performed by the lobe auto-focuser 160 so that the array microphone 100 can output one or more audio signals 180 from the array microphone 100, where the audio signals 180 may include sound picked up by the beamformed lobes that are focused on new sound activity of an audio source. One or more processors and/or other processing components (e.g., analog to digital converters, encryption chips, etc.) within or external to the array microphone 100 may perform any, some, or all of the steps of the process 200. One or more other types of components (e.g., memory, input and/or output devices, transmitters, receivers, buffers, drivers, discrete components, etc.) may also be utilized in conjunction with the processors and/or other processing components to perform any, some, or all of the steps of the process 200.
[0051] At step 202, the coordinates and a confidence score corresponding to new sound activity may be received at the lobe auto-focuser 160 from the audio activity localizer 150. The audio activity localizer 150 may continuously scan the environment of the array microphone 100 to find new sound activity. The new sound activity found by the audio activity localizer 150 may include suitable audio sources, e.g., human speakers, that are not stationary. The coordinates of the new sound activity may be a particular three dimensional coordinate relative to the location of the array microphone 100, such as in Cartesian coordinates (i.e., x, y, z), or in spherical coordinates (i.e., radial distance/magnitude r, elevation angle θ (theta), azimuthal angle φ (phi)). The confidence score of the new sound activity may denote the certainty of the coordinates and/or the quality of the sound activity, for example. In embodiments, other suitable metrics related to the new sound activity may be received and utilized at step 202. It should be noted that Cartesian coordinates may be readily converted to spherical coordinates, and vice versa, as needed.
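As the paragraph above notes that Cartesian and spherical coordinates are interchangeable, the following Python sketch shows one common conversion; the convention used (azimuth measured in the x-y plane, elevation measured up from that plane) is an assumption for the example, since the disclosure does not fix one.

import math

def cartesian_to_spherical(x, y, z):
    """Convert (x, y, z) to (r, elevation, azimuth) under one common convention."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    elevation = math.asin(z / r) if r > 0 else 0.0
    return r, elevation, azimuth

def spherical_to_cartesian(r, elevation, azimuth):
    """Inverse of cartesian_to_spherical under the same convention."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z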
[0052] The lobe auto-focuser 160 may determine whether the coordinates of the new sound activity are nearby (i.e., in the vicinity of) an existing lobe, at step 204. Whether the new sound activity is nearby an existing lobe may be based on the difference in azimuth and/or elevation angles of (1) the coordinates of the new sound activity and (2) the coordinates of the existing lobe, relative to a predetermined threshold. The distance of the new sound activity away from the microphone 100 may also influence the determination of whether the coordinates of the new sound activity are nearby an existing lobe. The lobe auto-focuser 160 may retrieve the coordinates of the existing lobe from the database 180 for use in step 204, in some embodiments. An embodiment of the determination of whether the coordinates of the new sound activity are nearby an existing lobe is described in more detail below with respect to FIG. 6.
[0053] If the lobe auto-focuser 160 determines that the coordinates of the new sound activity are not nearby an existing lobe at step 204, then the process 200 may end at step 210 and the locations of the lobes of the array microphone 100 are not updated. In this scenario, the coordinates of the new sound activity may be considered to be outside the coverage area of the array microphone 100 and the new sound activity may therefore be ignored. However, if at step 204 the lobe auto-focuser 160 determines that the coordinates of the new sound activity are nearby an existing lobe, then the process 200 continues to step 206. In this scenario, the coordinates of the new sound activity may be considered to be an improved (i.e., more focused) location of the existing lobe.
[0054] At step 206, the lobe auto-focuser 160 may compare the confidence score of the new sound activity to the confidence score of the existing lobe. The lobe auto-focuser 160 may retrieve the confidence score of the existing lobe from the database 180, in some embodiments. If the lobe auto-focuser 160 determines at step 206 that the confidence score of the new sound activity is less than (i.e., worse than) the confidence score of the existing lobe, then the process 200 may end at step 210 and the locations of the lobes of the array microphone 100 are not updated. However, if the lobe auto-focuser 160 determines at step 206 that the confidence score of the new sound activity is greater than or equal to (i.e., better than or more favorable than) the confidence score of the existing lobe, then the process 200 may continue to step 208. At step 208, the lobe auto-focuser 160 may transmit the coordinates of the new sound activity to the beamformer 170 so that the beamformer 170 can update the location of the existing lobe to the new coordinates. In addition, the lobe auto-focuser 160 may store the new coordinates of the lobe in the database 180.
[0055] In some embodiments, at step 208, the lobe auto-focuser 160 may limit the movement of an existing lobe to prevent and/or minimize sudden changes in the location of the lobe. For example, the lobe auto-focuser 160 may not move a particular lobe to new coordinates if that lobe has been recently moved within a certain recent time period. As another example, the lobe auto-focuser 160 may not move a particular lobe to new coordinates if those new coordinates are too close to the lobe’s current coordinates, too close to another lobe, overlapping another lobe, and/or considered too far from the existing position of the lobe.
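A minimal Python sketch of the auto-focus decision in steps 204-208 follows; the Lobe record and the beamformer and database interfaces (move_lobe, store) are hypothetical stand-ins for the beamformer 170 and database 180, and is_nearby stands in for the vicinity test of FIG. 6.

from dataclasses import dataclass

@dataclass
class Lobe:
    coords: tuple       # current (x, y, z) coordinates of the lobe
    confidence: float   # confidence score stored for the lobe

def auto_focus(lobes, new_coords, new_confidence, is_nearby, beamformer, database):
    """Sketch of steps 204-208 of process 200 (interface names are illustrative)."""
    for lobe in lobes:
        if not is_nearby(lobe, new_coords):
            continue
        # Step 206: only refocus if the new activity is at least as confident.
        if new_confidence >= lobe.confidence:
            # Step 208: move the lobe and persist its new coordinates.
            beamformer.move_lobe(lobe, new_coords)
            lobe.coords = new_coords
            database.store(lobe)
        return lobe
    # Step 210: no nearby lobe, so the new activity is outside the coverage area.
    return None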
[0056] The process 200 may be continuously performed by the array microphone 100 as the audio activity localizer 150 finds new sound activity and provides the coordinates and confidence score of the new sound activity to the lobe auto-focuser 160. For example, the process 200 may be performed as audio sources, e.g., human speakers, are moving around a conference room so that one or more lobes can be focused on the audio sources to optimally pick up their sound.
[0057] An embodiment of a process 300 for automatic focusing of previously placed beamformed lobes of the array microphone 100 using a cost functional is shown in FIG. 3. The process 300 may be performed by the lobe auto-focuser 160 so that the array microphone 100 can output one or more audio signals 180, where the audio signals 180 may include sound picked up by the beamformed lobes that are focused on new sound activity of an audio source. One or more processors and/or other processing components (e.g., analog to digital converters, encryption chips, etc.) within or external to the microphone array 100 may perform any, some, or all of the steps of the process 300. One or more other types of components (e.g., memory, input and/or output devices, transmitters, receivers, buffers, drivers, discrete components, etc.) may also be utilized in conjunction with the processors and/or other processing components to perform any, some, or all of the steps of the process 300.
[0058] Steps 302, 304, and 306 of the process 300 for the lobe auto-focuser 160 may be substantially the same as steps 202, 204, and 206 of the process 200 of FIG. 2 described above. In particular, the coordinates and a confidence score corresponding to new sound activity may be received at the lobe auto-focuser 160 from the audio activity localizer 150. The lobe auto-focuser 160 may determine whether the coordinates of the new sound activity are nearby (i.e., in the vicinity of) an existing lobe. If the coordinates of the new sound activity are not nearby an existing lobe (or if the confidence score of the new sound activity is less than the confidence score of the existing lobe), then the process 300 may proceed to step 324 and the locations of the lobes of the array microphone 100 are not updated. However, if at step 306, the lobe auto-focuser 160 determines that the confidence score of the new sound activity is more than (i.e., better than or more favorable than) the confidence score of the existing lobe, then the process 300 may continue to step 308. In this scenario, the coordinates of the new sound activity may be considered to be a candidate location to move the existing lobe to, and a cost functional of the existing lobe may be evaluated and maximized, as described below.
[0059] A cost functional for a lobe may take into account spatial aspects of the lobe and the audio quality of the new sound activity. As used herein, a cost functional and a cost function have the same meaning. In particular, the cost functional for a lobe i may be defined in some embodiments as a function of the coordinates of the new sound activity (LCi), a signal-to-noise ratio for the lobe (SNRi), a gain value for the lobe (Gain), voice activity detection information related to the new sound activity (VADi), and the distance from the coordinates of the existing lobe (distance(LOi)). In other embodiments, the cost functional for a lobe may be a function of other information. The cost functional for a lobe i can be written as Ji(x, y, z) with Cartesian coordinates or Ji(azimuth, elevation, magnitude) with spherical coordinates, for example. Using the cost functional with Cartesian coordinates as exemplary, the cost functional Ji(x, y, z) = f(LCi, distance(LOi), Gain, SNRi, VADi). Accordingly, the lobe may be moved by evaluating and maximizing the cost functional Ji over a spatial grid of coordinates, such that the movement of the lobe is in the direction of the gradient (i.e., steepest ascent) of the cost functional. The maximum of the cost functional may be the same as the coordinates of the new sound activity received by the lobe auto-focuser 160 at step 302 (i.e., the candidate location), in some situations. In other situations, the maximum of the cost functional may move the lobe to a different position than the coordinates of the new sound activity, when taking into account the other parameters described above.
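Since the disclosure only states which quantities the cost functional depends on, the following Python sketch uses an illustrative weighted combination; the particular form and the weights are assumptions for the example and are not the patent's formula.

import math

def cost_functional(candidate, lobe_origin, gain_db, snr_db, vad_score,
                    w_dist=1.0, w_gain=0.1, w_snr=0.1, w_vad=1.0):
    """Illustrative cost functional Ji for a lobe; higher values are better.

    candidate and lobe_origin are (x, y, z) tuples; gain_db, snr_db, and
    vad_score correspond to the gain, signal-to-noise ratio, and voice
    activity information named above. The weighted-sum form and the default
    weights are assumptions for this sketch.
    """
    distance = math.dist(candidate, lobe_origin)
    return (w_snr * snr_db + w_gain * gain_db + w_vad * vad_score
            - w_dist * distance)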
[0060] At step 308, the cost functional for the lobe may be evaluated by the lobe auto-focuser 160 at the coordinates of the new sound activity. The evaluated cost functional may be stored by the lobe auto-focuser 160 in the database 180, in some embodiments. At step 310, the lobe auto-focuser 160 may move the lobe by each of an amount Δx, Δy, Δz in the x, y, and z directions, respectively, from the coordinates of the new sound activity. After each movement, the cost functional may be evaluated by the lobe auto-focuser 160 at each of these locations. For example, the lobe may be moved to a location (x + Δx, y, z) and the cost functional may be evaluated at that location; then moved to a location (x, y + Δy, z) and the cost functional may be evaluated at that location; and then moved to a location (x, y, z + Δz) and the cost functional may be evaluated at that location. The lobe may be moved by the amounts Δx, Δy, Δz in any order at step 310. Each of the evaluated cost functionals at these locations may be stored by the lobe auto-focuser 160 in the database 180, in some embodiments. The evaluations of the cost functional are performed by the lobe auto-focuser 160 at step 310 in order to compute an estimate of partial derivatives and the gradient of the cost functional, as described below. It should be noted that while the description above is with relation to Cartesian coordinates, a similar operation may be performed with spherical coordinates (e.g., Δazimuth, Δelevation, Δmagnitude).
[0061] At step 312, the gradient of the cost functional may be calculated by the lobe auto-focuser 160 based on the set of estimates of the partial derivatives. The gradient ∇Ji may be calculated as follows:
∇Ji = (gxi, gyi, gzi) ≈ ( (Ji(xi + Δx, yi, zi) − Ji(xi, yi, zi)) / Δx, (Ji(xi, yi + Δy, zi) − Ji(xi, yi, zi)) / Δy, (Ji(xi, yi, zi + Δz) − Ji(xi, yi, zi)) / Δz )
[0062] At step 314, the lobe auto-focuser 160 may move the lobe by a predetermined step size μ in the direction of the gradient calculated at step 312. In particular, the lobe may be moved to a new location: (xi + μ·gxi, yi + μ·gyi, zi + μ·gzi). The cost functional of the lobe at this new location may also be evaluated by the lobe auto-focuser 160 at step 314. This cost functional may be stored by the lobe auto-focuser 160 in the database 180, in some embodiments.
[0063] At step 316, the lobe auto-focuser 160 may compare the cost functional of the lobe at the new location (evaluated at step 314) with the cost functional of the lobe at the coordinates of the new sound activity (evaluated at step 308). If the cost functional of the lobe at the new location is less than the cost functional of the lobe at the coordinates of the new sound activity at step 316, then the step size μ at step 314 may be considered too large, and the process 300 may continue to step 322. At step 322, the step size may be adjusted and the process may return to step 314.
[0064] However, if the cost functional of the lobe at the new location is not less than the cost functional of the lobe at the coordinates of the new sound activity at step 316, then the process 300 may continue to step 318. At step 318, the lobe auto-focuser 160 may determine whether the difference between (1) the cost functional of the lobe at the new location (evaluated at step 314) and (2) the cost functional of the lobe at the coordinates of the new sound activity (evaluated at step 308) is close, i.e., whether the absolute value of the difference is within a small quantity ε. If the condition is not satisfied at step 318, then it may be considered that a local maximum of the cost functional has not been reached. The process 300 may proceed to step 324 and the locations of the lobes of the array microphone 100 are not updated.
[0065] However, if the condition is satisfied at step 318, then it may be considered that a local maximum of the cost functional has been reached and that the lobe has been auto focused, and the process 300 proceeds to step 320. At step 320, the lobe auto-focuser 160 may transmit the coordinates of the new sound activity to the beamformer 170 so that the beamformer 170 can update the location of the lobe to the new coordinates. In addition, the lobe auto-focuser 160 may store the new coordinates of the lobe in the database 180.
[0066] In some embodiments, annealing/dithering movements of the lobe may be applied by the lobe auto-focuser 160 at step 320. The annealing/dithering movements may be applied to nudge the lobe out of a local maximum of the cost functional to attempt to find a better local maximum (and therefore a better location for the lobe). The annealing/dithering locations may be defined by (xi + rxi, yi + ryi, zi + rzi), where (rxi, ryi, rzi) are small random values.
[0067] The process 300 may be continuously performed by the array microphone 100 as the audio activity localizer 150 finds new sound activity and provides the coordinates and confidence score of the new sound activity to the lobe auto-focuser 160. For example, the process 300 may be performed as audio sources, e.g., human speakers, are moving around a conference room so that one or more lobes can be focused on the audio sources to optimally pick up their sound.
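The following Python sketch mirrors the finite-difference gradient ascent of steps 308-322 in a compact loop; the loop structure, the halving of the step size, and the default values of delta, step, eps, and max_iters are assumptions for the example (the flowchart itself returns to earlier steps or ends rather than looping).

def gradient_ascent_focus(J, start, delta=0.05, step=0.1, eps=1e-4, max_iters=50):
    """Finite-difference gradient ascent over a cost functional J(x, y, z).

    Returns the coordinates of the local maximum found, starting from the
    coordinates of the new sound activity. All numeric defaults are illustrative.
    """
    x = list(start)
    j_here = J(*x)
    for _ in range(max_iters):
        # Steps 310-312: estimate the gradient with forward differences.
        grad = []
        for k in range(3):
            probe = list(x)
            probe[k] += delta
            grad.append((J(*probe) - j_here) / delta)
        # Step 314: move by the step size in the direction of the gradient.
        candidate = [xi + step * gi for xi, gi in zip(x, grad)]
        j_new = J(*candidate)
        if j_new < j_here:
            # Steps 316/322: the step was too large; shrink it and retry.
            step *= 0.5
            continue
        if abs(j_new - j_here) < eps:
            # Step 318: close enough, a local maximum has been reached.
            return tuple(candidate)
        x, j_here = candidate, j_new
    return tuple(x)

As a simple usage check, gradient_ascent_focus(lambda x, y, z: -(x**2 + y**2 + z**2), (1.0, 1.0, 1.0)) converges toward the origin, the maximizer of that toy functional.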
[0068] In embodiments, the cost functional may be re-evaluated and updated, e.g., steps 308-318 and 322, and the coordinates of the lobe may be adjusted without needing to receive a set of coordinates of new sound activity, e.g., at step 302. For example, an algorithm may detect which lobe of the array microphone 100 has the most sound activity without providing a set of coordinates of new sound activity. Based on the sound activity information from such an algorithm, the cost functional may be re-evaluated and updated.
[0069] An embodiment of a process 500 for automatic placement or deployment of beamformed lobes of the array microphone 400 is shown in FIG. 5. The process 500 may be performed by the lobe auto-placer 460 so that the array microphone 400 can output one or more audio signals 480 from the array microphone 400 shown in FIG. 4, where the audio signals 480 may include sound picked up by the placed beamformed lobes that are from new sound activity of an audio source. One or more processors and/or other processing components (e.g., analog to digital converters, encryption chips, etc.) within or external to the microphone array 400 may perform any, some, or all of the steps of the process 500. One or more other types of components (e.g., memory, input and/or output devices, transmitters, receivers, buffers, drivers, discrete components, etc.) may also be utilized in conjunction with the processors and/or other processing components to perform any, some, or all of the steps of the process 500.
[0070] At step 502, the coordinates corresponding to new sound activity may be received at the lobe auto-placer 460 from the audio activity localizer 450. The audio activity localizer 450 may continuously scan the environment of the array microphone 400 to find new sound activity. The new sound activity found by the audio activity localizer 450 may include suitable audio sources, e.g., human speakers, that are not stationary. The coordinates of the new sound activity may be a particular three dimensional coordinate relative to the location of the array microphone 400, such as in Cartesian coordinates (i.e., x, y, z), or in spherical coordinates (i.e., radial distance/magnitude r, elevation angle θ (theta), azimuthal angle φ (phi)).
[0071] In embodiments, the placement of beamformed lobes may occur based on whether an amount of activity of the new sound activity exceeds a predetermined threshold. FIG. 19 is a schematic diagram of an array microphone 1900 that can detect sounds from audio sources at various frequencies, and automatically place beamformed lobes in response to the detection of sound activity while taking into account the amount of activity of the new sound activity. In embodiments, the array microphone 1900 may include some or all of the same components as the array microphone 400 described above, e.g., the microphones 402, the audio activity localizer 450, the lobe auto-placer 460, the beamformer 470, and/or the database 480. The array microphone 1900 may also include an activity detector 1904 in communication with the lobe auto-placer 460 and the beamformer 470.
[0072] The activity detector 1904 may detect an amount of activity in the new sound activity. In some embodiments, the amount of activity may be measured as the energy level of the new sound activity. In other embodiments, the amount of activity may be measured using methods in the time domain and/or frequency domain, such as by applying machine learning (e.g., using cepstrum coefficients), measuring signal non-stationarity in one or more frequency bands, and/or searching for features of desirable sound or speech.
[0073] In embodiments, the activity detector 1904 may be a voice activity detector (VAD) which can determine whether there is voice and/or noise present in the remote audio signal. A VAD may be implemented, for example, by analyzing the spectral variance of the remote audio signal, using linear predictive coding, applying machine learning or deep learning techniques to detect voice and/or noise, and/or using well-known techniques such as the ITU G.729 VAD, ETSI standards for VAD calculation included in the GSM specification, or long term pitch prediction.
[0074] Based on the detected amount of activity, automatic lobe placement may be performed or not performed. The automatic lobe placement may be performed when the detected activity of the new sound activity satisfies predetermined criteria. Conversely, the automatic lobe placement may not be performed when the detected activity of the new sound activity does not satisfy the predetermined criteria. For example, satisfying the predetermined criteria may indicate that the new sound activity includes voice, speech, or other sound that should preferably be picked up by a lobe. As another example, not satisfying the predetermined criteria may indicate that the new sound activity does not include voice, speech, or other sound that should preferably be picked up by a lobe. By inhibiting automatic lobe placement in this latter scenario, no lobe is placed, so that the sound from the new sound activity is not picked up.
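By way of illustration, the following Python sketch is one crude stand-in for the predetermined criteria: it requires both a minimum block energy and that most of the energy fall in a nominal voice band. The thresholds, band edges, and the FFT-based measure are assumptions for the example; the disclosure equally allows energy-only, machine-learning, or VAD-based measures.

import numpy as np

def activity_satisfies_criteria(block, fs, energy_threshold=1e-3,
                                voice_band=(100.0, 4000.0),
                                band_ratio_threshold=0.5):
    """Decide whether new sound activity should trigger automatic placement.

    block is a 1-D array of audio samples (floats in [-1, 1]) and fs is the
    sample rate in Hz. All numeric defaults are illustrative only.
    """
    block = np.asarray(block, dtype=float)
    energy = float(np.mean(block ** 2))
    if energy < energy_threshold:
        return False
    # Fraction of spectral energy inside a nominal voice band.
    spectrum = np.abs(np.fft.rfft(block)) ** 2
    freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
    in_band = (freqs >= voice_band[0]) & (freqs <= voice_band[1])
    band_ratio = spectrum[in_band].sum() / max(spectrum.sum(), 1e-12)
    return band_ratio >= band_ratio_threshold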
[0075] As seen in the process 2000 of FIG. 20, at step 2003 following step 502, it can be determined whether the amount of activity of the new sound activity satisfies the predetermined criteria. The new sound activity may be received by the activity detector 1904 from the beamformer 470, for example. The detected amount of activity may correspond to the amount of speech, voice, noise, etc. in the new sound activity. In embodiments, the amount of activity may be measured as the energy level of the new sound activity, or as the amount of voice in the new sound activity. In embodiments, the detected amount of activity may specifically indicate the amount of voice or speech in the new sound activity. In other embodiments, the detected amount of activity may be a voice-to-noise ratio, or indicate an amount of noise in the new sound activity.
[0076] If the amount of activity does not satisfy the predetermined criteria at step 2003, then the process 2000 may end at step 522 and the locations of the lobes of the array microphone 1900 are not updated. The detected amount of activity of the new sound activity may not satisfy the predetermined criteria when there is a relatively low amount of speech or voice in the new sound activity, and/or the voice-to-noise ratio is relatively low. Similarly, the detected amount of activity of the new sound activity may not satisfy the predetermined criteria when there is a relatively high amount of noise in the new sound activity. Accordingly, not automatically placing a lobe to detect the new sound activity may help to ensure that undesirable sound is not picked up.
[0077] If the amount of activity satisfies the predetermined criteria at step 2003, then the process 2000 may continue to step 504 as described below. The detected amount of activity of the new sound activity may satisfy the predetermined criteria when there is a relatively high amount of speech or voice in the new sound activity, and/or the voice-to-noise ratio is relatively high. Similarly, the detected amount of activity of the new sound activity may satisfy the predetermined criteria when there is a relatively low amount of noise in the new sound activity. Accordingly, automatically placing a lobe to detect the new sound activity may be desirable in this scenario.
[0078] Returning to the process 500, at step 504, the lobe auto-placer 460 may update a timestamp, such as to the current value of a clock. The timestamp may be stored in the database 480, in some embodiments. In embodiments, the timestamp and/or the clock may be real time values, e.g., hour, minute, second, etc. In other embodiments, the timestamp and/or the clock may be based on increasing integer values that may enable tracking of the time ordering of events.
[0079] The lobe auto-placer 460 may determine at step 506 whether the coordinates of the new sound activity are nearby (i.e., in the vicinity of) an existing active lobe. Whether the new sound activity is nearby an existing lobe may be based on the difference in azimuth and/or elevation angles of (1) the coordinates of the new sound activity and (2) the coordinates of the existing lobe, relative to a predetermined threshold. The distance of the new sound activity away from the microphone 400 may also influence the determination of whether the coordinates of the new sound activity are nearby an existing lobe. The lobe auto-placer 460 may retrieve the coordinates of the existing lobe from the database 480 for use in step 506, in some embodiments. An embodiment of the determination of whether the coordinates of the new sound activity are nearby an existing lobe is described in more detail below with respect to FIG. 6.
[0080] If at step 506 the lobe auto-placer 460 determines that the coordinates of the new sound activity are nearby an existing lobe, then the process 500 continues to step 520. At step 520, the timestamp of the existing lobe is updated to the current timestamp from step 504. In this scenario, the existing lobe is considered able to cover (i.e., pick up) the new sound activity. The process 500 may end at step 522 and the locations of the lobes of the array microphone 400 are not updated.
[0081] However, if at step 506 the lobe auto-placer 460 determines that the coordinates of the new sound activity are not nearby an existing lobe, then the process 500 continues to step 508. In this scenario, the coordinates of the new sound activity may be considered to be outside the current coverage area of the array microphone 400, and therefore the new sound activity needs to be covered. At step 508, the lobe auto-placer 460 may determine whether an inactive lobe of the array microphone 400 is available. In some embodiments, a lobe may be considered inactive if the lobe is not pointed to a particular set of coordinates, or if the lobe is not deployed (i.e., does not exist). In other embodiments, a deployed lobe may be considered inactive based on whether a metric of the deployed lobe (e.g., time, age, etc.) satisfies certain criteria. If the lobe auto-placer 460 determines that there is an inactive lobe available at step 508, then the inactive lobe is selected at step 510 and the timestamp of the newly selected lobe is updated to the current timestamp (from step 504) at step 514.
[0082] However, if the lobe auto-placer 460 determines that there is not an inactive lobe available at step 508, then the process 500 may continue to step 512. At step 512, the lobe auto-placer 460 may select a currently active lobe to recycle to be pointed at the coordinates of the new sound activity. In some embodiments, the lobe selected for recycling may be an active lobe with the lowest confidence score and/or the oldest timestamp. The confidence score for a lobe may denote the certainty of the coordinates and/or the quality of the sound activity, for example. In embodiments, other suitable metrics related to the lobe may be utilized. The oldest timestamp for an active lobe may indicate that the lobe has not recently detected sound activity, and possibly that the audio source is no longer present in the lobe. The lobe selected for recycling at step 512 may have its timestamp updated to the current timestamp (from step 504) at step 514.
[0083] At step 516, a new confidence score may be assigned to the lobe, whether the lobe is a selected inactive lobe from step 510 or a selected recycled lobe from step 512. At step 518, the lobe auto-placer 460 may transmit the coordinates of the new sound activity to the beamformer 470 so that the beamformer 470 can update the location of the lobe to the new coordinates. In addition, the lobe auto-placer 460 may store the new coordinates of the lobe in the database 480.
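A minimal Python sketch of the lobe selection in steps 506-518 follows; the PlacedLobe record, the initial confidence value, and the beamformer/database interfaces are hypothetical stand-ins, and is_nearby again represents the vicinity test of FIG. 6.

from dataclasses import dataclass

@dataclass
class PlacedLobe:
    active: bool
    coords: tuple
    confidence: float
    timestamp: float

def place_lobe(lobes, new_coords, now, is_nearby, beamformer, database):
    """Sketch of steps 506-518 of process 500 (interface names are illustrative)."""
    # Steps 506/520: an existing active lobe already covers the new activity.
    for lobe in lobes:
        if lobe.active and is_nearby(lobe, new_coords):
            lobe.timestamp = now
            return lobe
    # Steps 508/510: prefer an available inactive lobe.
    inactive = [lobe for lobe in lobes if not lobe.active]
    if inactive:
        lobe = inactive[0]
        lobe.active = True
    else:
        # Step 512: otherwise recycle the active lobe with the oldest timestamp
        # (lowest confidence could be used instead or in combination).
        lobe = min(lobes, key=lambda l: l.timestamp)
    # Steps 514-518: refresh bookkeeping and point the lobe at the activity.
    lobe.timestamp = now
    lobe.confidence = 1.0  # illustrative new confidence score
    lobe.coords = new_coords
    beamformer.move_lobe(lobe, new_coords)
    database.store(lobe)
    return lobe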
[0084] The process 500 may be continuously performed by the array microphone 400 as the audio activity localizer 450 finds new sound activity and provides the coordinates of the new sound activity to the lobe auto-placer 460. For example, the process 500 may be performed as audio sources, e.g., human speakers, are moving around a conference room so that one or more lobes can be placed to optimally pick up the sound of the audio sources.
[0085] An embodiment of a process 600 for finding previously placed lobes near sound activity is shown in FIG. 6. The process 600 may be utilized by the lobe auto-focuser 160 at step 204 of the process 200, at step 304 of the process 300, and/or at step 806 of the process 800, and/or by the lobe auto-placer 460 at step 506 of the process 500. In particular, the process 600 may determine whether the coordinates of the new sound activity are nearby an existing lobe of an array microphone 100, 400. Whether the new sound activity is nearby an existing lobe may be based on the difference in azimuth and/or elevation angles of (1) the coordinates of the new sound activity and (2) the coordinates of the existing lobe, relative to a predetermined threshold. The distance of the new sound activity away from the array microphone 100, 400 may also influence the determination of whether the coordinates of the new sound activity are nearby an existing lobe.
[0086] At step 602, the coordinates corresponding to new sound activity may be received at the lobe auto-focuser 160 or the lobe auto-placer 460 from the audio activity localizer 150, 450, respectively. The coordinates of the new sound activity may be a particular three dimensional coordinate relative to the location of the array microphone 100, 400, such as in Cartesian coordinates (i.e., x, y, z), or in spherical coordinates (i.e., radial distance/magnitude r, elevation angle θ (theta), azimuthal angle φ (phi)). It should be noted that Cartesian coordinates may be readily converted to spherical coordinates, and vice versa, as needed.
[0087] At step 604, the lobe auto-focuser 160 or the lobe auto-placer 460 may determine whether the new sound activity is relatively far away from the array microphone 100, 400 by evaluating whether the distance of the new sound activity is greater than a determined threshold. The distance of the new sound activity may be determined by the magnitude of the vector representing the coordinates of the new sound activity. If the new sound activity is determined to be relatively far away from the array microphone 100, 400 at step 604 (i.e., greater than the threshold), then at step 606 a lower azimuth threshold may be set for later usage in the process 600. If the new sound activity is determined to not be relatively far away from the array microphone 100, 400 at step 604 (i.e., less than or equal to the threshold), then at step 608 a higher azimuth threshold may be set for later usage in the process 600.
[0088] Following the setting of the azimuth threshold at step 606 or step 608, the process 600 may continue to step 610. At step 610, the lobe auto-focuser 160 or the lobe auto-placer 460 may determine whether there are any lobes to check for their vicinity to the new sound activity. If there are no lobes of the array microphone 100, 400 to check at step 610, then the process 600 may end at step 616 and denote that there are no lobes in the vicinity of the array microphone 100, 400.
[0089] However, if there are lobes of the array microphone 100, 400 to check at step 610, then the process 600 may continue to step 612 and examine one of the existing lobes. At step 612, the lobe auto-focuser 160 or the lobe auto-placer 460 may determine whether the absolute value of the difference between (1) the azimuth of the existing lobe and (2) the azimuth of the new sound activity is greater than the azimuth threshold (that was set at step 606 or step 608). If the condition is satisfied at step 612, then it may be considered that the lobe under examination is not within the vicinity of the new sound activity. The process 600 may return to step 610 to determine whether there are further lobes to examine.
[0090] However, if the condition is not satisfied at step 612, then the process 600 may proceed to step 614. At step 614, the lobe auto-focuser 160 or the lobe auto-placer 460 may determine whether the absolute value of the difference between (1) the elevation of the existing lobe and (2) the elevation of the new sound activity is greater than a predetermined elevation threshold. If the condition is satisfied at step 614, then it may be considered that the lobe under examination is not within the vicinity of the new sound activity. The process 600 may return to step 610 to determine whether there are further lobes to examine. However, if the condition is not satisfied at step 614, then the process 600 may end at step 618 and denote that the lobe under examination is in the vicinity of the new sound activity.
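By way of illustration only, the vicinity test of FIG. 6 may be sketched as follows, assuming spherical coordinates (distance, azimuth, elevation) for both the new sound activity and each lobe; the threshold values, units, and function names are hypothetical and chosen solely for illustration.

def find_lobe_near_activity(activity, lobes, distance_threshold=3.0,
                            far_azimuth_threshold=10.0, near_azimuth_threshold=20.0,
                            elevation_threshold=15.0):
    """activity and each entry in lobes are dicts with 'r' (distance in meters),
    'azimuth', and 'elevation' (angles in degrees).  Returns the first lobe found
    in the vicinity of the new sound activity, or None if there is none."""
    # Steps 604-608: far-away sound activity uses a lower (tighter) azimuth threshold.
    if activity['r'] > distance_threshold:
        azimuth_threshold = far_azimuth_threshold
    else:
        azimuth_threshold = near_azimuth_threshold
    # Steps 610-618: check each existing lobe against the azimuth and elevation
    # thresholds (azimuth wrap-around is ignored for simplicity).
    for lobe in lobes:
        if abs(lobe['azimuth'] - activity['azimuth']) > azimuth_threshold:
            continue  # step 612: not in the vicinity of the new sound activity
        if abs(lobe['elevation'] - activity['elevation']) > elevation_threshold:
            continue  # step 614: not in the vicinity of the new sound activity
        return lobe   # step 618: this lobe is in the vicinity
    return None       # step 616: no lobes in the vicinity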
[0091] FIG. 7 is an exemplary depiction of an array microphone 700 that can automatically focus previously placed beamformed lobes within associated lobe regions in response to the detection of new sound activity. In embodiments, the array microphone 700 may include some or all of the same components as the array microphone 100 described above, e.g., the audio activity localizer 150, the lobe auto-focuser 160, the beamformer 170, and/or the database 180. Each lobe of the array microphone 700 may be moveable within its associated lobe region, and a lobe may not cross the boundaries between the lobe regions. It should be noted that while FIG. 7 depicts eight lobes with eight associated lobe regions, any number of lobes and associated lobe regions is possible and contemplated, such as the four lobes with four associated lobe regions depicted in FIGs. 10, 12, 13, and 15. It should also be noted that FIGs. 7, 10, 12, 13, and 15 are depicted as two-dimensional representations of the three-dimensional space around an array microphone.
[0092] At least two sets of coordinates may be associated with each lobe of the array microphone 700: (1) original or initial coordinates LOi (e.g., that are configured automatically or manually at the time of set up of the array microphone 700), and (2) current coordinates where a lobe is currently pointing at a given time. The sets of coordinates may indicate the position of the center of a lobe, in some embodiments. The sets of coordinates may be stored in the database 180, in some embodiments.
[0093] In addition, each lobe of the array microphone 700 may be associated with a lobe region of three-dimensional space around it. In embodiments, a lobe region may be defined as a set of points in space that is closer to the initial coordinates LOi of a lobe than to the coordinates of any other lobe of the array microphone. In other words, if p is defined as a point in space, then the point p may belong to a particular lobe region LRi if the distance D between the point p and the center of lobe i (LOi) is smaller than the distance to any other lobe, as in the following: p ∈ LRi iff i = argmin over 1 ≤ i ≤ N of D(p, LOi). Regions that are defined in this fashion are known as Voronoi regions or Voronoi cells. For example, it can be seen in FIG. 7 that there are eight lobes with associated lobe regions that have boundaries depicted between each of the lobe regions. The boundaries between the lobe regions are the sets of points in space that are equally distant from two or more adjacent lobes. It is also possible that some sides of a lobe region may be unbounded. In embodiments, the distance D may be the Euclidean distance between point p and LOi, e.g., √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²). In some embodiments, the lobe regions may be recalculated as particular lobes are moved.
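By way of illustration only, the assignment of a point in space to a lobe region may be sketched as the nearest-lobe (Voronoi) selection below; the function name and data layout are assumptions for illustration, and the same kind of nearest-lobe search may also serve to identify the lobe region at step 806 of FIG. 8 described later.

import math

def lobe_region_of(point, lobe_origins):
    """Return the index i of the lobe region LRi containing the point, i.e., the
    index of the initial lobe coordinates LOi closest to the point (a Voronoi
    assignment; ties on a boundary resolve to the lowest index)."""
    def euclidean(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(range(len(lobe_origins)),
               key=lambda i: euclidean(point, lobe_origins[i]))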
[0094] In embodiments, the lobe regions may be calculated and/or updated based on sensing the environment (e.g., objects, walls, persons, etc.) that the array microphone 700 is situated in using infrared sensors, visual sensors, and/or other suitable sensors. For example, information from a sensor may be used by the array microphone 700 to set the approximate boundaries for lobe regions, which in turn can be used to place the associated lobes. In further embodiments, the lobe regions may be calculated and/or updated based on a user defining the lobe regions, such as through a graphical user interface of the array microphone 700.
[0095] As further shown in FIG. 7, there may be various parameters associated with each lobe that can restrict its movement during the automatic focusing process, as described below. One parameter is a look radius of a lobe, which is a three-dimensional region of space around the initial coordinates LOi of the lobe where new sound activity can be considered. In other words, if new sound activity is detected in a lobe region but is outside the look radius of the lobe, then there would be no movement or automatic focusing of the lobe in response to the detection of the new sound activity. Points that are outside of the look radius of a lobe can therefore be considered as an ignore or "don't care" portion of the associated lobe region. For example, in FIG. 7, the point denoted as A is outside the look radius of lobe 5 and its associated lobe region 5, so any new sound activity at point A would not cause the lobe to be moved. Conversely, if new sound activity is detected in a particular lobe region and is inside the look radius of its lobe, then the lobe may be automatically moved and focused in response to the detection of the new sound activity.

[0096] Another parameter is a move radius of a lobe, which is a maximum distance in space that the lobe is allowed to move. The move radius of a lobe is generally less than the look radius of the lobe, and may be set to prevent the lobe from moving too far away from the array microphone or too far away from the initial coordinates LOi of the lobe. For example, in FIG. 7, the point denoted as B is both within the look radius and the move radius of lobe 5 and its associated lobe region 5. If new sound activity is detected at point B, then lobe 5 could be moved to point B. As another example, in FIG. 7, the point denoted as C is within the look radius of lobe 5 but outside the move radius of lobe 5 and its associated lobe region 5. If new sound activity is detected at point C, then the maximum distance that lobe 5 could be moved is limited to the move radius.
[0097] A further parameter is a boundary cushion of a lobe, which is a maximum distance in space that the lobe is allowed to move towards a neighboring lobe region and towards the boundary between the lobe regions. For example, in FIG. 7, the point denoted as D is outside of the boundary cushion of lobe 8 and its associated lobe region 8 (that is adjacent to lobe region 7). The boundary cushions of the lobes may be set to minimize the overlap of adjacent lobes. In FIGs. 7, 10, 12, 13, and 15, the boundaries between lobe regions are denoted by a dashed line, and the boundary cushions for each lobe region are denoted by dash-dot lines that are parallel to the boundaries.
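By way of illustration only, the per-lobe parameters described above may be collected in a simple record such as the following sketch, to which later sketches in this description refer back; the field names and default values are purely hypothetical.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class LobeConfig:
    # Initial (original) coordinates LOi of the lobe, e.g., in meters (x, y, z).
    initial_coordinates: Tuple[float, float, float]
    # Current coordinates where the lobe is pointing at a given time.
    current_coordinates: Tuple[float, float, float]
    # Look radius: space around LOi within which new sound activity is considered at all.
    look_radius: float = 1.5
    # Move radius: maximum distance the lobe is allowed to move away from LOi.
    move_radius: float = 1.0
    # Boundary cushion parameter Ai (0 < Ai < 1) toward neighboring lobe regions.
    boundary_cushion: float = 0.8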
[0098] An embodiment of a process 800 for automatic focusing of previously placed beamformed lobes of the array microphone 700 within associated lobe regions is shown in FIG. 8. The process 800 may be performed by the lobe auto-focuser 160 so that the array microphone 700 can output one or more audio signals 180 from the array microphone 700, where the audio signals 180 may include sound picked up by the beamformed lobes that are focused on new sound activity of an audio source. One or more processors and/or other processing components (e.g., analog to digital converters, encryption chips, etc.) within or external to the array microphone 700 may perform any, some, or all of the steps of the process 800. One or more other types of components (e.g., memory, input and/or output devices, transmitters, receivers, buffers, drivers, discrete components, etc.) may also be utilized in conjunction with the processors and/or other processing components to perform any, some, or all of the steps of the process 800.
[0099] Step 802 of the process 800 for the lobe auto-focuser 160 may be substantially the same as step 202 of the process 200 of FIG. 2 described above. In particular, the coordinates and a confidence score corresponding to new sound activity may be received at the lobe auto-focuser 160 from the audio activity localizer 150 at step 802. In embodiments, other suitable metrics related to the new sound activity may be received and utilized at step 802. At step 804, the lobe auto-focuser 160 may compare the confidence score of the new sound activity to a predetermined threshold to determine whether the new confidence score is satisfactory. If the lobe auto-focuser 160 determines at step 804 that the confidence score of the new sound activity is less than the predetermined threshold (i.e., that the confidence score is not satisfactory), then the process 800 may end at step 820 and the locations of the lobes of the array microphone 700 are not updated. However, if the lobe auto-focuser 160 determines at step 804 that the confidence score of the new sound activity is greater than or equal to the predetermined threshold (i.e., that the confidence score is satisfactory), then the process 800 may continue to step 806.
[00100] At step 806, the lobe auto-focuser 160 may identify the lobe region that the new sound activity is within, i.e., the lobe region which the new sound activity belongs to. In embodiments, the lobe auto-focuser 160 may find the lobe closest to the coordinates of the new sound activity in order to identify the lobe region at step 806. For example, the lobe region may be identified by finding the initial coordinates LOi of a lobe that are closest to the new sound activity, such as by finding an index i of a lobe such that the distance between the coordinates s of the new sound activity and the initial coordinates LOi of a lobe is minimized: i = argmin over 1 ≤ i ≤ N of D(s, LOi). The lobe and its associated lobe region that contain the new sound activity may be determined as the lobe and lobe region identified at step 806.
[00101] After the lobe region has been identified at step 806, the lobe auto-focuser 160 may determine whether the coordinates of the new sound activity are outside a look radius of the lobe at step 808. If the lobe auto-focuser 160 determines that the coordinates of the new sound activity are outside the look radius of the lobe at step 808, then the process 800 may end at step 820 and the locations of the lobes of the array microphone 700 are not updated. In other words, if the new sound activity is outside the look radius of the lobe, then the new sound activity can be ignored and it may be considered that the new sound activity is outside the coverage of the lobe. As an example, point A in FIG. 7 is within lobe region 5 that is associated with lobe 5, but is outside the look radius of lobe 5. Details of determining whether the coordinates of the new sound activity are outside the look radius of a lobe are described below with respect to FIGs. 9 and 10.
[00102] However, if at step 808 the lobe auto-focuser 160 determines that the coordinates of the new sound activity are not outside (i.e., are inside) the look radius of the lobe, then the process 800 may continue to step 810. In this scenario, the lobe may be moved towards the new sound activity contingent on assessing the coordinates of the new sound activity with respect to other parameters such as a move radius and a boundary cushion, as described below. At step 810, the lobe auto-focuser 160 may determine whether the coordinates of the new sound activity are outside a move radius of the lobe. If the lobe auto-focuser 160 determines that the coordinates of the new sound activity are outside the move radius of the lobe at step 810, then the process 800 may continue to step 816 where the movement of the lobe may be limited or restricted. In particular, at step 816, the new coordinates where the lobe may be provisionally moved to can be set to no more than the move radius. The new coordinates may be provisional because the movement of the lobe may still be assessed with respect to the boundary cushion parameter, as described below. In embodiments, the movement of the lobe at step 816 may be restricted based on a scaling factor α (where 0 < α < 1), in order to prevent the lobe from moving too far from its initial coordinates LOi. As an example, point C in FIG. 7 is outside the move radius of lobe 5, so the farthest distance that lobe 5 could be moved is the move radius. After step 816, the process 800 may continue to step 812. Details of limiting the movement of a lobe to within its move radius are described below with respect to FIGs. 11 and 12.
[00103] The process 800 may also continue to step 812 if at step 810 the lobe auto-focuser 160 determines that the coordinates of the new sound activity are not outside (i.e., are inside) the move radius of the lobe. As an example, point B in FIG. 7 is inside the move radius of lobe 5, so lobe 5 could be moved to point B. At step 812, the lobe auto-focuser 160 may determine whether the coordinates of the new sound activity are close to a boundary cushion and are therefore too close to an adjacent lobe. If the lobe auto-focuser 160 determines that the coordinates of the new sound activity are close to a boundary cushion at step 812, then the process 800 may continue to step 818 where the movement of the lobe may be limited or restricted. In particular, at step 818, the new coordinates where the lobe may be moved to may be set to just outside the boundary cushion. In embodiments, the movement of the lobe at step 818 may be restricted based on a scaling factor β (where 0 < β < 1). As an example, point D in FIG. 7 is outside the boundary cushion between adjacent lobe region 8 and lobe region 7. The process 800 may continue to step 814 following step 818. Details regarding the boundary cushion are described below with respect to FIGs. 13-15.
[00104] The process 800 may also continue to step 814 if at step 812 the lobe auto-focuser 160 determines that the coordinates of the new sound activity are not close to a boundary cushion. At step 814, the lobe auto-focuser 160 may transmit the new coordinates of the lobe to the beamformer 170 so that the beamformer 170 can update the location of the existing lobe to the new coordinates. In embodiments, the new coordinates Li of the lobe may be defined as Li = LOi + min(α, β)·M = LOi + Mr, where M is a motion vector and Mr is a restricted motion vector, as described in more detail below. In embodiments, the lobe auto-focuser 160 may store the new coordinates of the lobe in the database 180.
[00105] Depending on the steps of the process 800 described above, when a lobe is moved due to the detection of new sound activity, the new coordinates of the lobe may be: (1) the coordinates of the new sound activity, if the coordinates of the new sound activity are within the look radius of the lobe, within the move radius of the lobe, and not close to the boundary cushion of the associated lobe region; (2) a point in the direction of the motion vector towards the new sound activity and limited to the range of the move radius, if the coordinates of the new sound activity are within the look radius of the lobe, outside the move radius of the lobe, and not close to the boundary cushion of the associated lobe region; or (3) just outside the boundary cushion, if the coordinates of the new sound activity are within the look radius of the lobe and close to the boundary cushion.
[00106] The process 800 may be continuously performed by the array microphone 700 as the audio activity localizer 150 finds new sound activity and provides the coordinates and confidence score of the new sound activity to the lobe auto-focuser 160. For example, the process 800 may be performed as audio sources, e.g., human speakers, are moving around a conference room so that one or more lobes can be focused on the audio sources to optimally pick up their sound.
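By way of illustration only, the overall control flow of the process 800 may be sketched as follows; the sketch is not a definitive implementation, and it reuses the hypothetical LobeConfig record introduced earlier as well as the helper functions (lobe_region_of, is_outside_look_radius, restricted_motion) sketched alongside FIGs. 7 and 9-15 elsewhere in this description.

import math

def auto_focus_within_regions(activity, confidence, lobes, confidence_threshold=0.5):
    """Sketch of the control flow of FIG. 8: returns (lobe_index, new_coordinates)
    when a lobe should be refocused toward new sound activity at coordinates
    `activity` (an (x, y, z) tuple) with the given confidence score, or None when
    no lobe should be updated.  `lobes` is a list of LobeConfig records."""
    if confidence < confidence_threshold:                          # step 804
        return None
    origins = [lobe.initial_coordinates for lobe in lobes]
    i = lobe_region_of(activity, origins)                          # step 806
    lobe = lobes[i]
    if is_outside_look_radius(activity, lobe.initial_coordinates, lobe.look_radius):
        return None                                                # step 808
    motion = [s - lo for s, lo in zip(activity, lobe.initial_coordinates)]  # M = s - LOi
    magnitude = math.sqrt(sum(m * m for m in motion))
    # Steps 810/816: scaling factor alpha limits movement to the move radius.
    alpha = 1.0 if magnitude <= lobe.move_radius else lobe.move_radius / magnitude
    # Steps 812/818: scaling factor beta keeps the lobe outside the boundary cushion.
    restricted = restricted_motion(motion, i, origins, lobe.boundary_cushion, alpha)
    new_coordinates = [lo + m for lo, m in zip(lobe.initial_coordinates, restricted)]
    return i, new_coordinates                                      # step 814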
[00107] An embodiment of a process 900 for determining whether the coordinates of new sound activity are outside the look radius of a lobe is shown in FIG. 9. The process 900 may be utilized by the lobe auto-focuser 160 at step 808 of the process 800, for example. In particular, the process 900 may begin at step 902 where a motion vector M may be computed as M = s − LOi. The motion vector M may be the vector connecting the center of the original coordinates LOi of the lobe to the coordinates s of the new sound activity. For example, as shown in FIG. 10, new sound activity S is present in lobe region 3 and the motion vector M is shown between the original coordinates LO3 of lobe 3 and the coordinates of the new sound activity S. The look radius for lobe 3 is also depicted in FIG. 10.
[00108] After computing the motion vector M at step 902, the process 900 may continue to step 904. At step 904, the lobe auto-focuser 160 may determine whether the magnitude of the motion vector M is greater than the look radius for the lobe, as in the following: |M| = √(mx² + my² + mz²) > (LookRadius)i. If the magnitude of the motion vector M is greater than the look radius for the lobe at step 904, then at step 906, the coordinates of the new sound activity may be denoted as outside the look radius for the lobe. For example, as shown in FIG. 10, because the new sound activity S is outside the look radius of lobe 3, the new sound activity S would be ignored. However, if the magnitude of the motion vector M is less than or equal to the look radius for the lobe at step 904, then at step 908, the coordinates of the new sound activity may be denoted as inside the look radius for the lobe.
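By way of illustration only, the look-radius test of FIG. 9 may be sketched as follows, under the same assumptions as the earlier sketches (three-dimensional coordinates given as simple sequences of x, y, z values); the function name is hypothetical.

import math

def is_outside_look_radius(activity, lobe_origin, look_radius):
    """Process 900 sketch: compute the motion vector M = s - LOi and compare its
    magnitude against the look radius of the lobe.  Returns True when the new
    sound activity should be ignored."""
    motion = [s - lo for s, lo in zip(activity, lobe_origin)]
    magnitude = math.sqrt(sum(m * m for m in motion))
    return magnitude > look_radius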
[00109] An embodiment of a process 1100 for limiting the movement of a lobe to within its move radius is shown in FIG. 11. The process 1100 may be utilized by the lobe auto-focuser 160 at step 816 of the process 800, for example. In particular, the process 1100 may begin at step 1102 where a motion vector M may be computed as M = s − LOi, similar to as described above with respect to step 902 of the process 900 shown in FIG. 9. For example, as shown in FIG. 12, new sound activity S is present in lobe region 3 and the motion vector M is shown between the original coordinates LO3 of lobe 3 and the coordinates of the new sound activity S. The move radius for lobe 3 is also depicted in FIG. 12.
[00110] After computing the motion vector M at step 1102, the process 1100 may continue to step 1104. At step 1104, the lobe auto-focuser 160 may determine whether the magnitude of the motion vector M is less than or equal to the move radius for the lobe, as in the following: |M| ≤ (MoveRadius)i. If the magnitude of the motion vector M is less than or equal to the move radius at step 1104, then at step 1106, the new coordinates of the lobe may be provisionally moved to the coordinates of the new sound activity. For example, as shown in FIG. 12, because the new sound activity S is inside the move radius of lobe 3, the lobe would provisionally be moved to the coordinates of the new sound activity S.
[00111] However, if the magnitude of the motion vector M is greater than the move radius at step 1104, then at step 1108, the magnitude of the motion vector M may be scaled by a scaling factor α to the maximum value of the move radius while keeping the same direction, as in the following: M = ((MoveRadius)i / |M|)·M = α·M, where the scaling factor α may be defined as α = (MoveRadius)i / |M|.
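By way of illustration only, the move-radius limiting of FIG. 11 may be sketched as follows; the scaling factor α is computed as defined above, while the helper name and return convention are assumptions.

import math

def limit_to_move_radius(activity, lobe_origin, move_radius):
    """Process 1100 sketch: return provisional new coordinates for the lobe and
    the scaling factor alpha.  When |M| exceeds the move radius, M = s - LOi is
    scaled by alpha = MoveRadius / |M| so that its direction is preserved."""
    motion = [s - lo for s, lo in zip(activity, lobe_origin)]
    magnitude = math.sqrt(sum(m * m for m in motion))
    if magnitude <= move_radius:
        alpha = 1.0                            # step 1106: move to the activity itself
    else:
        alpha = move_radius / magnitude        # step 1108: restrict to the move radius
    provisional = [lo + alpha * m for lo, m in zip(lobe_origin, motion)]
    return provisional, alpha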
[00112] FIGs. 13-15 relate to the boundary cushion of a lobe region, which is the portion of the space next to the boundary or edge of the lobe region that is adjacent to another lobe region. In particular, the boundary cushion next to the boundary between two lobes i and j may be described indirectly using a vector Δij that connects the original coordinates of the two lobes (i.e., LOi and LOj). Accordingly, such a vector can be described as: Δij = LOj − LOi. The midpoint of this vector Δij may be a point that is at the boundary between the two lobe regions. In particular, moving from the original coordinates LOi of lobe i in the direction of the vector Δij is the shortest path towards the adjacent lobe j. Furthermore, moving from the original coordinates LOi of lobe i in the direction of the vector Δij but keeping the amount of movement to half of the magnitude of the vector Δij will be the exact boundary between the two lobe regions.

[00113] Based on the above, moving from the original coordinates LOi of lobe i in the direction of the vector Δij but restricting the amount of movement based on a value A (where 0 < A < 1), i.e., to A·|Δij|/2, will be within (100 * A)% of the boundary between the lobe regions. For example, if A is 0.8 (i.e., 80%), then the new coordinates of a moved lobe would be within 80% of the boundary between lobe regions. Therefore, the value A can be utilized to create the boundary cushion between two adjacent lobe regions. In general, a larger boundary cushion can prevent a lobe from moving into another lobe region, while a smaller boundary cushion can allow a lobe to move closer to another lobe region.

[00114] In addition, it should be noted that if a lobe i is moved in a direction towards a lobe j due to the detection of new sound activity (e.g., in the direction of a motion vector M as described above), there is a component of movement in the direction of the lobe j, i.e., in the direction of the vector Δij. In order to find the component of movement in the direction of the vector Δij, the motion vector M can be projected onto the unit vector DUij = Δij/|Δij| (which has the same direction as the vector Δij with unity magnitude) to compute a projected vector PMij. As an example, FIG. 13 shows a vector Δ32 that connects lobes 3 and 2, which is also the shortest path from the center of lobe 3 towards lobe region 2. The projected vector PM32 shown in FIG. 13 is the projection of the motion vector M onto the unit vector DU32 = Δ32/|Δ32|.
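By way of illustration only, the projection described above is an ordinary scalar (dot-product) projection, which may be sketched as follows with vectors given as simple sequences of x, y, z values; the function name is an assumption.

import math

def compute_pmij(motion, lobe_origin_i, lobe_origin_j):
    """Scalar projection PMij of the motion vector M of lobe i onto the unit
    vector DUij pointing from lobe i toward lobe j."""
    delta = [oj - oi for oi, oj in zip(lobe_origin_i, lobe_origin_j)]  # Δij = LOj - LOi
    delta_norm = math.sqrt(sum(d * d for d in delta))
    unit = [d / delta_norm for d in delta]                             # DUij = Δij / |Δij|
    return sum(m * u for m, u in zip(motion, unit))                    # PMij = M · DUij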
[00115] An embodiment of a process 1400 for creating a boundary cushion of a lobe region using vector projections is shown in FIG. 14. The process 1400 may be utilized by the lobe auto-focuser 160 at step 818 of the process 800, for example. The process 1400 may result in restricting the magnitude of a motion vector M such that a lobe is not moved in the direction of any other lobe region by more than a certain percentage that characterizes the size of the boundary cushion.
[00116] Prior to performing the process 1400, a vector Δij and a unit vector DUij = Δij/|Δij| can be computed for all pairs of active lobes. As described previously, the vectors Δij may connect the original coordinates of lobes i and j. The parameter Ai (where 0 < Ai < 1) may be determined for all active lobes, which characterizes the size of the boundary cushion for each lobe region. As described previously, prior to the process 1400 being performed (i.e., prior to step 818 of the process 800), the lobe region of new sound activity may be identified (i.e., at step 806) and a motion vector may be computed (i.e., using the process 1100/step 810).

[00117] At step 1402 of the process 1400, the projected vector PMij may be computed for all lobes that are not associated with the lobe region identified for the new sound activity. The magnitude of a projected vector PMij (as described above with respect to FIG. 13) can determine the amount of movement of a lobe in the direction of a boundary between lobe regions. Such a magnitude of the projected vector PMij can be computed as a scalar, such as by a dot product of the motion vector M and the unit vector DUij = Δij/|Δij|, such that the projection PMij = Mx·DUij,x + My·DUij,y + Mz·DUij,z.
[00118] When PMij < 0, the motion vector M has a component in the opposite direction of the vector Δij. This means that movement of a lobe i would be in the direction opposite of the boundary with a lobe j. In this scenario, the boundary cushion between lobes i and j is not a concern because the movement of the lobe i would be away from the boundary with lobe j. However, when PMij > 0, the motion vector M has a component in the same direction as the direction of the vector Δij. This means that movement of a lobe i would be in the same direction as the boundary with lobe j. In this scenario, movement of the lobe i can be limited to outside the boundary cushion so that PMij ≤ Ai·|Δij|/2, where Ai (with 0 < Ai < 1) is a parameter that characterizes the boundary cushion for a lobe region associated with lobe i.
[00119] A scaling factor β may be utilized to ensure that PMij ≤ Ai·|Δij|/2. The scaling factor β may be used to scale the motion vector M and may be defined as βj = min(1, (Ai·|Δij|/2)/PMij) when PMij > 0, and βj = 1 otherwise. Accordingly, if new sound activity is detected that is outside the boundary cushion of a lobe region, then the scaling factor β may be equal to 1, which indicates that there is no scaling of the motion vector M. At step 1404, the scaling factor β may be computed for all the lobes that are not associated with the lobe region identified for the new sound activity.
[00120] At step 1406, the minimum scaling factor β can be determined that corresponds to the boundary cushion of the nearest lobe regions, as in the following: β = min over j of βj. After the minimum scaling factor β has been determined at step 1406, then at step 1408, the minimum scaling factor β may be applied to the motion vector M to determine a restricted motion vector Mr = min(α, β)·M.
[00121] For example, FIG. 15 shows new sound activity S that is present in lobe region 3, as well as a motion vector M between the initial coordinates LO3 of lobe 3 and the coordinates of the new sound activity S. Vectors Δ31, Δ32, Δ34 and projected vectors PM31, PM32, PM34 are depicted between lobe 3 and each of the other lobes that are not associated with lobe region 3 (i.e., lobes 1, 2, and 4). In particular, vectors Δ31, Δ32, Δ34 may be computed for all pairs of active lobes (i.e., lobes 1, 2, 3, and 4), and projections PM31, PM32, PM34 are computed for all lobes that are not associated with lobe region 3 (that is identified for the new sound activity S). The magnitude of the projected vectors may be utilized to compute scaling factors β, and the minimum scaling factor β may be used to scale the motion vector M. The motion vector M may therefore be restricted to outside the boundary cushion of lobe region 3 because the new sound activity S is too close to the boundary between lobe 3 and lobe 2. Based on the restricted motion vector, the coordinates of lobe 3 may be moved to a coordinate Sr that is outside the boundary cushion of lobe region 3.
[00122] The projected vector PM34 depicted in FIG. 15 is negative and the corresponding scaling factor β4 (for lobe 4) is equal to 1. The scaling factor β1 (for lobe 1) is also equal to 1 because PM31 ≤ A3·|Δ31|/2, while the scaling factor β2 (for lobe 2) is less than 1 because the new sound activity S is inside the boundary cushion between lobe region 2 and lobe region 3 (i.e., PM32 > A3·|Δ32|/2). Accordingly, the minimum scaling factor β2 may be utilized to ensure that lobe 3 moves to the coordinate Sr.
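By way of illustration only, the computation of the scaling factor β in the process 1400 and of the restricted motion vector Mr = min(α, β)·M may be sketched as follows; the closed form used for β here simply enforces the constraint PMij ≤ Ai·|Δij|/2 described above and is an assumption where the description leaves the exact formula implicit, and the function name and data layout are likewise hypothetical.

import math

def restricted_motion(motion, lobe_index, lobe_origins, cushion, alpha):
    """Process 1400 sketch: compute a scaling factor beta for every other lobe j,
    take the minimum, and return the restricted motion vector
    Mr = min(alpha, beta) * M for the lobe at lobe_index."""
    origin_i = lobe_origins[lobe_index]
    beta = 1.0
    for j, origin_j in enumerate(lobe_origins):
        if j == lobe_index:
            continue
        delta = [oj - oi for oi, oj in zip(origin_i, origin_j)]        # Δij = LOj - LOi
        delta_norm = math.sqrt(sum(d * d for d in delta))
        pm = sum(m * d / delta_norm for m, d in zip(motion, delta))    # PMij = M · DUij
        limit = cushion * delta_norm / 2.0                             # Ai * |Δij| / 2
        if pm > limit:                       # activity lies inside the boundary cushion
            beta = min(beta, limit / pm)     # beta_j < 1 pulls the lobe back outside
    scale = min(alpha, beta)
    return [scale * m for m in motion]                                 # Mr = min(alpha, beta) * M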
[00123] FIGs. 16 and 17 are schematic diagrams of array microphones 1600, 1700 that can detect sounds from audio sources at various frequencies. The array microphone 1600 of FIG. 16 can automatically focus beamformed lobes in response to the detection of sound activity, while enabling inhibition of the automatic focus of the beamformed lobes when the activity of a remote audio signal from a far end exceeds a predetermined threshold. In embodiments, the array microphone 1600 may include some or all of the same components as the array microphone 100 described above, e.g., the microphones 102, the audio activity localizer 150, the lobe auto-focuser 160, the beamformer 170, and/or the database 180. The array microphone 1600 may also include a transducer 1602, e.g., a loudspeaker, and an activity detector 1604 in communication with the lobe auto-focuser 160. The remote audio signal from the far end may be in communication with the transducer 1602 and the activity detector 1604.
[00124] The array microphone 1700 of FIG. 17 can automatically place beamformed lobes in response to the detection of sound activity, while enabling inhibition of the automatic placement of the beamformed lobes when the activity of a remote audio signal from a far end exceeds a predetermined threshold. In embodiments, the array microphone 1700 may include some or all of the same components as the array microphone 400 described above, e.g., the microphones 402, the audio activity localizer 450, the lobe auto-placer 460, the beamformer 470, and/or the database 480. The array microphone 1700 may also include a transducer 1702, e.g., a loudspeaker, and an activity detector 1704 in communication with the lobe auto-placer 460. The remote audio signal from the far end may be in communication with the transducer 1702 and the activity detector 1704.
[00125] The transducer 1602, 1702 may be utilized to play the sound of the remote audio signal in the local environment where the array microphone 1600, 1700 is located. The activity detector 1604, 1704 may detect an amount of activity in the remote audio signal. In some embodiments, the amount of activity may be measured as the energy level of the remote audio signal. In other embodiments, the amount of activity may be measured using methods in the time domain and/or frequency domain, such as by applying machine learning (e.g., using cepstrum coefficients), measuring signal non-stationarity in one or more frequency bands, and/or searching for features of desirable sound or speech.
[00126] In embodiments, the activity detector 1604, 1704 may be a voice activity detector (VAD) which can determine whether there is voice present in the remote audio signal. A VAD may be implemented, for example, by analyzing the spectral variance of the remote audio signal, using linear predictive coding, applying machine learning or deep learning techniques to detect voice, and/or using well-known techniques such as the ITU G.729 VAD, ETSI standards for VAD calculation included in the GSM specification, or long term pitch prediction.
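By way of illustration only, a very simple energy-based measure of far-end activity may be sketched as follows; a production system could instead use any of the VAD techniques listed above, and the frame size and the use of mean-square energy are arbitrary illustrative choices.

def frame_energy(samples):
    """Mean-square energy of one frame of audio samples."""
    return sum(s * s for s in samples) / max(len(samples), 1)

def detect_activity(remote_audio, frame_size=480):
    """Crude measure of far-end activity: the maximum per-frame energy of the
    remote audio signal (e.g., 10 ms frames at a 48 kHz sampling rate)."""
    energies = [frame_energy(remote_audio[i:i + frame_size])
                for i in range(0, len(remote_audio), frame_size)]
    return max(energies, default=0.0)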
[00127] Based on the detected amount of activity, automatic lobe adjustment may be performed or inhibited. Automatic lobe adjustment may include, for example, auto focusing of lobes, auto focusing of lobes within regions, and/or auto placement of lobes, as described herein. The automatic lobe adjustment may be performed when the detected activity of the remote audio signal does not exceed a predetermined threshold. Conversely, the automatic lobe adjustment may be inhibited (i.e., not be performed) when the detected activity of the remote audio signal exceeds the predetermined threshold. For example, exceeding the predetermined threshold may indicate that the remote audio signal includes voice, speech, or other sound that is preferably not to be picked up by a lobe. By inhibiting automatic lobe adjustment in this scenario, a lobe will not be focused or placed to avoid picking up sound from the remote audio signal.
[00128] In some embodiments, the activity detector 1604, 1704 may determine whether the detected amount of activity of the remote audio signal exceeds the predetermined threshold. When the detected amount of activity does not exceed the predetermined threshold, the activity detector 1604, 1704 may transmit an enable signal to the lobe auto-focuser 160 or the lobe auto-placer 460, respectively, to allow lobes to be adjusted. Additionally or alternatively, when the detected amount of activity of the remote audio signal exceeds the predetermined threshold, the activity detector 1604, 1704 may transmit a pause signal to the lobe auto-focuser 160 or the lobe auto-placer 460, respectively, to stop lobes from being adjusted.
[00129] In other embodiments, the activity detector 1604, 1704 may transmit the detected amount of activity of the remote audio signal to the lobe auto-focuser 160 or to the lobe auto-placer 460, respectively. The lobe auto-focuser 160 or the lobe auto-placer 460 may determine whether the detected amount of activity exceeds the predetermined threshold. Based on whether the detected amount of activity exceeds the predetermined threshold, the lobe auto-focuser 160 or lobe auto-placer 460 may execute or pause the adjustment of lobes.

[00130] The various components included in the array microphone 1600, 1700 may be implemented using software executable by one or more servers or computers, such as a computing device with a processor and memory, graphics processing units (GPUs), and/or by hardware (e.g., discrete logic circuits, application specific integrated circuits (ASIC), programmable gate arrays (PGA), field programmable gate arrays (FPGA), etc.).
[00131] An embodiment of a process 1800 for inhibiting automatic adjustment of beamformed lobes of an array microphone based on a remote far end audio signal is shown in FIG. 18. The process 1800 may be performed by the array microphones 1600, 1700 so that the automatic focus or the automatic placement of beamformed lobes can be performed or inhibited based on the amount of activity of a remote audio signal from a far end. One or more processors and/or other processing components (e.g., analog to digital converters, encryption chips, etc.) within or external to the array microphones 1600, 1700 may perform any, some, or all of the steps of the process 1800. One or more other types of components (e.g., memory, input and/or output devices, transmitters, receivers, buffers, drivers, discrete components, etc.) may also be utilized in conjunction with the processors and/or other processing components to perform any, some, or all of the steps of the process 1800.
[00132] At step 1802, a remote audio signal may be received at the array microphone 1600, 1700. The remote audio signal may be from a far end (e.g., a remote location), and may include sound from the far end (e.g., speech, voice, noise, etc.). The remote audio signal may be output on a transducer 1602, 1702 at step 1804, such as a loudspeaker in the local environment. Accordingly, the sound from the far end may be played in the local environment, such as during a conference call so that the local participants can hear the remote participants.

[00133] The remote audio signal may be received by an activity detector 1604, 1704, which may detect an amount of activity of the remote audio signal at step 1806. The detected amount of activity may correspond to the amount of speech, voice, noise, etc. in the remote audio signal. In embodiments, the amount of activity may be measured as the energy level of the remote audio signal. At step 1808, if the detected amount of activity of the remote audio signal does not exceed a predetermined threshold, then the process 1800 may continue to step 1810. The detected amount of activity of the remote audio signal not exceeding the predetermined threshold may indicate that there is a relatively low amount of speech, voice, noise, etc. in the remote audio signal. In embodiments, the detected amount of activity may specifically indicate the amount of voice or speech in the remote audio signal. At step 1810, lobe adjustments may be performed. Step 1810 may include, for example, the processes 200 and 300 for automatic focusing of beamformed lobes, the process 500 for automatic placement of beamformed lobes, and/or the process 800 for automatic focusing of beamformed lobes within lobe regions, as described herein. Lobe adjustments may be performed in this scenario because even though lobes may be focused or placed, there is a lower likelihood that such a lobe will pick up undesirable sound from the remote audio signal that is being output in the local environment. After step 1810, the process 1800 may return to step 1802.
[00134] However, if at step 1808 the detected amount of activity of the remote audio signal exceeds the predetermined threshold, then the process 1800 may continue to step 1812. At step 1812, no lobe adjustment may be performed, i.e., lobe adjustment may be inhibited. The detected amount of activity of the remote audio signal exceeding the predetermined threshold may indicate that there is a relatively high amount of speech, voice, noise, etc. in the remote audio signal. Inhibiting lobe adjustments from occurring in this scenario may help to ensure that a lobe is not focused or placed to pick up sound from the remote audio signal that is being output in the local environment. In some embodiments, the process 1800 may return to step 1802 after step 1812. In other embodiments, the process 1800 may wait for a certain time duration at step 1812 before returning to step 1802. Waiting for a certain time duration may allow reverberations in the local environment (e.g., caused by playing the sound of the remote audio signal) to dissipate.
[00135] The process 1800 may be continuously performed by the array microphones 1600, 1700 as the remote audio signal from the far end is received. For example, the remote audio signal may include a low amount of activity (e.g., no speech or voice) that does not exceed the predetermined threshold. In this situation, lobe adjustments may be performed. As another example, the remote audio signal may include a high amount of activity (e.g., speech or voice) that exceeds the predetermined threshold. In this situation, the performance of lobe adjustments may be inhibited. Whether lobe adjustments are performed or inhibited may therefore change as the amount of activity of the remote audio signal changes. The process 1800 may result in more optimal pick up of sound in the local environment by reducing the likelihood that sound from the far end is undesirably picked up.
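By way of illustration only, the gating behavior of the process 1800 may be sketched as the threshold comparison below; the callback names are assumptions, and the logic could equally reside in the activity detector 1604, 1704 or in the lobe auto-focuser 160/lobe auto-placer 460.

def gate_lobe_adjustment(detected_activity, threshold, perform_adjustment, inhibit_adjustment):
    """Perform automatic lobe adjustment (auto focus, auto focus within regions,
    or auto placement) only when the detected far-end activity does not exceed
    the predetermined threshold; otherwise inhibit the adjustment (steps 1808,
    1810, and 1812 of FIG. 18)."""
    if detected_activity > threshold:
        inhibit_adjustment()     # e.g., send a pause signal to the auto-focuser/auto-placer
        return False
    perform_adjustment()         # e.g., send an enable signal / run the adjustment process
    return True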
[00136] Any process descriptions or blocks in figures should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the embodiments of the invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those having ordinary skill in the art.

[00137] This disclosure is intended to explain how to fashion and use various embodiments in accordance with the technology rather than to limit the true, intended, and fair scope and spirit thereof. The foregoing description is not intended to be exhaustive or to be limited to the precise forms disclosed. Modifications or variations are possible in light of the above teachings. The embodiment(s) were chosen and described to provide the best illustration of the principle of the described technology and its practical application, and to enable one of ordinary skill in the art to utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the embodiments as determined by the appended claims, as may be amended during the pendency of this application for patent, and all equivalents thereof, when interpreted in accordance with the breadth to which they are fairly, legally and equitably entitled.

Claims

1. A method, comprising:
deploying a plurality of lobes from an array microphone in an environment;
selecting one of the plurality of lobes to move, based on location data of sound activity in the environment; and
relocating the selected lobe based on the location data of the sound activity.
2. The method of claim 1, wherein the location data of the sound activity comprises coordinates of the sound activity in the environment.
3. The method of claim 2, wherein selecting the one of the plurality of lobes is based on a proximity of the coordinates of the sound activity to the selected lobe.
4. The method of claim 1,
further comprising determining whether a metric associated with the sound activity is greater than or equal to a metric associated with the selected lobe;
wherein relocating the selected lobe comprises relocating the selected lobe based on the location of the sound activity, when it is determined that the metric associated with the sound activity is greater than or equal to a metric associated with the selected lobe.
5. The method of claim 4, wherein the metric associated with the sound activity comprises a confidence score denoting one or more of a certainty of the location data of the sound activity, or a quality of the sound activity.
6. The method of claim 4, further comprising storing the metric associated with the sound activity in a database as the metric associated with the selected lobe, when it is determined that the metric associated with the sound activity is greater than or equal to a metric associated with the selected lobe.
7. The method of claim 6, wherein determining whether the metric associated with the sound activity is greater than or equal to the metric associated with the selected lobe comprises: retrieving the metric associated with the selected lobe from the database; and
comparing the metric associated with the sound activity with the retrieved metric associated with the selected lobe.
8. The method of claim 2, wherein selecting the one of the plurality of lobes to move is based on one or more of: (1) a difference in an azimuth of the coordinates of the sound activity and an azimuth of the selected lobe, relative to an azimuth threshold, or (2) a difference in an elevation angle of the coordinates of the sound activity and an elevation angle of the selected lobe, relative to an elevation angle threshold.
9. The method of claim 8, wherein selecting the one of the plurality of lobes to move is based on a distance of the coordinates of the sound activity from the array microphone.
10. The method of claim 9, further comprising setting the azimuth threshold based on the distance of the coordinates of the sound activity from the array microphone.
11. The method of claim 8, wherein selecting the one of the plurality of lobes to move comprises selecting the selected lobe when (1) an absolute value of the difference in the azimuth of the coordinates of the sound activity and the azimuth of the selected lobe is not greater than the azimuth threshold; and (2) an absolute value of the difference in the elevation angle of the coordinates of the sound activity and the elevation angle of the selected lobe is greater than the elevation angle threshold.
12. The method of claim 4, further comprising storing the location data of the sound activity in a database as a new location of the selected lobe, when it is determined that the metric associated with the sound activity is greater than or equal to a metric associated with the selected lobe.
13. The method of claim 4:
further comprising determining a time duration since a last move of the selected lobe; wherein when the metric associated with the sound activity is greater than or equal to the metric associated with the selected lobe and the time duration exceeds a time threshold, relocating the selected lobe based on the location data of the sound activity.
14. The method of claim 1, further comprising:
evaluating and maximizing a cost functional associated with the coordinates of the sound activity; and when the cost functional associated with the coordinates of the sound activity has been maximized, relocating the selected lobe based on adjusted location data, wherein the adjusted location data comprises the location data of the sound activity that is adjusted based on the evaluation and maximization of the cost functional associated with the coordinates of the sound activity.
15. The method of claim 14, wherein the cost functional is evaluated and maximized based on one or more of the coordinates of the sound activity, a signal to noise ratio associated with the selected lobe, a gain value associated with the selected lobe, voice activity detection information associated with the sound activity, or a distance between the selected lobe and the location data of the sound activity.
16. The method of claim 14, wherein the adjusted location data is adjusted in a direction of a gradient of the cost functional.
17. The method of claim 14, wherein evaluating and maximizing the cost function comprises:
(A) moving the selected lobe based on the location data of the sound activity;
(B) evaluating the cost functional of the moved selected lobe;
(C) moving the selected lobe by a predetermined amount in each three dimensional direction;
(D) after each movement of the selected lobe at step (C), evaluating the cost functional of the selected lobe at each of the moved locations;
(E) calculating a gradient of the cost functional based on estimates of partial derivatives that are calculated based on the evaluated cost functionals at the location data of the sound activity and each of the moved locations of step (C);
(F) moving the selected lobe by a predetermined step size in the direction of the gradient;
(G) evaluating the cost functional of the selected lobe at the moved location of step (F);
(H) adjusting the predetermined step size when the cost functional of step (G) is less than the cost functional of step (B), and repeating step (F); and
(I) when the absolute value of the difference between the cost functional of step (G) and the cost functional of step (B) is less than a predetermined amount, denoting the moved location of step (F) as the adjusted location data.
18. The method of claim 17, wherein evaluating and maximizing the cost function further comprises:
when the absolute value of the difference between the cost functional of step (G) and the cost functional of step (B) is less than a predetermined amount:
dithering the existing lobe at the moved coordinates of step (F) by random amounts; and
evaluating the cost functional of the selected lobe at the dithered moved locations.
19. The method of claim 1, further comprising:
determining limited location data for movement of the selected lobe, based on the location data of the sound activity and a parameter associated with the selected lobe; and
relocating the selected lobe based on the limited location data.
20. The method of claim 19:
wherein each of the plurality of lobes is associated with one of a plurality of lobe regions; the method further comprising identifying the lobe region including the sound activity, based on the location data of the sound activity in the environment, wherein the identified lobe region is associated with the selected lobe;
wherein the parameter is further associated with the identified lobe region.
21. The method of claim 19,
wherein the parameter comprises a look radius around the selected lobe, the look radius comprising a space around the selected lobe where the sound activity can be considered; and wherein determining whether the selected lobe is near the sound activity comprises determining whether the sound activity is within the look radius, based on the location data of the sound activity.
22. The method of claim 19,
wherein the parameter comprises a move radius, the move radius comprising a maximum distance from the selected lobe that the selected lobe is permitted to move; and
wherein the limited location data comprises:
the location data of the sound activity, when the location data of the sound activity denotes that the sound activity is within the move radius; or
the move radius, when the location data of the sound activity denotes that the sound activity is outside of the move radius.
23. The method of claim 19,
wherein the parameter comprises a boundary cushion in the lobe region, the boundary cushion comprising a maximum distance from the selected lobe that the selected lobe is permitted to move towards a boundary of a neighboring lobe region; and
wherein the limited location data comprises:
the location data of the sound activity, when the location data of the sound activity denotes that the sound activity is outside of the boundary cushion; or
a location outside of the boundary cushion, when the location data of the sound activity denotes that the sound activity is within the boundary cushion.
24. The method of claim 1, further comprising:
receiving a remote audio signal from a far end;
detecting an amount of activity of the remote audio signal; and
when the amount of activity of the remote audio signal exceeds a predetermined threshold, inhibiting performance of the steps of selecting the one of the plurality of lobes to move and relocating the selected lobe.
25. An array microphone system, comprising:
a plurality of microphone elements, each of the plurality of microphone elements configured to detect sound and output an audio signal;
a beamformer in communication with the plurality of microphone elements, the beamformer configured to generate one or more beamformed signals based on the audio signals of the plurality of microphone elements, wherein the one or more beamformed signals correspond with one or more lobes each positioned at a location in an environment;
an audio activity localizer in communication with the plurality of microphone elements, the audio activity localizer configured to determine (1) coordinates of new sound activity in the environment and (2) a metric associated with the new sound activity; and
a lobe auto-focuser in communication with the audio activity localizer and the beamformer, the lobe auto-focuser configured to:
receive the coordinates of the new sound activity and the metric associated with the new sound activity;
determine whether the coordinates of the new sound activity are near an existing lobe, wherein an existing lobe comprises one of the one or more lobes;
when the coordinates of the new sound activity are determined to be near the existing lobe, determine whether the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe; and
when it is determined that the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe, transmit the coordinates of the new sound activity to the beamformer to cause the beamformer to update the location of the existing lobe to the coordinates of the new sound activity.
26. The system of claim 25:
wherein the metric comprises a confidence score of the new sound activity; and wherein the lobe auto-focuser is configured to determine whether the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe by determining whether the confidence score of the new sound activity is greater than or equal to a confidence score of the existing lobe.
27. The system of claim 26, wherein the confidence score denotes one or more of a certainty of the coordinates of the new sound activity or a quality of the new sound activity.
28. The system of claim 26, further comprising a database in communication with the lobe auto-focuser, wherein the lobe auto-focuser is further configured to store the confidence score associated with the new sound activity in the database as a new confidence score of the existing lobe, when it is determined that the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe.
29. The system of claim 28, wherein the lobe auto-focuser is configured to determine whether the confidence score of the new sound activity is greater than or equal to a confidence score of the existing lobe by:
retrieving the confidence score of the existing lobe from the database; and
comparing the confidence score associated with the new sound activity with the retrieved confidence score of the existing lobe.
30. The system of claim 25, wherein the lobe auto-focuser is configured to determine whether the coordinates of the new sound activity are near the existing lobe, based on one or more of: (1) a difference in an azimuth of the coordinates of the new sound activity and an azimuth of the location of the existing lobe, relative to an azimuth threshold, or (2) a difference in an elevation angle of the coordinates of the new sound activity and an elevation angle of the location of the existing lobe, relative to an elevation angle threshold.
31. The system of claim 30, wherein the lobe auto-focuser is configured to determine whether the coordinates of the new sound activity are near an existing lobe, based on a distance of the coordinates of the new sound activity from the system.
32. The system of claim 31, wherein the lobe auto-focuser is further configured to set the azimuth threshold based on the distance of the coordinates of the new sound activity from the system.
33. The system of claim 30, wherein the lobe auto-focuser is configured to determine that the coordinates of the new sound activity are near the existing lobe when (1) an absolute value of the difference in the azimuth of the coordinates of the new sound activity and the azimuth of the location of the existing lobe is not greater than the azimuth threshold; and (2) an absolute value of the difference in the elevation angle of the coordinates of the new sound activity and the elevation angle of the location of the existing lobe is greater than the elevation angle threshold.
34. The system of claim 25, further comprising a database in communication with the lobe auto-focuser, wherein the lobe auto-focuser is further configured to store the coordinates of the new sound activity in the database as the new location of the existing lobe, when it is determined that the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe.
35. The system of claim 25, wherein the lobe auto-focuser is further configured to: when the coordinates of the new sound activity are determined to be near the existing lobe, determine whether the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe, and based on a time duration since a last move of the existing lobe; and
when it is determined that the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe and the time duration since the last move of the existing lobe exceeds a time threshold, transmit the coordinates of the new sound activity to the beamformer to cause the beamformer to update the location of the existing lobe to the coordinates of the new sound activity.
36. The system of claim 25, wherein the lobe auto-focuser is further configured to:
when it is determined that the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe, evaluating and maximizing a cost functional associated with the coordinates of the new sound activity; and
when the cost functional associated with the coordinates of the new sound activity has been maximized, transmit adjusted coordinates of the new sound activity to the beamformer to cause the beamformer to update the location of the existing lobe to the adjusted coordinates; wherein the adjusted coordinates comprise the coordinates of the new sound activity that are adjusted based on the evaluation and maximization of the cost functional associated with the coordinates of the new sound activity.
37. The system of claim 36, wherein the cost functional is evaluated and maximized based on one or more of the coordinates of the new sound activity, a signal to noise ratio associated with the existing lobe, a gain value associated with the existing lobe, voice activity detection information associated with the new sound activity, or a distance between the location of the existing lobe and the coordinates of the new sound activity.
38. The system of claim 25, wherein the lobe auto-focuser is further configured to:
when it is determined that the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe:
determine a lobe region that the coordinates of the new sound activity is within; determine whether the coordinates of the new sound activity are near the existing lobe, based on the coordinates of the new sound activity and a parameter associated with the existing lobe and the lobe region; and
when it is determined that the coordinates of the new sound activity are near the existing lobe:
restrict the update of the location of the existing lobe to limited coordinates within the lobe region around the existing lobe, wherein the limited coordinates are based on the coordinates of the new sound activity and the parameter associated with the existing lobe and the lobe region; and transmit the limited coordinates to the beamformer to cause the beamformer to update the location of the existing lobe to the limited coordinates.
39. The system of claim 38:
wherein the parameter comprises a look radius in the lobe region around the existing lobe, the look radius comprising a space around the location of the existing lobe where the new sound activity can be considered; and
wherein the lobe auto-focuser is further configured to determine whether the coordinates of the new sound activity are near an existing lobe by determining whether the coordinates of the new sound activity are within the look radius.
40. The system of claim 38:
wherein the parameter comprises a move radius in the lobe region, the move radius comprising a maximum distance from the location of the existing lobe that the existing lobe is permitted to move; and
wherein the limited coordinates comprise:
the coordinates of the new sound activity, when the coordinates of the new sound activity are within the move radius; or
the move radius, when the coordinates of the new sound activity are outside the move radius.
41. The system of claim 38:
wherein the parameter comprises a boundary cushion in the lobe region, the boundary cushion comprising a maximum distance from the location of the existing lobe that the existing lobe is permitted to move towards a boundary of a neighboring lobe region; and
wherein the limited coordinates comprise: the coordinates of the new sound activity, when the coordinates of the new sound activity are outside of the boundary cushion; or
a location outside of the boundary cushion, when the coordinates of the new sound activity are within the boundary cushion.
42. The system of claim 25:
further comprising an activity detector in communication with a far end and the lobe auto-focuser, the activity detector configured to:
receive a remote audio signal from the far end;
detect an amount of activity of the remote audio signal; and
transmit the detected amount of activity to the lobe auto-focuser; and wherein the lobe auto-focuser is further configured to:
when the amount of activity of the remote audio signal exceeds a predetermined threshold, inhibit the lobe auto-focuser from performing the steps of determining whether the coordinates of the new sound activity are near the existing lobe, determining whether the metric associated with the new sound activity is greater than or equal to the metric associated with the existing lobe, and transmitting the coordinates of the new sound activity to the beamformer.
43. The system of claim 25:
further comprising an activity detector in communication with a far end and the lobe auto-focuser, the activity detector configured to:
receive a remote audio signal from the far end; detect an amount of activity of the remote audio signal; and
when the amount of activity of the remote audio signal exceeds a predetermined threshold, transmit a signal to the lobe auto-focuser to cause the lobe auto-focuser to stop performing the steps of determining whether the coordinates of the new sound activity are near the existing lobe, determining whether the metric associated with the new sound activity is greater than or equal to the metric associated with the existing lobe, and transmitting the coordinates of the new sound activity to the beamformer.
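Claims 42 and 43 gate the auto-focus steps on far-end activity so that lobes are not retargeted toward loudspeaker playback of remote talkers. A minimal sketch of that gating logic follows; the class name, threshold, and hold time are placeholders, not part of the claims.

```python
class FarEndActivityGate:
    """Inhibits lobe auto-focus/auto-placement while the far end is active."""

    def __init__(self, activity_threshold: float, hold_time_s: float = 1.0):
        self.activity_threshold = activity_threshold  # e.g. an RMS or VAD score (assumed)
        self.hold_time_s = hold_time_s                # keep inhibiting briefly after far-end speech stops (assumed)
        self._inhibited_until = 0.0

    def update(self, far_end_activity: float, now_s: float) -> None:
        # Activity detector output for the remote (far end) audio signal.
        if far_end_activity > self.activity_threshold:
            self._inhibited_until = now_s + self.hold_time_s

    def allow_auto_focus(self, now_s: float) -> bool:
        # While inhibited, the auto-focuser skips its nearness/metric checks
        # and does not transmit new coordinates to the beamformer.
        return now_s >= self._inhibited_until
```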
44. An array microphone system, comprising:
a plurality of microphone elements, each of the plurality of microphone elements configured to detect sound and output an audio signal;
a beamformer in communication with the plurality of microphone elements, the beamformer configured to generate one or more beamformed signals based on the audio signals of the plurality of microphone elements, wherein the one or more beamformed signals correspond with one or more lobes each positioned at a location in an environment;
an audio activity localizer in communication with the plurality of microphone elements, the audio activity localizer configured to determine (1) coordinates of new sound activity in the environment and (2) a metric associated with the new sound activity; and
a lobe auto-focuser in communication with the audio activity localizer and the beamformer, the lobe auto-focuser configured to:
receive the coordinates of the new sound activity and the metric associated with the new sound activity; determine whether the coordinates of the new sound activity are near an existing lobe, wherein an existing lobe comprises one of the one or more lobes;
when the coordinates of the new sound activity are determined to be near the existing lobe, determine whether the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe;
when it is determined that the metric associated with the new sound activity is greater than or equal to a metric associated with the existing lobe, evaluate and maximize a cost functional associated with the coordinates of the new sound activity such that the existing lobe is moved to adjusted coordinates that are in a direction of a gradient of the cost functional; and
when the cost functional associated with the coordinates of the new sound activity has been maximized, transmit the adjusted coordinates to the beamformer to cause the beamformer to update the location of the existing lobe to the adjusted coordinates.
45. The system of claim 44, wherein the cost functional is evaluated and maximized based on one or more of the coordinates of the new sound activity, a signal to noise ratio associated with the existing lobe, a gain value associated with the existing lobe, voice activity detection information associated with the new sound activity, or a distance between the location of the existing lobe and the coordinates of the new sound activity.
46. The system of claim 44, wherein the lobe auto-focuser is configured to evaluate and maximize the cost functional associated with the coordinates of the new sound activity by:
(A) moving the location of the existing lobe to the coordinates of the new sound activity;
(B) evaluating the cost functional of the existing lobe at the coordinates of the new sound activity;
(C) moving the existing lobe from the coordinates of the new sound activity by a predetermined amount in each three dimensional direction;
(D) after each movement of the existing lobe at step (C), evaluating the cost functional of the existing lobe at each of the moved coordinates;
(E) calculating the gradient of the cost functional based on estimates of partial derivatives that are calculated based on the evaluated cost functionals at the coordinates of the new sound activity and each of the moved coordinates of step (C);
(F) moving the existing lobe by a predetermined step size in the direction of the gradient;
(G) evaluating the cost functional of the existing lobe at the moved coordinates of step (F);
(H) adjusting the predetermined step size when the cost functional of step (G) is less than the cost functional of step (B), and repeating step (F); and
(I) when the absolute value of the difference between the cost functional of step (G) and the cost functional of step (B) is less than a predetermined amount, denoting the moved coordinates of step (F) as the adjusted coordinates.
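Claim 46 spells out a finite-difference gradient ascent on the cost functional (steps (A) through (I)). The Python sketch below mirrors that loop under simplifying assumptions: cost_fn is any callable cost functional, the perturbation size, step size, tolerance, and halving rule are arbitrary, and the code is illustrative rather than the claimed implementation.

```python
import numpy as np

def auto_focus_gradient_ascent(cost_fn, activity_xyz, eps=0.05, step=0.2,
                               tol=1e-3, max_iters=50):
    """Finite-difference gradient ascent roughly following steps (A)-(I) of claim 46."""
    x = np.asarray(activity_xyz, dtype=float)       # (A) move lobe to the new sound activity
    base_cost = cost_fn(x)                          # (B) evaluate the cost functional there

    for _ in range(max_iters):
        # (C)/(D) perturb the lobe by eps in each three-dimensional direction and evaluate
        grad = np.zeros(3)
        for i in range(3):
            probe = x.copy()
            probe[i] += eps
            grad[i] = (cost_fn(probe) - base_cost) / eps   # (E) partial-derivative estimates

        candidate = x + step * grad / (np.linalg.norm(grad) + 1e-12)  # (F) move along the gradient
        cand_cost = cost_fn(candidate)                                 # (G) evaluate at the moved point

        if cand_cost < base_cost:
            step *= 0.5                                # (H) shrink the step size and retry the move
            continue
        if abs(cand_cost - base_cost) < tol:
            return candidate                           # (I) converged: the adjusted coordinates
        x, base_cost = candidate, cand_cost

    return x
```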
47. The system of claim 46, wherein the lobe auto-focuser is configured to:
when the absolute value of the difference between the cost functional of step (G) and the cost functional of step (B) is less than a predetermined amount:
dither the existing lobe at the moved coordinates of step (F) by random amounts; and evaluate the cost functional of the existing lobe at the dithered moved coordinates.
48. A method, comprising:
determining whether an inactive lobe of a plurality of lobes of an array microphone in an environment is available for deployment;
when it is determined that the inactive lobe is available, locating the inactive lobe based on location data of sound activity; and
when it is determined that the inactive lobe is not available:
selecting one of the plurality of deployed lobes to move; and
relocating the selected deployed lobe based on the location data of the sound activity.
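Claim 48 captures the basic auto-placement decision: deploy an unused lobe if one exists, otherwise move an already deployed lobe. A minimal sketch of that branch is shown below; the Lobe structure and the "oldest timestamp / lowest confidence" victim selection (drawing on claims 50 through 53) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Lobe:
    coords: Optional[Tuple[float, float, float]] = None  # None => not yet deployed (inactive)
    last_update_ts: float = 0.0
    confidence: float = 0.0

def place_or_relocate(lobes: List[Lobe], activity_xyz, now_ts: float) -> Lobe:
    """Place an inactive lobe at the sound activity, or relocate a deployed one."""
    inactive = next((l for l in lobes if l.coords is None), None)
    if inactive is not None:
        target = inactive                                    # deploy an available (inactive) lobe
    else:
        # No free lobe: pick the stalest / least-confident deployed lobe to move
        # (claims 50-53 allow timestamp- or metric-based selection).
        target = min(lobes, key=lambda l: (l.last_update_ts, l.confidence))
    target.coords = tuple(activity_xyz)
    target.last_update_ts = now_ts
    return target
```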
49. The method of claim 48, wherein the location data of the sound activity comprises coordinates of the sound activity in the environment.
50. The method of claim 48, wherein selecting the one of the plurality of deployed lobes comprises selecting the one of the plurality of deployed lobes based on timestamps associated with the plurality of deployed lobes.
51. The method of claim 50, wherein the timestamps comprise a first timestamp associated with receiving the location data of the sound activity, and a second timestamp associated with the selected deployed lobe.
52. The method of claim 48, wherein selecting the one of the plurality of deployed lobes comprises selecting the one of the plurality of deployed lobes based on metrics associated with the plurality of deployed lobes.
53. The method of claim 52:
wherein the metric comprises a confidence score of the selected deployed lobe; and wherein the confidence score denotes one or more of a certainty of a location of the selected deployed lobe or a quality of sound of the selected deployed lobe.
54. The method of claim 48, further comprising:
determining whether an existing lobe of the plurality of lobes is near the sound activity, based on the location data of the sound activity; and
when it is determined that the existing lobe is not near the sound activity, performing the steps of determining whether the inactive lobe is available for deployment, locating the inactive lobe, selecting the one of the plurality of lobes to move, and relocating the selected lobe.
55. The method of claim 48, wherein the inactive lobe comprises one or more of a lobe of the plurality of lobes that is not positioned at specific coordinates in the environment, a lobe of the plurality of lobes that has not been deployed, or a lobe of the plurality of lobes that is inactive based on a metric.
56. The method of claim 49, wherein selecting the one of the plurality of deployed lobes to move is based on one or more of: (1) a difference in an azimuth of the coordinates of the sound activity and an azimuth of the selected deployed lobe, relative to an azimuth threshold, or (2) a difference in an elevation angle of the coordinates of the sound activity and an elevation angle of the selected deployed lobe, relative to an elevation angle threshold.
57. The method of claim 56, wherein selecting the one of the plurality of deployed lobes to move is based on a distance of the coordinates of the sound activity from the array microphone.
58. The method of claim 57, further comprising setting the azimuth threshold based on the distance of the coordinates of the sound activity from the array microphone.
59. The method of claim 56, wherein selecting the one of the plurality of deployed lobes to move comprises selecting the selected deployed lobe when (1) an absolute value of the difference in the azimuth of the coordinates of the sound activity and the azimuth of the selected deployed lobe is not greater than the azimuth threshold; and (2) an absolute value of the difference in the elevation angle of the coordinates of the sound activity and the elevation angle of the selected deployed lobe is greater than the elevation angle threshold.
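Claims 56 through 59 select a lobe to move by comparing azimuth and elevation-angle differences against thresholds, with the azimuth threshold optionally set from the distance of the sound activity to the array (claim 58). A sketch of that test follows; the threshold values and the distance-dependent scaling are assumptions.

```python
def should_move_lobe(activity_az_deg, activity_el_deg, activity_dist_m,
                     lobe_az_deg, lobe_el_deg,
                     base_az_threshold_deg=10.0, el_threshold_deg=15.0):
    """Return True when a deployed lobe should track the new sound activity.

    Per claim 59: the azimuth difference must NOT exceed the azimuth threshold
    (same talker direction) while the elevation-angle difference DOES exceed the
    elevation threshold (e.g., the talker stood up or sat down).
    """
    # Claim 58: set the azimuth threshold based on the source's distance from the
    # array (illustrative scaling; the actual mapping is not specified in the claims).
    az_threshold_deg = base_az_threshold_deg * max(1.0, activity_dist_m / 2.0)

    az_diff = abs(activity_az_deg - lobe_az_deg)
    az_diff = min(az_diff, 360.0 - az_diff)          # handle azimuth wrap-around
    el_diff = abs(activity_el_deg - lobe_el_deg)

    return az_diff <= az_threshold_deg and el_diff > el_threshold_deg
```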
60. The method of claim 48, further comprising storing the location data of the sound activity in a database as a new location of the selected deployed lobe.
61. The method of claim 48, further comprising:
receiving a remote audio signal from a far end;
detecting an amount of activity of the remote audio signal; and when the amount of activity of the remote audio signal exceeds a predetermined threshold, inhibiting performance of the steps of determining whether the inactive lobe is available, locating the inactive lobe, selecting the one of the plurality of deployed lobes, and relocating the selected deployed lobe.
62. An array microphone system, comprising:
a plurality of microphone elements, each of the plurality of microphone elements configured to detect sound and output an audio signal;
a beamformer in communication with the plurality of microphone elements, the beamformer configured to generate one or more beamformed signals based on the audio signals of the plurality of microphone elements, wherein the one or more beamformed signals correspond with one or more lobes each positioned at a location in an environment;
an audio activity localizer in communication with the plurality of microphone elements, the audio activity localizer configured to determine coordinates of new sound activity in the environment; and
a lobe auto-placer in communication with the audio activity localizer and the beamformer, the lobe auto-placer configured to:
receive the coordinates of the new sound activity;
determine whether the coordinates of the new sound activity are near an existing lobe, wherein an existing lobe comprises one of the one or more lobes;
when the coordinates of the new sound activity are determined to not be near the existing lobe:
determine whether an inactive lobe is available; when it is determined that the inactive lobe is available, select the inactive lobe;
when it is determined that the inactive lobe is not available, select one of the one or more lobes; and
transmit the coordinates of the new sound activity to the beamformer to cause the beamformer to update the location of the selected lobe to the coordinates of the new sound activity.
63. The system of claim 62, wherein the inactive lobe comprises one or more of a lobe of the beamformer that is not positioned at specific coordinates in the environment, a lobe of the beamformer that has not been deployed, or a lobe of the beamformer that is inactive based on a metric.
64. The system of claim 62, wherein the lobe auto-placer is configured to determine whether the coordinates of the new sound activity are near the existing lobe, based on one or more of: (1) a difference in an azimuth of the coordinates of the new sound activity and an azimuth of the location of the existing lobe, relative to an azimuth threshold, or (2) a difference in an elevation angle of the coordinates of the new sound activity and an elevation angle of the location of the existing lobe, relative to an elevation angle threshold.
65. The system of claim 64, wherein the lobe auto-placer is configured to determine whether the coordinates of the new sound activity are near an existing lobe, based on a distance of the coordinates of the new sound activity from the system.
66. The system of claim 65, wherein the lobe auto-placer is further configured to set the azimuth threshold based on the distance of the coordinates of the new sound activity from the system.
67. The system of claim 64, wherein the lobe auto-placer is configured to determine that the coordinates of the new sound activity are near the existing lobe when (1) an absolute value of the difference in the azimuth of the coordinates of the new sound activity and the azimuth of the location of the existing lobe is not greater than the azimuth threshold; and (2) an absolute value of the difference in the elevation angle of the coordinates of the new sound activity and the elevation angle of the location of the existing lobe is greater than the elevation angle threshold.
68. The system of claim 62, further comprising a database in communication with the lobe auto-placer, wherein the lobe auto-placer is further configured to store a first timestamp associated with receiving the coordinates of the new sound activity in the database.
69. The system of claim 68, wherein the lobe auto-placer is further configured to, when the coordinates of the new sound activity are determined to be near the existing lobe, update a second timestamp associated with the existing lobe in the database to the first timestamp.
70. The system of claim 68, wherein the lobe auto-placer is further configured to, when the coordinates of the new sound activity are determined to not be near the existing lobe, update a third timestamp associated with the selected lobe in the database to the first timestamp.
71. The system of claim 62, wherein the lobe auto-placer is further configured to, when the coordinates of the new sound activity are determined to not be near the existing lobe and when it is determined that the inactive lobe is not available, select the one of the one or more lobes based on a timestamp associated with the one of the one or more lobes.
72. The system of claim 62, wherein the lobe auto-placer is further configured to, when the coordinates of the new sound activity are determined to not be near the existing lobe, assign a metric associated with the selected lobe.
73. The system of claim 62, wherein the lobe auto-placer is further configured to, when the coordinates of the new sound activity are determined to not be near the existing lobe and when it is determined that the inactive lobe is not available, select the one of the one or more lobes based on a metric associated with the one of the one or more lobes.
74. The system of claim 72:
wherein the metric comprises a confidence score of the selected lobe; and
wherein the confidence score denotes one or more of a certainty of the coordinates of the selected lobe or a quality of sound of the selected lobe.
75. The system of claim 62, further comprising a database in communication with the lobe auto-placer, wherein the lobe auto-placer is further configured to store the coordinates of the new sound activity as the new location of the selected lobe, when the coordinates of the new sound activity are determined to not be near the existing lobe.
76. The system of claim 62:
further comprising an activity detector in communication with a far end and the lobe auto-placer, the activity detector configured to:
receive a remote audio signal from the far end;
detect an amount of activity of the remote audio signal; and
transmit the detected amount of activity to the lobe auto-placer; and wherein the lobe auto-placer is further configured to:
when the amount of activity of the remote audio signal exceeds a predetermined threshold, inhibit the lobe auto-placer from performing the steps of determining whether the coordinates of the new sound activity are near the existing lobe, determining whether the inactive lobe is available, selecting the inactive lobe, selecting one of the one or more lobes, and transmitting the coordinates of the new sound activity to the beamformer.
77. The system of claim 62:
further comprising an activity detector in communication with a far end and the lobe auto-placer, the activity detector configured to:
receive a remote audio signal from the far end;
detect an amount of activity of the remote audio signal; and
when the amount of activity of the remote audio signal exceeds a predetermined threshold, transmit a signal to the lobe auto-placer to cause the lobe auto-placer to stop performing the steps of determining whether the coordinates of the new sound activity are near the existing lobe, determining whether the inactive lobe is available, selecting the inactive lobe, selecting one of the one or more lobes, and transmitting the coordinates of the new sound activity to the beamformer.
78. A method, comprising:
deploying a plurality of lobes from an array microphone in an environment;
selecting one of the plurality of lobes to move, based on location data of sound activity in the environment;
determining limited location data for movement of the selected lobe, based on the location data of the sound activity and a parameter associated with the selected lobe; and
relocating the selected lobe based on the limited location data.
79. The method of claim 78:
wherein each of the plurality of lobes is associated with one of a plurality of lobe regions; the method further comprising identifying the lobe region including the sound activity, based on the location data of the sound activity in the environment, wherein the identified lobe region is associated with the selected lobe;
wherein the parameter is further associated with the identified lobe region.
80. The method of claim 78, wherein the location data of the sound activity comprises coordinates of the sound activity in the environment.
81. The method of claim 78, further comprising:
determining whether a metric associated with the sound activity is greater than or equal to a metric threshold;
when it is determined that the metric associated with the sound activity is greater than or equal to the metric threshold, performing the steps of selecting the one of the plurality of lobes to move, determining the limited location data, and relocating the selected lobe.
82. The method of claim 78, further comprising:
determining whether the selected lobe is near the sound activity, based on the location data of the sound activity and the parameter associated with the selected lobe; and
when it is determined that the selected lobe is near the sound activity, performing the steps of determining the limited location data and relocating the selected lobe.
83. The method of claim 82,
wherein the parameter comprises a look radius around the selected lobe, the look radius comprising a space around the selected lobe where the sound activity can be considered; and wherein determining whether the selected lobe is near the sound activity comprises determining whether the sound activity is within the look radius, based on the location data of the sound activity.
84. The method of claim 83, wherein determining whether the sound activity is within the look radius comprises:
computing a motion vector between the selected lobe and the sound activity; determining whether a magnitude of the motion vector is less than or equal to the look radius;
when the magnitude of the motion vector is less than or equal to the look radius, denoting that the sound activity is within the look radius; and
when the magnitude of the motion vector is greater than the look radius, denoting that the sound activity is outside the look radius.
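Claim 84 tests whether the sound activity falls inside a lobe's look radius by comparing the magnitude of the lobe-to-activity motion vector against that radius. A minimal sketch, assuming Cartesian coordinates:

```python
import numpy as np

def within_look_radius(lobe_xyz, activity_xyz, look_radius: float) -> bool:
    """True when the sound activity may be considered for this lobe (claim 84)."""
    motion_vector = np.asarray(activity_xyz, dtype=float) - np.asarray(lobe_xyz, dtype=float)
    return float(np.linalg.norm(motion_vector)) <= look_radius
```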
85. The method of claim 78,
wherein the parameter comprises a move radius, the move radius comprising a maximum distance from the selected lobe that the selected lobe is permitted to move; and
wherein the limited location data comprises:
the location data of the sound activity, when the location data of the sound activity denotes that the sound activity is within the move radius; or
the move radius, when the location data of the sound activity denotes that the sound activity is outside of the move radius.
86. The method of claim 85, wherein determining the limited location data comprises:
computing a motion vector between the selected lobe and the sound activity;
determining whether a magnitude of the motion vector is less than or equal to the move radius;
when the magnitude of the motion vector is less than or equal to the move radius, denoting that the sound activity is within the move radius; and when the magnitude of the motion vector is greater than the move radius, denoting that the sound activity is outside the move radius.
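Claims 85 and 86 limit how far the lobe actually moves: inside the move radius the lobe goes to the activity itself, outside it the move is clamped to the move radius along the motion vector. A minimal sketch, again assuming Cartesian coordinates:

```python
import numpy as np

def clamp_to_move_radius(lobe_xyz, activity_xyz, move_radius: float):
    """Return the limited location data of claims 85-86."""
    lobe = np.asarray(lobe_xyz, dtype=float)
    motion_vector = np.asarray(activity_xyz, dtype=float) - lobe
    magnitude = float(np.linalg.norm(motion_vector))
    if magnitude <= move_radius:
        return tuple(activity_xyz)                        # activity is within the move radius
    # Otherwise stop at the move radius along the direction of the motion vector.
    return tuple(lobe + motion_vector * (move_radius / magnitude))
```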
87. The method of claim 79,
wherein the parameter comprises a boundary cushion in the lobe region, the boundary cushion comprising a maximum distance from the selected lobe that the selected lobe is permitted to move towards a boundary of a neighboring lobe region; and
wherein the limited location data comprises:
the location data of the sound activity, when the location data of the sound activity denotes that the sound activity is outside of the boundary cushion; or
a location outside of the boundary cushion, when the location data of the sound activity denotes that the sound activity is within the boundary cushion.
88. The method of claim 87, wherein the boundary of the neighboring lobe region comprises a set of points that is equally distant from the selected lobe and a neighboring lobe associated with the neighboring lobe region.
89. The method of claim 87, wherein determining the limited location data comprises:
computing a motion vector between the selected lobe and the sound activity;
computing a projected vector between each of the plurality of lobes and the selected lobe; computing a scaling factor for each of the plurality of lobes except for the selected lobe; determining a minimum scaling factor of the computed scaling factors for each of the plurality of lobes except for the selected lobe; and computing a restricted motion vector by applying the minimum scaling factor to the motion vector, wherein whether the sound activity is outside of or within the boundary cushion is based on the restricted motion vector.
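Claim 89 restricts the motion vector so the lobe stops short of the boundary cushion between neighboring lobe regions, where (per claim 88) each boundary is the set of points equidistant from the selected lobe and a neighboring lobe. The sketch below scales the motion vector by the smallest per-neighbor factor; the exact scaling formula is an assumption chosen so the lobe stays on its own side of each boundary, offset by the cushion.

```python
import numpy as np

def restrict_motion_vector(selected_xyz, neighbor_xyzs, activity_xyz, cushion: float):
    """Scale the motion vector so the selected lobe stays 'cushion' short of each
    region boundary (the midplane between the selected lobe and each neighbor)."""
    sel = np.asarray(selected_xyz, dtype=float)
    motion = np.asarray(activity_xyz, dtype=float) - sel
    min_scale = 1.0
    for nbr in neighbor_xyzs:
        to_nbr = np.asarray(nbr, dtype=float) - sel
        half_gap = 0.5 * float(np.linalg.norm(to_nbr))    # distance from the lobe to the boundary
        direction = to_nbr / (2.0 * half_gap + 1e-12)      # unit normal toward the neighbor
        toward_boundary = float(direction @ motion)        # projected component of the motion vector
        if toward_boundary <= 0.0:
            continue                                       # moving away from this boundary
        allowed = max(half_gap - cushion, 0.0)
        min_scale = min(min_scale, allowed / toward_boundary)
    return sel + min_scale * motion                        # restricted motion applied to the lobe
```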
90. The method of claim 80, wherein selecting the one of the plurality of lobes to move is based on one or more of: (1) a difference in an azimuth of the coordinates of the sound activity and an azimuth of the selected lobe, relative to an azimuth threshold, or (2) a difference in an elevation angle of the coordinates of the sound activity and an elevation angle of the selected lobe, relative to an elevation angle threshold.
91. The method of claim 90, wherein selecting the one of the plurality of lobes to move is based on a distance of the coordinates of the sound activity from the array microphone.
92. The method of claim 91, further comprising setting the azimuth threshold based on the distance of the coordinates of the sound activity from the array microphone.
93. The method of claim 90, wherein selecting the one of the plurality of lobes to move comprises selecting the selected lobe when (1) an absolute value of the difference in the azimuth of the coordinates of the sound activity and the azimuth of the selected lobe is not greater than the azimuth threshold; and (2) an absolute value of the difference in the elevation angle of the coordinates of the sound activity and the elevation angle of the selected lobe is greater than the elevation angle threshold.
94. The method of claim 79, wherein each of the plurality of lobe regions comprises a space around its associated lobe that is closer to the associated lobe than to any other of the plurality of lobes.
95. The method of claim 79, further comprising calculating each of the plurality of lobe regions based on a definition received through a user interface.
96. The method of claim 79, further comprising calculating each of the plurality of lobe regions based on information from a sensor detecting the environment the array microphone is situated in.
97. The method of claim 78, further comprising:
receiving a remote audio signal from a far end;
detecting an amount of activity of the remote audio signal; and
when the amount of activity of the remote audio signal exceeds a predetermined threshold, inhibiting performance of the steps of selecting the one of the plurality of lobes to move, determining the limited location data, and relocating the selected lobe.
98. An array microphone system, comprising:
a plurality of microphone elements, each of the plurality of microphone elements configured to detect sound and output an audio signal;
a beamformer in communication with the plurality of microphone elements, the beamformer configured to generate one or more beamformed signals based on the audio signals of the plurality of microphone elements, wherein the one or more beamformed signals correspond with one or more lobes each positioned at a location in an environment;
an audio activity localizer in communication with the plurality of microphone elements, the audio activity localizer configured to determine (1) coordinates of new sound activity in the environment and (2) a metric associated with the new sound activity; and
a lobe auto-focuser in communication with the audio activity localizer and the beamformer, the lobe auto-focuser configured to:
receive the coordinates of the new sound activity and the metric associated with the new sound activity;
determine whether the metric associated with the new sound activity is greater than or equal to a metric threshold;
when it is determined that the metric associated with the new sound activity is greater than or equal to the metric threshold:
determine a lobe region that the coordinates of the new sound activity are within, wherein the lobe region comprises an existing lobe and the existing lobe comprises one of the one or more lobes;
determine whether the coordinates of the new sound activity are near the existing lobe, based on the coordinates of the new sound activity and a parameter associated with the existing lobe and the lobe region;
when it is determined that the coordinates of the new sound activity are near the existing lobe:
restrict the update of the location of the existing lobe to limited coordinates within the lobe region around the existing lobe, wherein the limited coordinates are based on the coordinates of the new sound activity and the parameter associated with the existing lobe and the lobe region; and
transmit the limited coordinates to the beamformer to cause the beamformer to update the location of the existing lobe to the limited coordinates.
99. The system of claim 98:
wherein the parameter comprises a look radius in the lobe region around the existing lobe, the look radius comprising a space around the location of the existing lobe where the new sound activity can be considered; and
wherein the lobe auto-focuser is further configured to determine whether the coordinates of the new sound activity are near an existing lobe by determining whether the coordinates of the new sound activity are within the look radius.
100. The system of claim 99, wherein the lobe auto-focuser is configured to determine whether the coordinates of the new sound activity are within the look radius by:
computing a motion vector between the location of the existing lobe and the coordinates of the new sound activity;
determining whether a magnitude of the motion vector is less than or equal to the look radius;
when the magnitude of the motion vector is less than or equal to the look radius, denoting that the coordinates of the new sound activity are within the look radius; and when the magnitude of the motion vector is greater than the look radius, denoting that the coordinates of the new sound activity are outside the look radius.
101. The system of claim 98:
wherein the parameter comprises a move radius in the lobe region, the move radius comprising a maximum distance from the location of the existing lobe that the existing lobe is permitted to move; and
wherein the limited coordinates comprise:
the coordinates of the new sound activity, when the coordinates of the new sound activity are within the move radius; or
the move radius, when the coordinates of the new sound activity are outside the move radius.
102. The system of claim 101, wherein the lobe auto-focuser is configured to restrict the update of the location of the existing lobe to the limited coordinates by:
computing a motion vector between the location of the existing lobe and the coordinates of the new sound activity;
determining whether a magnitude of the motion vector is less than or equal to the move radius;
when the magnitude of the motion vector is less than or equal to the move radius, denoting that the coordinates of the new sound activity are within the move radius; and
when the magnitude of the motion vector is greater than the move radius, denoting that the coordinates of the new sound activity are outside the move radius.
103. The system of claim 98:
wherein the parameter comprises a boundary cushion in the lobe region, the boundary cushion comprising a maximum distance from the location of the existing lobe that the existing lobe is permitted to move towards a boundary of a neighboring lobe region; and
wherein the limited coordinates comprise:
the coordinates of the new sound activity, when the coordinates of the new sound activity are outside of the boundary cushion; or
a location outside of the boundary cushion, when the coordinates of the new sound activity are within the boundary cushion.
104. The system of claim 103, wherein the boundary of the neighboring lobe region comprises a set of points that is equally distant from the location of the existing lobe and a location of a neighboring lobe associated with the neighboring lobe region.
105. The system of claim 103, wherein the lobe auto-focuser is configured to restrict the update of the location of the existing lobe to the limited coordinates by:
computing a motion vector between the location of the existing lobe and the coordinates of the new sound activity;
computing a projected vector between the location of each of the one or more lobes and the location of the existing lobe;
computing a scaling factor for each of the one or more lobes except for the existing lobe; determining a minimum scaling factor of the computed scaling factors for each of the one or more lobes except for the existing lobe; and
computing a restricted motion vector by applying the minimum scaling factor to the motion vector, wherein whether the coordinates of the new sound activity are outside of or within the boundary cushion is based on the restricted motion vector.
106. The system of claim 98:
wherein the metric comprises a confidence score of the new sound activity; and wherein the lobe auto-focuser is configured to determine whether the metric associated with the new sound activity is greater than or equal to the metric threshold by determining whether the confidence score of the new sound activity is greater than or equal to the metric threshold.
107. The system of claim 106, wherein the confidence score denotes one or more of a certainty of the coordinates of the new sound activity or a quality of the new sound activity.
108. The system of claim 98, wherein the lobe auto-focuser is configured to determine the lobe region that the coordinates of the new sound activity are within by determining the existing lobe that is near the new sound activity and that is associated with the lobe region, based on one or more of: (1) a difference in an azimuth of the coordinates of the new sound activity and an azimuth of the location of the existing lobe, relative to an azimuth threshold, or (2) a difference in an elevation angle of the coordinates of the new sound activity and an elevation angle of the location of the existing lobe, relative to an elevation angle threshold.
109. The system of claim 108, wherein the lobe auto-focuser is configured to determine the lobe region that the coordinates of the new sound activity are within by determining the existing lobe that is near the new sound activity and that is associated with the lobe region, based on a distance of the coordinates of the new sound activity from the system.
110. The system of claim 109, wherein the lobe auto-focuser is further configured to set the azimuth threshold based on the distance of the coordinates of the new sound activity from the system.
111. The system of claim 108, wherein the lobe auto-focuser is configured to determine the lobe region that the coordinates of the new sound activity are within by determining the existing lobe that is near the new sound activity and that is associated with the lobe region, based on a determination that (1) an absolute value of the difference in the azimuth of the coordinates of the new sound activity and the azimuth of the location of the existing lobe is not greater than the azimuth threshold; and (2) an absolute value of the difference in the elevation angle of the coordinates of the new sound activity and the elevation angle of the location of the existing lobe is greater than the elevation angle threshold.
112. The system of claim 98, wherein the lobe region comprises a space around the location of the existing lobe that is closer to the existing lobe than to the location of any other of the one or more lobes.
113. The system of claim 98, wherein the lobe auto-focuser is further configured to calculate the lobe region associated with each of the one or more lobes based on a definition received through a user interface.
114. The system of claim 98, wherein the lobe auto-focuser is further configured to calculate the lobe region associated with each of the one or more lobes based on information from a sensor detecting the environment the array microphone is situated in.
115. The system of claim 98:
further comprising an activity detector in communication with a far end and the lobe auto-focuser, the activity detector configured to:
receive a remote audio signal from the far end;
detect an amount of activity of the remote audio signal; and
transmit the detected amount of activity to the lobe auto-focuser; and wherein the lobe auto-focuser is further configured to:
when the amount of activity of the remote audio signal exceeds a predetermined threshold, inhibit the lobe auto-focuser from performing the steps of determining whether the metric associated with the new sound activity is greater than or equal to the metric threshold, determining the lobe region that the coordinates of the new sound activity are within, determining whether the coordinates of the new sound activity are near the existing lobe, restricting the update of the location of the existing lobe to limited coordinates within the lobe region around the existing lobe, and transmitting the coordinates of the new sound activity to the beamformer.
116. The system of claim 98:
further comprising an activity detector in communication with a far end and the lobe auto-focuser, the activity detector configured to:
receive a remote audio signal from the far end;
detect an amount of activity of the remote audio signal; and
when the amount of activity of the remote audio signal exceeds a predetermined threshold, transmit a signal to the lobe auto-focuser to cause the lobe auto-focuser to stop performing the steps of determining whether the metric associated with the new sound activity is greater than or equal to the metric threshold, determining the lobe region that the coordinates of the new sound activity are within, determining whether the coordinates of the new sound activity are near the existing lobe, restricting the update of the location of the existing lobe to limited coordinates within the lobe region around the existing lobe, and transmitting the coordinates of the new sound activity to the beamformer.
117. An array microphone system, comprising:
a plurality of microphone elements, each of the plurality of microphone elements configured to detect sound and output an audio signal;
a beamformer in communication with the plurality of microphone elements, the beamformer configured to generate one or more beamformed signals based on the audio signals of the plurality of microphone elements, wherein the one or more beamformed signals correspond with one or more lobes each positioned at a location in an environment; an audio activity localizer in communication with the plurality of microphone elements, the audio activity localizer configured to determine (1) coordinates of new sound activity in the environment and (2) a confidence score associated with the new sound activity; and
a lobe auto-focuser in communication with the audio activity localizer and the beamformer, the lobe auto-focuser configured to:
receive the coordinates of the new sound activity and the confidence score associated with the new sound activity;
determine whether the confidence score of the new sound activity is greater than or equal to a threshold;
when it is determined that the confidence score of the new sound activity is greater than or equal to the threshold:
determine a lobe region that the coordinates of the new sound activity are within, wherein the lobe region comprises an existing lobe and the existing lobe comprises one of the one or more lobes;
determine whether the coordinates of the new sound activity are within a look radius of the location of the existing lobe in the lobe region, wherein the look radius comprises a space around the location of the existing lobe where the new sound activity can be considered;
when it is determined that the coordinates of the new sound activity are within the look radius of the location of the existing lobe, determine whether the coordinates of the new sound activity are within a move radius of the location of the existing lobe within the lobe region, wherein the move radius comprises a maximum distance from the location of the existing lobe that the existing lobe is permitted to move;
when it is determined that the coordinates of the new sound activity are within the move radius of the location of the existing lobe within the lobe region:
determine whether the coordinates of the new sound activity are close to a boundary cushion in the lobe region, wherein the boundary cushion comprises a maximum distance from the location of the existing lobe that the existing lobe is allowed to move towards a neighboring lobe region;
when it is determined that the coordinates of the new sound activity are close to the boundary cushion:
modify the coordinates of the new sound activity to first modified coordinates that are outside the boundary cushion; and transmit the first modified coordinates to the beamformer to cause the beamformer to update the location of the existing lobe to the first modified coordinates; and
when it is determined that the coordinates of the new sound activity are not close to the boundary cushion, transmit the coordinates of the new sound activity to the beamformer to update the location of the existing lobe to the coordinates of the new sound activity;
when it is determined that the coordinates of the new sound activity are not within the move radius of the location of the existing lobe within the lobe region: modify the coordinates of the new sound activity to second modified coordinates within the move radius;
determine whether the second modified coordinates are close to the boundary cushion of the lobe region;
when it is determined that the second modified coordinates are close to the boundary cushion:
modify the second modified coordinates to third modified coordinates that are outside the boundary cushion; and
transmit the third modified coordinates to the beamformer to cause the beamformer to update the location of the existing lobe to the third modified coordinates; and
when it is determined that the second modified coordinates are not close to the boundary cushion, transmit the second modified coordinates to the beamformer to update the location of the existing lobe to the second modified coordinates.
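Claim 117 strings the per-region checks together into a single decision flow: confidence gate, region lookup, look-radius gate, move-radius clamp, and boundary-cushion adjustment before any coordinates are sent to the beamformer. The sketch below composes the hedged helper sketches given above into that flow; the function and parameter names are illustrative and the region lookup is assumed to have already identified the existing lobe and its neighbors.

```python
def auto_focus_within_region(existing_lobe_xyz, neighbor_lobe_xyzs, activity_xyz,
                             confidence, confidence_threshold,
                             look_radius, move_radius, cushion,
                             send_to_beamformer):
    """Decision flow loosely following claim 117, built from the sketches above."""
    if confidence < confidence_threshold:
        return None                                          # ignore low-confidence activity
    if not within_look_radius(existing_lobe_xyz, activity_xyz, look_radius):
        return None                                          # activity is outside this lobe's look radius
    # Clamp the move to the move radius, then keep it clear of the boundary cushion.
    target = clamp_to_move_radius(existing_lobe_xyz, activity_xyz, move_radius)
    target = restrict_motion_vector(existing_lobe_xyz, neighbor_lobe_xyzs, target, cushion)
    send_to_beamformer(tuple(target))                        # update the lobe location
    return tuple(target)
```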
US5204907A (en)1991-05-281993-04-20Motorola, Inc.Noise cancelling microphone and boot mounting arrangement
US5353279A (en)1991-08-291994-10-04Nec CorporationEcho canceler
USD345346S (en)1991-10-181994-03-22International Business Machines Corp.Pen-based computer
US5189701A (en)1991-10-251993-02-23Micom Communications Corp.Voice coder/decoder and methods of coding/decoding
USD340718S (en)1991-12-201993-10-26Square D CompanySpeaker frame assembly
US5289544A (en)1991-12-311994-02-22Audiological Engineering CorporationMethod and apparatus for reducing background noise in communication systems and for enhancing binaural hearing systems for the hearing impaired
US5322979A (en)1992-01-081994-06-21Cassity Terry ASpeaker cover assembly
JP2792311B2 (en)1992-01-311998-09-03日本電気株式会社 Method and apparatus for removing multi-channel echo
JPH05260589A (en)1992-03-101993-10-08Nippon Hoso Kyokai <Nhk>Focal point sound collection method
US5297210A (en)1992-04-101994-03-22Shure Brothers, IncorporatedMicrophone actuation control system
USD345379S (en)1992-07-061994-03-22Canadian Moulded Products Inc.Card holder
US5383293A (en)1992-08-271995-01-24Royal; John D.Picture frame arrangement
JPH06104970A (en)1992-09-181994-04-15Fujitsu Ltd Loud phone
US5307405A (en)1992-09-251994-04-26Qualcomm IncorporatedNetwork echo canceller
US5400413A (en)1992-10-091995-03-21Dana InnovationsPre-formed speaker grille cloth
IT1257164B (en)1992-10-231996-01-05Ist Trentino Di Cultura PROCEDURE FOR LOCATING A SPEAKER AND THE ACQUISITION OF A VOICE MESSAGE, AND ITS SYSTEM.
JP2508574B2 (en)1992-11-101996-06-19日本電気株式会社 Multi-channel eco-removal device
US5406638A (en)1992-11-251995-04-11Hirschhorn; Bruce D.Automated conference system
US5359374A (en)1992-12-141994-10-25Talking Frames Corp.Talking picture frames
US5335011A (en)1993-01-121994-08-02Bell Communications Research, Inc.Sound localization system for teleconferencing using self-steering microphone arrays
US5329593A (en)1993-05-101994-07-12Lazzeroni John JNoise cancelling microphone
US5555447A (en)1993-05-141996-09-10Motorola, Inc.Method and apparatus for mitigating speech loss in a communication system
JPH084243B2 (en)1993-05-311996-01-17日本電気株式会社 Method and apparatus for removing multi-channel echo
EP0707763B1 (en)1993-07-072001-08-29Picturetel CorporationReduction of background noise for speech enhancement
US5657393A (en)1993-07-301997-08-12Crow; Robert P.Beamed linear array microphone system
DE4330243A1 (en)1993-09-071995-03-09Philips Patentverwaltung Speech processing facility
US5525765A (en)1993-09-081996-06-11Wenger CorporationAcoustical virtual environment
US5664021A (en)1993-10-051997-09-02Picturetel CorporationMicrophone system for teleconferencing system
US5473701A (en)1993-11-051995-12-05At&T Corp.Adaptive microphone array
USD363045S (en)1994-03-291995-10-10Phillips Verla DWall plaque
JPH07336790A (en)1994-06-131995-12-22Nec CorpMicrophone system
US5509634A (en)1994-09-281996-04-23Femc Ltd.Self adjusting glass shelf label holder
JP3397269B2 (en)1994-10-262003-04-14日本電信電話株式会社 Multi-channel echo cancellation method
NL9401860A (en)1994-11-081996-06-03Duran Bv Loudspeaker system with controlled directivity.
US5633936A (en)1995-01-091997-05-27Texas Instruments IncorporatedMethod and apparatus for detecting a near-end speech signal
US5645257A (en)1995-03-311997-07-08Metro Industries, Inc.Adjustable support apparatus
USD382118S (en)1995-04-171997-08-12Kimberly-Clark Tissue CompanyPaper towel
US6731334B1 (en)1995-07-312004-05-04Forgent Networks, Inc.Automatic voice tracking camera system and method of operation
WO1997008896A1 (en)1995-08-231997-03-06Scientific-Atlanta, Inc.Open area security system
KR19990044330A (en)1995-09-021999-06-25헨리 에이지마 Panel Loudspeakers
US6285770B1 (en)1995-09-022001-09-04New Transducers LimitedNoticeboards incorporating loudspeakers
US6198831B1 (en)1995-09-022001-03-06New Transducers LimitedPanel-form loudspeakers
US6215881B1 (en)1995-09-022001-04-10New Transducers LimitedCeiling tile loudspeaker
CA2186416C (en)1995-09-262000-04-18Suehiro ShimauchiMethod and apparatus for multi-channel acoustic echo cancellation
US5766702A (en)1995-10-051998-06-16Lin; Chii-HsiungLaminated ornamental glass
US5768263A (en)1995-10-201998-06-16Vtel CorporationMethod for talk/listen determination and multipoint conferencing system using such method
US6125179A (en)1995-12-132000-09-263Com CorporationEcho control device with quick response to sudden echo-path change
US5612929A (en)1995-12-271997-03-18The United States Of America As Represented By The Secretary Of The NavySpectral processor and range display unit
US6144746A (en)1996-02-092000-11-07New Transducers LimitedLoudspeakers comprising panel-form acoustic radiating elements
US5888412A (en)1996-03-041999-03-30Motorola, Inc.Method for making a sculptured diaphragm
US5673327A (en)1996-03-041997-09-30Julstrom; Stephen D.Microphone mixer
US5706344A (en)1996-03-291998-01-06Digisonix, Inc.Acoustic echo cancellation in an integrated audio and telecommunication system
US5717171A (en)1996-05-091998-02-10The Solar CorporationAcoustical cabinet grille frame
US5848146A (en)1996-05-101998-12-08Rane CorporationAudio system for conferencing/presentation room
US6205224B1 (en)1996-05-172001-03-20The Boeing CompanyCircularly symmetric, zero redundancy, planar array having broad frequency range applications
US5715319A (en)1996-05-301998-02-03Picturetel CorporationMethod and apparatus for steerable and endfire superdirective microphone arrays with reduced analog-to-digital converter and computational requirements
US5796819A (en)1996-07-241998-08-18Ericsson Inc.Echo canceller for non-linear circuits
KR100212314B1 (en)1996-11-061999-08-02윤종용 Stand structure of liquid crystal display device
US5888439A (en)1996-11-141999-03-30The Solar CorporationMethod of molding an acoustical cabinet grille frame
JP3797751B2 (en)1996-11-272006-07-19富士通株式会社 Microphone system
US5878147A (en)1996-12-311999-03-02Etymotic Research, Inc.Directional microphone assembly
US6151399A (en)1996-12-312000-11-21Etymotic Research, Inc.Directional microphone system providing for ease of assembly and disassembly
US6301357B1 (en)1996-12-312001-10-09Ericsson Inc.AC-center clipper for noise and echo suppression in a communications system
US7881486B1 (en)1996-12-312011-02-01Etymotic Research, Inc.Directional microphone assembly
US5870482A (en)1997-02-251999-02-09Knowles Electronics, Inc.Miniature silicon condenser microphone
JP3175622B2 (en)1997-03-032001-06-11ヤマハ株式会社 Performance sound field control device
USD392977S (en)1997-03-111998-03-31LG Fosta Ltd.Speaker
US6041127A (en)1997-04-032000-03-21Lucent Technologies Inc.Steerable and variable first-order differential microphone array
AU6515798A (en)1997-04-161998-11-11Isight Ltd.Video teleconferencing
FR2762467B1 (en)1997-04-161999-07-02France Telecom MULTI-CHANNEL ACOUSTIC ECHO CANCELING METHOD AND MULTI-CHANNEL ACOUSTIC ECHO CANCELER
US6633647B1 (en)1997-06-302003-10-14Hewlett-Packard Development Company, L.P.Method of custom designing directional responses for a microphone of a portable computer
USD394061S (en)1997-07-011998-05-05Windsor Industries, Inc.Combined computer-style radio and alarm clock
US6137887A (en)1997-09-162000-10-24Shure IncorporatedDirectional microphone system
NL1007321C2 (en)1997-10-201999-04-21Univ Delft Tech Hearing aid to improve audibility for the hearing impaired.
US6563803B1 (en)1997-11-262003-05-13Qualcomm IncorporatedAcoustic echo canceller
US6039457A (en)1997-12-172000-03-21Intex Exhibits International, L.L.C.Light bracket
US6393129B1 (en)1998-01-072002-05-21American Technology CorporationPaper structures for speaker transducers
US6505057B1 (en)1998-01-232003-01-07Digisonix LlcIntegrated vehicle voice enhancement system and hands-free cellular telephone system
WO1999042981A1 (en)1998-02-201999-08-26Display Edge Technology Ltd.Shelf-edge display system
US6895093B1 (en)1998-03-032005-05-17Texas Instruments IncorporatedAcoustic echo-cancellation system
US6553122B1 (en)1998-03-052003-04-22Nippon Telegraph And Telephone CorporationMethod and apparatus for multi-channel acoustic echo cancellation and recording medium with the method recorded thereon
EP1070417B1 (en)1998-04-082002-09-18BRITISH TELECOMMUNICATIONS public limited companyEcho cancellation
US6173059B1 (en)1998-04-242001-01-09Gentner Communications CorporationTeleconferencing system with visual feedback
EP0993674B1 (en)1998-05-112006-08-16Philips Electronics N.V.Pitch detection
US6442272B1 (en)1998-05-262002-08-27Tellabs, Inc.Voice conferencing system having local sound amplification
US6266427B1 (en)1998-06-192001-07-24Mcdonnell Douglas CorporationDamped structural panel and method of making same
AU2004200802B2 (en)1998-07-132007-05-10Telefonaktiebolaget Lm Ericsson (Publ)Digital adaptive filter and acoustic echo canceller using the same
USD416315S (en)1998-09-011999-11-09Fujitsu General LimitedAir conditioner
USD424538S (en)1998-09-142000-05-09Fujitsu General LimitedDisplay device
US6049607A (en)1998-09-182000-04-11Lamar Signal ProcessingInterference canceling method and apparatus
US6424635B1 (en)1998-11-102002-07-23Nortel Networks LimitedAdaptive nonlinear processor for echo cancellation
US6526147B1 (en)1998-11-122003-02-25Gn Netcom A/SMicrophone array with high directivity
US7068801B1 (en)1998-12-182006-06-27National Research Council Of CanadaMicrophone array diffracting structure
KR100298300B1 (en)1998-12-292002-05-01강상훈Method for coding audio waveform by using psola by formant similarity measurement
US6507659B1 (en)1999-01-252003-01-14Cascade Audio, Inc.Microphone apparatus for producing signals for surround reproduction
US6035962A (en)1999-02-242000-03-14Lin; Chih-HsiungEasily-combinable and movable speaker case
US7423983B1 (en)1999-09-202008-09-09Broadcom CorporationVoice and data exchange over a packet based network
US7558381B1 (en)1999-04-222009-07-07Agere Systems Inc.Retrieval of deleted voice messages in voice messaging system
JP3789685B2 (en)1999-07-022006-06-28富士通株式会社 Microphone array device
US6889183B1 (en)1999-07-152005-05-03Nortel Networks LimitedApparatus and method of regenerating a lost audio segment
US20050286729A1 (en)1999-07-232005-12-29George HarwoodFlat speaker with a flat membrane diaphragm
CN100358393C (en)1999-09-292007-12-261...有限公司 Method and apparatus for directing sound
USD432518S (en)1999-10-012000-10-24Keiko MutoAudio system
US6868377B1 (en)1999-11-232005-03-15Creative Technology Ltd.Multiband phase-vocoder for the modification of audio or speech signals
US6704423B2 (en)1999-12-292004-03-09Etymotic Research, Inc.Hearing aid assembly having external directional microphone
US6449593B1 (en)2000-01-132002-09-10Nokia Mobile Phones Ltd.Method and system for tracking human speakers
US20020140633A1 (en)2000-02-032002-10-03Canesta, Inc.Method and system to present immersion virtual simulations using three-dimensional measurement
US6488367B1 (en)2000-03-142002-12-03Eastman Kodak CompanyElectroformed metal diaphragm
US6741720B1 (en)2000-04-192004-05-25Russound/Fmp, Inc.In-wall loudspeaker system
US6993126B1 (en)2000-04-282006-01-31Clearsonics Pty LtdApparatus and method for detecting far end speech
US7561700B1 (en)2000-05-112009-07-14Plantronics, Inc.Auto-adjust noise canceling microphone with position sensor
ATE370608T1 (en)2000-05-262007-09-15Koninkl Philips Electronics Nv METHOD AND DEVICE FOR ACOUSTIC ECH CANCELLATION WITH ADAPTIVE BEAM FORMATION
US6944312B2 (en)2000-06-152005-09-13Valcom, Inc.Lay-in ceiling speaker
US6329908B1 (en)2000-06-232001-12-11Armstrong World Industries, Inc.Addressable speaker system
US6622030B1 (en)2000-06-292003-09-16Ericsson Inc.Echo suppression using adaptive gain based on residual echo energy
US8019091B2 (en)2000-07-192011-09-13Aliphcom, Inc.Voice activity detector (VAD) -based multiple-microphone acoustic noise suppression
US8280072B2 (en)2003-03-272012-10-02Aliphcom, Inc.Microphone array with rear venting
USD453016S1 (en)2000-07-202002-01-22B & W Loudspeakers LimitedLoudspeaker unit
US6386315B1 (en)2000-07-282002-05-14Awi Licensing CompanyFlat panel sound radiator and assembly system
US6481173B1 (en)2000-08-172002-11-19Awi Licensing CompanyFlat panel sound radiator with special edge details
US6510919B1 (en)2000-08-302003-01-28Awi Licensing CompanyFacing system for a flat panel radiator
EP1184676B1 (en)2000-09-022004-05-06Nokia CorporationSystem and method for processing a signal being emitted from a target signal source into a noisy environment
US6968064B1 (en)2000-09-292005-11-22Forgent Networks, Inc.Adaptive thresholds in acoustic echo canceller for use during double talk
EP1330940B1 (en)2000-10-052012-03-07Etymotic Research, IncDirectional microphone assembly
GB2367730B (en)2000-10-062005-04-27Mitel CorpMethod and apparatus for minimizing far-end speech effects in hands-free telephony systems using acoustic beamforming
US6963649B2 (en)2000-10-242005-11-08Adaptive Technologies, Inc.Noise cancelling microphone
EP1202602B1 (en)2000-10-252013-05-15Panasonic CorporationZoom microphone device
US6704422B1 (en)2000-10-262004-03-09Widex A/SMethod for controlling the directionality of the sound receiving characteristic of a hearing aid a hearing aid for carrying out the method
US6757393B1 (en)2000-11-032004-06-29Marie L. SpitzerWall-hanging entertainment system
JP4110734B2 (en)2000-11-272008-07-02沖電気工業株式会社 Voice packet communication quality control device
US7092539B2 (en)2000-11-282006-08-15University Of Florida Research Foundation, Inc.MEMS based acoustic array
US7092882B2 (en)2000-12-062006-08-15Ncr CorporationNoise suppression in beam-steered microphone array
JP4734714B2 (en)2000-12-222011-07-27ヤマハ株式会社 Sound collection and reproduction method and apparatus
US6768795B2 (en)2001-01-112004-07-27Telefonaktiebolaget Lm Ericsson (Publ)Side-tone control within a telecommunication instrument
DE60142583D1 (en)2001-01-232010-08-26Koninkl Philips Electronics Nv ASYMMETRIC MULTICHANNEL FILTER
USD480923S1 (en)2001-02-202003-10-21Dester.Acs Holding B.V.Tray
US20020126861A1 (en)2001-03-122002-09-12Chester ColbyAudio expander
US20020131580A1 (en)2001-03-162002-09-19Shure IncorporatedSolid angle cross-talk cancellation for beamforming arrays
WO2002078388A2 (en)2001-03-272002-10-031... LimitedMethod and apparatus to create a sound field
JP3506138B2 (en)2001-07-112004-03-15ヤマハ株式会社 Multi-channel echo cancellation method, multi-channel audio transmission method, stereo echo canceller, stereo audio transmission device, and transfer function calculation device
WO2003010996A2 (en)2001-07-202003-02-06Koninklijke Philips Electronics N.V.Sound reinforcement system having an echo suppressor and loudspeaker beamformer
KR20040019362A (en)2001-07-202004-03-05코닌클리케 필립스 일렉트로닉스 엔.브이.Sound reinforcement system having an multi microphone echo suppressor as post processor
US7013267B1 (en)2001-07-302006-03-14Cisco Technology, Inc.Method and apparatus for reconstructing voice information
US7068796B2 (en)2001-07-312006-06-27Moorer James AUltra-directional microphones
JP3727258B2 (en)2001-08-132005-12-14富士通株式会社 Echo suppression processing system
GB2379148A (en)2001-08-212003-02-26Mitel Knowledge CorpVoice activity detection
GB0121206D0 (en)2001-08-312001-10-24Mitel Knowledge CorpSystem and method of indicating and controlling sound pickup direction and location in a teleconferencing system
US7298856B2 (en)2001-09-052007-11-20Nippon Hoso KyokaiChip microphone and method of making same
US20030059061A1 (en)2001-09-142003-03-27Sony CorporationAudio input unit, audio input method and audio input and output unit
JP2003087890A (en)2001-09-142003-03-20Sony CorpVoice input device and voice input method
USD469090S1 (en)2001-09-172003-01-21Sharp Kabushiki KaishaMonitor for a computer
JP3568922B2 (en)2001-09-202004-09-22三菱電機株式会社 Echo processing device
US7065224B2 (en)2001-09-282006-06-20Sonionmicrotronic Nederland B.V.Microphone for a hearing aid or listening device with improved internal damping and foreign material protection
US7120269B2 (en)2001-10-052006-10-10Lowell Manufacturing CompanyLay-in tile speaker system
US7239714B2 (en)2001-10-092007-07-03Sonion Nederland B.V.Microphone having a flexible printed circuit board for mounting components
GB0124352D0 (en)2001-10-112001-11-281 LtdSignal processing device for acoustic transducer array
CA2359771A1 (en)2001-10-222003-04-22Dspfactory Ltd.Low-resource real-time audio synthesis system and method
JP4282260B2 (en)2001-11-202009-06-17株式会社リコー Echo canceller
US6665971B2 (en)2001-11-272003-12-23Fast Industries, Ltd.Label holder with dust cover
AU2002365352A1 (en)2001-11-272003-06-10Corporation For National Research InitiativesA miniature condenser microphone and fabrication method therefor
US20030107478A1 (en)2001-12-062003-06-12Hendricks Richard S.Architectural sound enhancement system
US7130430B2 (en)2001-12-182006-10-31Milsap Jeffrey PPhased array sound system
US6592237B1 (en)2001-12-272003-07-15John M. PledgerPanel frame to draw air around light fixtures
US20030122777A1 (en)2001-12-312003-07-03Grover Andrew S.Method and apparatus for configuring a computer system based on user distance
WO2003061167A2 (en)2002-01-182003-07-24Polycom, Inc.Digital linking of multiple microphone systems
US8098844B2 (en)2002-02-052012-01-17Mh Acoustics, LlcDual-microphone spatial noise suppression
WO2007106399A2 (en)2006-03-102007-09-20Mh Acoustics, LlcNoise-reducing directional microphone array
US7130309B2 (en)2002-02-202006-10-31Intel CorporationCommunication device with dynamic delay compensation and method for communicating voice over a packet-switched network
US20030161485A1 (en)2002-02-272003-08-28Shure IncorporatedMultiple beam automatic mixing microphone array processing via speech detection
DE10208465A1 (en)2002-02-272003-09-18Bsh Bosch Siemens Hausgeraete Electrical device, in particular extractor hood
US20030169888A1 (en)2002-03-082003-09-11Nikolas SuboticFrequency dependent acoustic beam forming and nulling
DK174558B1 (en)2002-03-152003-06-02Bruel & Kjaer Sound & VibratioTransducers two-dimensional array, has set of sub arrays of microphones in circularly symmetric arrangement around common center, each sub-array with three microphones arranged in straight line
ITMI20020566A1 (en)2002-03-182003-09-18Daniele Ramenzoni DEVICE TO CAPTURE EVEN SMALL MOVEMENTS IN THE AIR AND IN FLUIDS SUITABLE FOR CYBERNETIC AND LABORATORY APPLICATIONS AS TRANSDUCER
US7245733B2 (en)2002-03-202007-07-17Siemens Hearing Instruments, Inc.Hearing instrument microphone arrangement with improved sensitivity
US7518737B2 (en)2002-03-292009-04-14Georgia Tech Research Corp.Displacement-measuring optical device with orifice
ITBS20020043U1 (en)2002-04-122003-10-13Flos Spa JOINT FOR THE MECHANICAL AND ELECTRICAL CONNECTION OF IN-LINE AND / OR CORNER LIGHTING EQUIPMENT
US6912178B2 (en)2002-04-152005-06-28Polycom, Inc.System and method for computing a location of an acoustic source
US20030198339A1 (en)2002-04-192003-10-23Roy Kenneth P.Enhanced sound processing system for use with sound radiators
US20030202107A1 (en)2002-04-302003-10-30Slattery E. MichaelAutomated camera view control system
US7852369B2 (en)2002-06-272010-12-14Microsoft Corp.Integrated design for omni-directional camera and microphone array
US6882971B2 (en)2002-07-182005-04-19General Instrument CorporationMethod and apparatus for improving listener differentiation of talkers during a conference call
GB2393601B (en)2002-07-192005-09-211 LtdDigital loudspeaker system
US8947347B2 (en)2003-08-272015-02-03Sony Computer Entertainment Inc.Controlling actions in a video game unit
US7050576B2 (en)2002-08-202006-05-23Texas Instruments IncorporatedDouble talk, NLP and comfort noise
DE60305716T2 (en)2002-09-172007-05-31Koninklijke Philips Electronics N.V. METHOD FOR SYNTHETIZING AN UNMATCHED LANGUAGE SIGNAL
EP1557071A4 (en)2002-10-012009-09-30Donnelly Corp MICROPHONE SYSTEM FOR A VEHICLE
US7106876B2 (en)2002-10-152006-09-12Shure IncorporatedMicrophone for simultaneous noise sensing and speech pickup
US20080056517A1 (en)2002-10-182008-03-06The Regents Of The University Of CaliforniaDynamic binaural sound capture and reproduction in focued or frontal applications
US7003099B1 (en)2002-11-152006-02-21Fortmedia, Inc.Small array microphone for acoustic echo cancellation and noise suppression
US7672445B1 (en)2002-11-152010-03-02Fortemedia, Inc.Method and system for nonlinear echo suppression
GB2395878A (en)2002-11-292004-06-02Mitel Knowledge CorpMethod of capturing constant echo path information using default coefficients
US6990193B2 (en)2002-11-292006-01-24Mitel Knowledge CorporationMethod of acoustic echo cancellation in full-duplex hands free audio conferencing with spatial directivity
US7359504B1 (en)2002-12-032008-04-15Plantronics, Inc.Method and apparatus for reducing echo and noise
GB0229059D0 (en)2002-12-122003-01-15Mitel Knowledge CorpMethod of broadband constant directivity beamforming for non linear and non axi-symmetric sensor arrays embedded in an obstacle
US7333476B2 (en)2002-12-232008-02-19Broadcom CorporationSystem and method for operating a packet voice far-end echo cancellation system
KR100480789B1 (en)2003-01-172005-04-06삼성전자주식회사Method and apparatus for adaptive beamforming using feedback structure
GB2397990A (en)2003-01-312004-08-04Mitel Networks CorpEcho cancellation/suppression and double-talk detection in communication paths
USD489707S1 (en)2003-02-172004-05-11Pioneer CorporationSpeaker
GB0304126D0 (en)2003-02-242003-03-261 LtdSound beam loudspeaker system
KR100493172B1 (en)2003-03-062005-06-02삼성전자주식회사Microphone array structure, method and apparatus for beamforming with constant directivity and method and apparatus for estimating direction of arrival, employing the same
US20040240664A1 (en)2003-03-072004-12-02Freed Evan LawrenceFull-duplex speakerphone
US7466835B2 (en)2003-03-182008-12-16Sonion A/SMiniature microphone with balanced termination
US9099094B2 (en)2003-03-272015-08-04AliphcomMicrophone array with rear venting
US6988064B2 (en)2003-03-312006-01-17Motorola, Inc.System and method for combined frequency-domain and time-domain pitch extraction for speech signals
US7643641B2 (en)2003-05-092010-01-05Nuance Communications, Inc.System for communication enhancement in a noisy environment
US8724822B2 (en)2003-05-092014-05-13Nuance Communications, Inc.Noisy environment communication enhancement system
DE60325699D1 (en)2003-05-132009-02-26Harman Becker Automotive Sys Method and system for adaptive compensation of microphone inequalities
JP2004349806A (en)2003-05-202004-12-09Nippon Telegr & Teleph Corp <Ntt> Multi-channel acoustic echo canceling method, its apparatus, its program and its recording medium
US6993145B2 (en)2003-06-262006-01-31Multi-Service CorporationSpeaker grille frame
US20050005494A1 (en)2003-07-112005-01-13Way Franklin B.Combination display frame
CA2475282A1 (en)2003-07-172005-01-17Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry Through The Communications Research CentreVolume hologram
GB0317158D0 (en)2003-07-232003-08-27Mitel Networks CorpA method to reduce acoustic coupling in audio conferencing systems
US8244536B2 (en)2003-08-272012-08-14General Motors LlcAlgorithm for intelligent speech recognition
US7412376B2 (en)2003-09-102008-08-12Microsoft CorporationSystem and method for real-time detection and preservation of speech onset in a signal
CA2452945C (en)2003-09-232016-05-10Mcmaster UniversityBinaural adaptive hearing system
US7162041B2 (en)2003-09-302007-01-09Etymotic Research, Inc.Noise canceling microphone with acoustically tuned ports
US20050213747A1 (en)2003-10-072005-09-29Vtel Products, Inc.Hybrid monaural and multichannel audio for conferencing
USD510729S1 (en)2003-10-232005-10-18Benq CorporationTV tuner box
US7190775B2 (en)2003-10-292007-03-13Broadcom CorporationHigh quality audio conferencing with adaptive beamforming
US8270585B2 (en)2003-11-042012-09-18Stmicroelectronics, Inc.System and method for an endpoint participating in and managing multipoint audio conferencing in a packet network
US8331582B2 (en)2003-12-012012-12-11Wolfson Dynamic Hearing Pty LtdMethod and apparatus for producing adaptive directional signals
EP1695453A1 (en)2003-12-102006-08-30Koninklijke Philips Electronics N.V.Echo canceller having a series arrangement of adaptive filters with individual update control strategy
KR101086398B1 (en)2003-12-242011-11-25삼성전자주식회사 Directional control capable speaker system using multiple microphones and method
US7778425B2 (en)2003-12-242010-08-17Nokia CorporationMethod for generating noise references for generalized sidelobe canceling
JP4251077B2 (en)2004-01-072009-04-08ヤマハ株式会社 Speaker device
JP2007522705A (en)2004-01-072007-08-09コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Audio distortion compression system and filter device thereof
US7387151B1 (en)2004-01-232008-06-17Payne Donald LCabinet door with changeable decorative panel
DK176894B1 (en)2004-01-292010-03-08Dpa Microphones As Microphone structure with directional effect
TWI289020B (en)2004-02-062007-10-21Fortemedia IncApparatus and method of a dual microphone communication device applied for teleconference system
US7515721B2 (en)2004-02-092009-04-07Microsoft CorporationSelf-descriptive microphone array
JP2007523792A (en)2004-02-272007-08-23ダイムラークライスラー・アクチェンゲゼルシャフト Car with microphone
EP1721312B1 (en)2004-03-012008-03-26Dolby Laboratories Licensing CorporationMultichannel audio coding
US7415117B2 (en)2004-03-022008-08-19Microsoft CorporationSystem and method for beamforming using a microphone array
US7826205B2 (en)2004-03-082010-11-02Originatic LlcElectronic device having a movable input assembly with multiple input sides
USD504889S1 (en)2004-03-172005-05-10Apple Computer, Inc.Electronic device
US7346315B2 (en)2004-03-302008-03-18Motorola IncHandheld device loudspeaker system
JP2005311988A (en)2004-04-262005-11-04Onkyo Corp Speaker system
US20050271221A1 (en)2004-05-052005-12-08Southwest Research InstituteAirborne collection of acoustic data using an unmanned aerial vehicle
JP2005323084A (en)2004-05-072005-11-17Nippon Telegr & Teleph Corp <Ntt> Acoustic echo cancellation method, acoustic echo cancellation device, acoustic echo cancellation program
US8031853B2 (en)2004-06-022011-10-04Clearone Communications, Inc.Multi-pod conference systems
US7856097B2 (en)2004-06-172010-12-21Panasonic CorporationEcho canceling apparatus, telephone set using the same, and echo canceling method
US7352858B2 (en)2004-06-302008-04-01Microsoft CorporationMulti-channel echo cancellation with round robin regularization
TWI241790B (en)2004-07-162005-10-11Ind Tech Res InstHybrid beamforming apparatus and method for the same
JP4396449B2 (en)2004-08-252010-01-13パナソニック電工株式会社 Reverberation removal method and apparatus
EP1633121B1 (en)2004-09-032008-11-05Harman Becker Automotive Systems GmbHSpeech signal processing with combined adaptive noise reduction and adaptive echo compensation
KR20070050058A (en)2004-09-072007-05-14코닌클리케 필립스 일렉트로닉스 엔.브이. Telephony Devices with Improved Noise Suppression
JP2006094389A (en)2004-09-272006-04-06Yamaha CorpIn-vehicle conversation assisting device
EP1643798B1 (en)2004-10-012012-12-05AKG Acoustics GmbHMicrophone comprising two pressure-gradient capsules
US7667728B2 (en)2004-10-152010-02-23Lifesize Communications, Inc.Video and audio conferencing system with spatial audio
US8116500B2 (en)2004-10-152012-02-14Lifesize Communications, Inc.Microphone orientation and size in a speakerphone
US7720232B2 (en)2004-10-152010-05-18Lifesize Communications, Inc.Speakerphone
US7760887B2 (en)2004-10-152010-07-20Lifesize Communications, Inc.Updating modeling information based on online data gathering
US7970151B2 (en)2004-10-152011-06-28Lifesize Communications, Inc.Hybrid beamforming
USD526643S1 (en)2004-10-192006-08-15Pioneer CorporationSpeaker
US7660428B2 (en)2004-10-252010-02-09Polycom, Inc.Ceiling microphone assembly
CN1780495A (en)2004-10-252006-05-31宝利通公司 canopy microphone assembly
WO2006049260A1 (en)2004-11-082006-05-11Nec CorporationSignal processing method, signal processing device, and signal processing program
US20060109983A1 (en)2004-11-192006-05-25Young Randall KSignal masking and method thereof
US20060147063A1 (en)2004-12-222006-07-06Broadcom CorporationEcho cancellation in telephones with multiple microphones
USD526648S1 (en)2004-12-232006-08-15Apple Computer, Inc.Computing device
NO328256B1 (en)2004-12-292010-01-18Tandberg Telecom As Audio System
KR20060081076A (en)2005-01-072006-07-12이재호 Elevator specifying floors by voice recognition
US7830862B2 (en)2005-01-072010-11-09At&T Intellectual Property Ii, L.P.System and method for modifying speech playout to compensate for transmission delay jitter in a voice over internet protocol (VoIP) network
TWD111206S1 (en)2005-01-122006-06-01聲學英國有限公司Loudspeaker
EP1681670A1 (en)2005-01-142006-07-19Dialog Semiconductor GmbHVoice activation
JP4196956B2 (en)2005-02-282008-12-17ヤマハ株式会社 Loudspeaker system
JP4258472B2 (en)2005-01-272009-04-30ヤマハ株式会社 Loudspeaker system
JP4120646B2 (en)2005-01-272008-07-16ヤマハ株式会社 Loudspeaker system
US7995768B2 (en)2005-01-272011-08-09Yamaha CorporationSound reinforcement system
JP2008532422A (en)2005-03-012008-08-14トッド・ヘンリー Electromagnetic lever diaphragm audio transducer
WO2006096959A1 (en)2005-03-162006-09-21James CoxMicrophone array and digital signal processing system
US8406435B2 (en)2005-03-182013-03-26Microsoft CorporationAudio submix management
US7522742B2 (en)2005-03-212009-04-21Speakercraft, Inc.Speaker assembly with moveable baffle
US20060222187A1 (en)2005-04-012006-10-05Scott JarrettMicrophone and sound image processing system
DE602005003643T2 (en)2005-04-012008-11-13Mitel Networks Corporation, Ottawa A method of accelerating the training of an acoustic echo canceller in a full duplex audio conference system by acoustic beamforming
USD542543S1 (en)2005-04-062007-05-15Foremost Group Inc.Mirror
CA2505496A1 (en)2005-04-272006-10-27Universite De SherbrookeRobust localization and tracking of simultaneously moving sound sources using beamforming and particle filtering
US7991167B2 (en)2005-04-292011-08-02Lifesize Communications, Inc.Forming beams with nulls directed at noise sources
ATE491503T1 (en)2005-05-052011-01-15Sony Computer Entertainment Inc VIDEO GAME CONTROL USING JOYSTICK
EP1722545B1 (en)2005-05-092008-08-13Mitel Networks CorporationA method and a system to reduce training time of an acoustic echo canceller in a full-duplex beamforming-based audio conferencing system
GB2426168B (en)2005-05-092008-08-27Sony Comp Entertainment EuropeAudio processing
JP4654777B2 (en)2005-06-032011-03-23パナソニック株式会社 Acoustic echo cancellation device
JP4735956B2 (en)2005-06-222011-07-27アイシン・エィ・ダブリュ株式会社 Multiple bolt insertion tool
DE602005003342T2 (en)2005-06-232008-09-11Akg Acoustics Gmbh Method for modeling a microphone
EP1737268B1 (en)2005-06-232012-02-08AKG Acoustics GmbHSound field microphone
US8139782B2 (en)2005-06-232012-03-20Paul HughesModular amplification system
TWD119718S1 (en)2005-06-292007-11-01新力股份有限公司 TV Receiver
JP4760160B2 (en)2005-06-292011-08-31ヤマハ株式会社 Sound collector
JP2007019907A (en)2005-07-082007-01-25Yamaha CorpSpeech transmission system, and communication conference apparatus
US8045728B2 (en)2005-07-272011-10-25Kabushiki Kaisha Audio-TechnicaConference audio system
CN101238511B (en)2005-08-112011-09-07旭化成株式会社 Sound source separation device, audio recognition device, mobile phone, sound source separation method
US7702116B2 (en)2005-08-222010-04-20Stone Christopher LMicrophone bleed simulator
JP4752403B2 (en)2005-09-062011-08-17ヤマハ株式会社 Loudspeaker system
JP4724505B2 (en)2005-09-092011-07-13株式会社日立製作所 Ultrasonic probe and manufacturing method thereof
US20080253589A1 (en)2005-09-212008-10-16Koninklijke Philips Electronics N.V.Ultrasound Imaging System with Voice Activated Controls Using Remotely Positioned Microphone
JP2007089058A (en)2005-09-262007-04-05Yamaha CorpMicrophone array controller
US7565949B2 (en)2005-09-272009-07-28Casio Computer Co., Ltd.Flat panel display module having speaker function
WO2007037700A1 (en)2005-09-302007-04-05Squarehead Technology AsDirectional audio capturing
USD546318S1 (en)2005-10-072007-07-10Koninklijke Philips Electronics N.V.Subwoofer for home theatre system
EP1775989B1 (en)2005-10-122008-12-10Yamaha CorporationSpeaker array and microphone array
US20070174047A1 (en)2005-10-182007-07-26Anderson Kyle DMethod and apparatus for resynchronizing packetized audio streams
US7970123B2 (en)2005-10-202011-06-28Mitel Networks CorporationAdaptive coupling equalization in beamforming-based communication systems
USD546814S1 (en)2005-10-242007-07-17Teac CorporationGuitar amplifier with digital audio disc player
WO2007049556A1 (en)2005-10-262007-05-03Matsushita Electric Industrial Co., Ltd.Video audio output device
CN101268715B (en)2005-11-022012-04-18雅马哈株式会社 Teleconferencing device
JP4867579B2 (en)2005-11-022012-02-01ヤマハ株式会社 Remote conference equipment
US8135143B2 (en)2005-11-152012-03-13Yamaha CorporationRemote conference apparatus and sound emitting/collecting apparatus
US20070120029A1 (en)2005-11-292007-05-31Rgb Systems, Inc.A Modular Wall Mounting Apparatus
USD552570S1 (en)2005-11-302007-10-09Sony CorporationMonitor television receiver
US20120106755A1 (en)2005-12-072012-05-03Fortemedia, Inc.Handheld electronic device with microphone array
USD547748S1 (en)2005-12-082007-07-31Sony CorporationSpeaker box
EP1965603B1 (en)2005-12-192017-01-11Yamaha CorporationSound emission and collection device
US8130977B2 (en)2005-12-272012-03-06Polycom, Inc.Cluster of first-order microphones and method of operation for stereo input of videoconferencing system
US8644477B2 (en)2006-01-312014-02-04Shure Acquisition Holdings, Inc.Digital Microphone Automixer
JP4929740B2 (en)2006-01-312012-05-09ヤマハ株式会社 Audio conferencing equipment
USD581510S1 (en)2006-02-102008-11-25American Power Conversion CorporationWiring closet ventilation unit
JP4946090B2 (en)2006-02-212012-06-06ヤマハ株式会社 Integrated sound collection and emission device
JP2007228070A (en)2006-02-212007-09-06Yamaha CorpVideo conference apparatus
US8730156B2 (en)2010-03-052014-05-20Sony Computer Entertainment America LlcMaintaining multiple views on a shared stable virtual space
JP4779748B2 (en)2006-03-272011-09-28株式会社デンソー Voice input / output device for vehicle and program for voice input / output device
JP2007274131A (en)2006-03-302007-10-18Yamaha CorpLoudspeaking system, and sound collection apparatus
JP2007274463A (en)2006-03-312007-10-18Yamaha CorpRemote conference apparatus
US8670581B2 (en)2006-04-142014-03-11Murray R. HarmanElectrostatic loudspeaker capable of dispersing sound both horizontally and vertically
EP1848243B1 (en)2006-04-182009-02-18Harman/Becker Automotive Systems GmbHMulti-channel echo compensation system and method
JP2007288679A (en)2006-04-192007-11-01Yamaha CorpSound emitting and collecting apparatus
JP4816221B2 (en)2006-04-212011-11-16ヤマハ株式会社 Sound pickup device and audio conference device
US20070253561A1 (en)2006-04-272007-11-01Tsp Systems, Inc.Systems and methods for audio enhancement
US7831035B2 (en)2006-04-282010-11-09Microsoft CorporationIntegration of a microphone array with acoustic echo cancellation and center clipping
WO2007129731A1 (en)2006-05-102007-11-15Honda Motor Co., Ltd.Sound source tracking system, method and robot
EP1855457B1 (en)2006-05-102009-07-08Harman Becker Automotive Systems GmbHMulti channel echo compensation using a decorrelation stage
US20070269066A1 (en)2006-05-192007-11-22Phonak AgMethod for manufacturing an audio signal
WO2006114015A2 (en)2006-05-192006-11-02Phonak AgMethod for manufacturing an audio signal
JP4747949B2 (en)2006-05-252011-08-17ヤマハ株式会社 Audio conferencing equipment
US8275120B2 (en)2006-05-302012-09-25Microsoft Corp.Adaptive acoustic echo cancellation
JP2008005347A (en)2006-06-232008-01-10Yamaha CorpVoice communication apparatus and composite plug
JP2008005293A (en)2006-06-232008-01-10Matsushita Electric Ind Co Ltd Echo suppression device
USD559553S1 (en)2006-06-232008-01-15Electric Mirror, L.L.C.Backlit mirror with TV
JP4984683B2 (en)2006-06-292012-07-25ヤマハ株式会社 Sound emission and collection device
US8184801B1 (en)2006-06-292012-05-22Nokia CorporationAcoustic echo cancellation for time-varying microphone array beamsteering systems
US20080008339A1 (en)2006-07-052008-01-10Ryan James GAudio processing system and method
US8189765B2 (en)2006-07-062012-05-29Panasonic CorporationMultichannel echo canceller
KR100883652B1 (en)2006-08-032009-02-18삼성전자주식회사 Speech section detection method and apparatus, and speech recognition system using same
US8213634B1 (en)2006-08-072012-07-03Daniel Technology, Inc.Modular and scalable directional audio array with novel filtering
JP4887968B2 (en)2006-08-092012-02-29ヤマハ株式会社 Audio conferencing equipment
US8280728B2 (en)2006-08-112012-10-02Broadcom CorporationPacket loss concealment for a sub-band predictive coder based on extrapolation of excitation waveform
US8346546B2 (en)2006-08-152013-01-01Broadcom CorporationPacket loss concealment based on forced waveform alignment after packet loss
US8898633B2 (en)2006-08-242014-11-25Siemens Industry, Inc.Devices, systems, and methods for configuring a programmable logic controller
USD566685S1 (en)2006-10-042008-04-15Lightspeed Technologies, Inc.Combined wireless receiver, amplifier and speaker
GB0619825D0 (en)2006-10-062006-11-15Craven Peter GMicrophone array
WO2008115284A2 (en)2006-10-162008-09-25Thx Ltd.Loudspeaker line array configurations and related sound processing
JP5028944B2 (en)2006-10-172012-09-19ヤマハ株式会社 Audio conference device and audio conference system
US8103030B2 (en)2006-10-232012-01-24Siemens Audiologische Technik GmbhDifferential directional microphone system and hearing aid device with such a differential directional microphone system
JP4928922B2 (en)2006-12-012012-05-09株式会社東芝 Information processing apparatus and program
ATE522078T1 (en)2006-12-182011-09-15Harman Becker Automotive Sys LOW COMPLEXITY ECHO COMPENSATION
CN101207468B (en)2006-12-192010-07-21华为技术有限公司 Dropped frame concealment method, system and device
JP2008154056A (en)2006-12-192008-07-03Yamaha CorpAudio conference device and audio conference system
US8335685B2 (en)2006-12-222012-12-18Qnx Software Systems LimitedAmbient noise compensation system robust to high excitation noise
US20080152167A1 (en)2006-12-222008-06-26Step Communications CorporationNear-field vector signal enhancement
CN101212828A (en)2006-12-272008-07-02鸿富锦精密工业(深圳)有限公司 Electronic equipment and sound modules used therein
US7941677B2 (en)2007-01-052011-05-10Avaya Inc.Apparatus and methods for managing power distribution over Ethernet
KR101365988B1 (en)2007-01-052014-02-21삼성전자주식회사Method and apparatus for processing set-up automatically in steer speaker system
CA2675999C (en)2007-01-222015-12-15Bell Helicopter Textron Inc.System and method for the interactive display of data in a motion capture environment
KR101297300B1 (en)2007-01-312013-08-16삼성전자주식회사Front Surround system and method for processing signal using speaker array
US20080188965A1 (en)2007-02-062008-08-07Rane CorporationRemote audio device network system and method
GB2446619A (en)2007-02-162008-08-20Audiogravity Holdings LtdReduction of wind noise in an omnidirectional microphone array
JP5139111B2 (en)2007-03-022013-02-06本田技研工業株式会社 Method and apparatus for extracting sound from moving sound source
USD578509S1 (en)2007-03-122008-10-14The Professional Monitor Company LimitedAudio speaker
US7651390B1 (en)2007-03-122010-01-26Profeta Jeffery LCeiling vent air diverter
EP1970894A1 (en)2007-03-122008-09-17France TélécomMethod and device for modifying an audio signal
US8654955B1 (en)2007-03-142014-02-18Clearone Communications, Inc.Portable conferencing device with videoconferencing option
US8005238B2 (en)2007-03-222011-08-23Microsoft CorporationRobust adaptive beamforming with enhanced noise suppression
US8098842B2 (en)2007-03-292012-01-17Microsoft Corp.Enhanced beamforming for arrays of directional microphones
USD587709S1 (en)2007-04-062009-03-03Sony CorporationMonitor display
JP5050616B2 (en)2007-04-062012-10-17ヤマハ株式会社 Sound emission and collection device
US8155304B2 (en)2007-04-102012-04-10Microsoft CorporationFilter bank optimization for acoustic echo cancellation
JP2008263336A (en)2007-04-112008-10-30Oki Electric Ind Co LtdEcho canceler and residual echo suppressing method thereof
EP2381580A1 (en)2007-04-132011-10-26Global IP Solutions (GIPS) ABAdaptive, scalable packet loss recovery
US20080259731A1 (en)2007-04-172008-10-23Happonen Aki PMethods and apparatuses for user controlled beamforming
ATE473603T1 (en)2007-04-172010-07-15Harman Becker Automotive Sys ACOUSTIC LOCALIZATION OF A SPEAKER
ITTV20070070A1 (en)2007-04-202008-10-21Swing S R L SOUND TRANSDUCER DEVICE.
US20080279400A1 (en)2007-05-102008-11-13Reuven KnollSystem and method for capturing voice interactions in walk-in environments
JP2008288785A (en)2007-05-162008-11-27Yamaha CorpVideo conference apparatus
ATE524015T1 (en)2007-05-222011-09-15Harman Becker Automotive Sys METHOD AND APPARATUS FOR PROCESSING AT LEAST TWO MICROPHONE SIGNALS FOR TRANSMITTING AN OUTPUT SIGNAL WITH REDUCED INTERFERENCE
US8229134B2 (en)2007-05-242012-07-24University Of MarylandAudio camera using microphone arrays for real time capture of audio images and method for jointly processing the audio images with video images
JP5338040B2 (en)2007-06-042013-11-13ヤマハ株式会社 Audio conferencing equipment
CN101779476B (en)2007-06-132015-02-25爱利富卡姆公司 Omnidirectional dual microphone array
CN101325631B (en)2007-06-142010-10-20华为技术有限公司Method and device for estimating pitch period
CN101833954B (en)2007-06-142012-07-11华为终端有限公司Method and device for realizing packet loss concealment
CN101325537B (en)2007-06-152012-04-04华为技术有限公司Method and apparatus for frame-losing hide
JP2008312002A (en)2007-06-152008-12-25Yamaha CorpTelevision conference apparatus
WO2008155708A1 (en)2007-06-212008-12-24Koninklijke Philips Electronics N.V.A device for and a method of processing audio signals
US20090003586A1 (en)2007-06-282009-01-01Fortemedia, Inc.Signal processor and method for canceling echo in a communication device
EP2168396B1 (en)2007-07-092019-01-16MH Acoustics, LLCAugmented elliptical microphone array
US8285554B2 (en)2007-07-272012-10-09Dsp Group LimitedMethod and system for dynamic aliasing suppression
USD589605S1 (en)2007-08-012009-03-31Trane International Inc.Air inlet grille
JP2009044600A (en)2007-08-102009-02-26Panasonic Corp Microphone device and manufacturing method thereof
US20090052715A1 (en)2007-08-232009-02-26Fortemedia, Inc.Electronic device with an internal microphone array
US20090052686A1 (en)2007-08-232009-02-26Fortemedia, Inc.Electronic device with an internal microphone array
CN101119323A (en)2007-09-212008-02-06腾讯科技(深圳)有限公司Method and device for solving network jitter
US8064629B2 (en)2007-09-272011-11-22Peigen JiangDecorative loudspeaker grille
US8175871B2 (en)2007-09-282012-05-08Qualcomm IncorporatedApparatus and method of noise and echo reduction in multiple microphone audio systems
US8095120B1 (en)2007-09-282012-01-10Avaya Inc.System and method of synchronizing multiple microphone and speaker-equipped devices to create a conferenced area network
KR101434200B1 (en)2007-10-012014-08-26삼성전자주식회사Method and apparatus for identifying sound source from mixed sound
KR101292206B1 (en)2007-10-012013-08-01삼성전자주식회사Array speaker system and the implementing method thereof
JP5012387B2 (en)2007-10-052012-08-29ヤマハ株式会社 Speech processing system
US7832080B2 (en)2007-10-112010-11-16Etymotic Research, Inc.Directional microphone assembly
US8428661B2 (en)2007-10-302013-04-23Broadcom CorporationSpeech intelligibility in telephones with multiple microphones
US8199927B1 (en)2007-10-312012-06-12ClearOnce Communications, Inc.Conferencing system implementing echo cancellation and push-to-talk microphone detection using two-stage frequency filter
ATE512553T1 (en)2007-11-122011-06-15Univ Graz Tech HOUSINGS FOR MICROPHONE ARRAYS AND MULTI-SENSOR ARRANGEMENTS FOR YOUR SIZE OPTIMIZATION
US8290142B1 (en)2007-11-122012-10-16Clearone Communications, Inc.Echo cancellation in a portable conferencing device with externally-produced audio
WO2009062213A1 (en)2007-11-132009-05-22Akg Acoustics GmbhMicrophone arrangement, having two pressure gradient transducers
KR101415026B1 (en)2007-11-192014-07-04삼성전자주식회사Method and apparatus for acquiring the multi-channel sound with a microphone array
ATE554481T1 (en)2007-11-212012-05-15Nuance Communications Inc TALKER LOCALIZATION
KR101449433B1 (en)2007-11-302014-10-13삼성전자주식회사Noise cancelling method and apparatus from the sound signal through the microphone
JP5097523B2 (en)2007-12-072012-12-12船井電機株式会社 Voice input device
US8744069B2 (en)2007-12-102014-06-03Microsoft CorporationRemoving near-end frequencies from far-end sound
US8433061B2 (en)2007-12-102013-04-30Microsoft CorporationReducing echo
US8219387B2 (en)2007-12-102012-07-10Microsoft CorporationIdentifying far-end sound
US8175291B2 (en)2007-12-192012-05-08Qualcomm IncorporatedSystems, methods, and apparatus for multi-microphone based speech enhancement
US20090173570A1 (en)2007-12-202009-07-09Levit Natalia VAcoustically absorbent ceiling tile having barrier facing with diffuse reflectance
USD604729S1 (en)2008-01-042009-11-24Apple Inc.Electronic device
US7765762B2 (en)2008-01-082010-08-03Usg Interiors, Inc.Ceiling panel
USD582391S1 (en)2008-01-172008-12-09Roland CorporationSpeaker
USD595402S1 (en)2008-02-042009-06-30Panasonic CorporationVentilating fan for a ceiling
WO2009105793A1 (en)2008-02-262009-09-03Akg Acoustics GmbhTransducer assembly
JP5003531B2 (en)2008-02-272012-08-15ヤマハ株式会社 Audio conference system
CN101960865A (en)2008-03-032011-01-26诺基亚公司 Apparatus for capturing and rendering multiple audio channels
US8503653B2 (en)2008-03-032013-08-06Alcatel LucentMethod and apparatus for active speaker selection using microphone arrays and speaker recognition
US8873543B2 (en)2008-03-072014-10-28Arcsoft (Shanghai) Technology Company, Ltd.Implementing a high quality VOIP device
US8626080B2 (en)2008-03-112014-01-07Intel CorporationBidirectional iterative beam forming
US8559611B2 (en)2008-04-072013-10-15Polycom, Inc.Audio signal routing
US8379823B2 (en)2008-04-072013-02-19Polycom, Inc.Distributed bridging
EP2279628B1 (en)2008-04-072013-10-30Dolby Laboratories Licensing CorporationSurround sound generation from a microphone array
US9142221B2 (en)2008-04-072015-09-22Cambridge Silicon Radio LimitedNoise reduction
WO2009129008A1 (en)2008-04-172009-10-22University Of Utah Research FoundationMulti-channel acoustic echo cancellation system and method
US8385557B2 (en)2008-06-192013-02-26Microsoft CorporationMultichannel acoustic echo reduction
US8276706B2 (en)2008-06-272012-10-02Rgb Systems, Inc.Method and apparatus for a loudspeaker assembly
US8631897B2 (en)2008-06-272014-01-21Rgb Systems, Inc.Ceiling loudspeaker system
US7861825B2 (en)2008-06-272011-01-04Rgb Systems, Inc.Method and apparatus for a loudspeaker assembly
US8672087B2 (en)2008-06-272014-03-18Rgb Systems, Inc.Ceiling loudspeaker support system
US8286749B2 (en)2008-06-272012-10-16Rgb Systems, Inc.Ceiling loudspeaker system
US8109360B2 (en)2008-06-272012-02-07Rgb Systems, Inc.Method and apparatus for a loudspeaker assembly
JP4991649B2 (en)2008-07-022012-08-01パナソニック株式会社 Audio signal processing device
KR100901464B1 (en)2008-07-032009-06-08(주)기가바이트씨앤씨 Sound collector and sound collector set
EP2146519B1 (en)2008-07-162012-06-06Nuance Communications, Inc.Beamforming pre-processing for speaker localization
US20100011644A1 (en)2008-07-172010-01-21Kramer Eric JMemorabilia display system
JP5075042B2 (en)2008-07-232012-11-14日本電信電話株式会社 Echo canceling apparatus, echo canceling method, program thereof, and recording medium
USD613338S1 (en)2008-07-312010-04-06Chris MarukosInterchangeable advertising sign
USD595736S1 (en)2008-08-152009-07-07Samsung Electronics Co., Ltd.DVD player
AU2009287421B2 (en)2008-08-292015-09-17Biamp Systems, LLCA microphone array system and method for sound acquisition
US8605890B2 (en)2008-09-222013-12-10Microsoft CorporationMultichannel acoustic echo cancellation
EP2350683B1 (en)2008-10-062017-01-04Raytheon BBN Technologies Corp.Wearable shooter localization system
WO2010043998A1 (en)2008-10-162010-04-22Nxp B.V.Microphone system and method of operating the same
US8724829B2 (en)2008-10-242014-05-13Qualcomm IncorporatedSystems, methods, apparatus, and computer-readable media for coherence detection
US8041054B2 (en)2008-10-312011-10-18Continental Automotive Systems, Inc.Systems and methods for selectively switching between multiple microphones
JP5386936B2 (en)2008-11-052014-01-15ヤマハ株式会社 Sound emission and collection device
US20100123785A1 (en)2008-11-172010-05-20Apple Inc.Graphic Control for Directional Audio Input
US8150063B2 (en)2008-11-252012-04-03Apple Inc.Stabilizing directional audio input from a moving microphone array
KR20100060457A (en)2008-11-272010-06-07삼성전자주식회사Apparatus and method for controlling operation mode of mobile terminal
US8744101B1 (en)2008-12-052014-06-03Starkey Laboratories, Inc.System for controlling the primary lobe of a hearing instrument's directional sensitivity pattern
EP2197219B1 (en)2008-12-122012-10-24Nuance Communications, Inc.Method for determining a time delay for time delay compensation
US8842851B2 (en)2008-12-122014-09-23Broadcom CorporationAudio source localization system and method
US8259959B2 (en)2008-12-232012-09-04Cisco Technology, Inc.Toroid microphone apparatus
NO332961B1 (en)2008-12-232013-02-11Cisco Systems Int Sarl Elevated toroid microphone
JP5446275B2 (en)2009-01-082014-03-19ヤマハ株式会社 Loudspeaker system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9973848B2 (en)* | 2011-06-21 | 2018-05-15 | Amazon Technologies, Inc. | Signal-enhancing beamforming in an augmented reality environment
US20160323668A1 (en)* | 2015-04-30 | 2016-11-03 | Shure Acquisition Holdings, Inc. | Array microphone system and method of assembling the same
US9565493B2 (en) | 2015-04-30 | 2017-02-07 | Shure Acquisition Holdings, Inc. | Array microphone system and method of assembling the same
US10210882B1 (en)* | 2018-06-25 | 2019-02-19 | Biamp Systems, LLC | Microphone array with automated adaptive beam tracking

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2023059655A1 (en)* | 2021-10-04 | 2023-04-13 | Shure Acquisition Holdings, Inc. | Networked automixer systems and methods

Also Published As

Publication number | Publication date
US20210051397A1 (en) | 2021-02-18
US12284479B2 (en) | 2025-04-22
US20240244367A1 (en) | 2024-07-18
JP2022526761A (en) | 2022-05-26
CN113841421A (en) | 2021-12-24
CN113841421B (en) | 2025-02-11
TWI865506B (en) | 2024-12-11
CN118803494A (en) | 2024-10-18
US20230262378A1 (en) | 2023-08-17
TW202044236A (en) | 2020-12-01
US11778368B2 (en) | 2023-10-03
CN118803494B (en) | 2025-09-19
EP3942845A1 (en) | 2022-01-26
US11438691B2 (en) | 2022-09-06
JP7572964B2 (en) | 2024-10-24

Similar Documents

Publication | Title
US11778368B2 (en) | Auto focus, auto focus within regions, and auto placement of beamformed microphone lobes with inhibition functionality
US12425766B2 (en) | Auto focus, auto focus within regions, and auto placement of beamformed microphone lobes with inhibition and voice activity detection functionality
US20220386022A1 (en) | Adjustable lobe shape for array microphones
EP3217653B1 (en) | An apparatus
US9666175B2 (en) | Noise cancelation system and techniques
US20160165338A1 (en) | Directional audio recording system
TWI441525B (en) | Indoor receiving voice system and indoor receiving voice method
US11889261B2 (en) | Adaptive beamformer for enhanced far-field sound pickup
US12395794B2 (en) | Conferencing systems and methods for room intelligence
JP2019062435A (en) | Equipment control device, equipment control program, equipment control method, dialog device, and communication system
US11785380B2 (en) | Hybrid audio beamforming system
US20240249742A1 (en) | Partially adaptive audio beamforming systems and methods
US20240007592A1 (en) | Conferencing systems and methods for talker tracking and camera positioning
US12289528B2 (en) | System and method for camera motion stabilization using audio localization
US20250030947A1 (en) | Systems and methods for talker tracking and camera positioning in the presence of acoustic reflections
US20250247646A1 (en) | Automatic lobe gain adjustment for array microphones
US20250234092A1 (en) | Flexible room environment configuration systems and methods
WO2023133513A1 (en) | Audio beamforming with nulling control system and methods

Legal Events

Date | Code | Title | Description

121 | Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20719861
    Country of ref document: EP
    Kind code of ref document: A1

ENP | Entry into the national phase
    Ref document number: 2021556732
    Country of ref document: JP
    Kind code of ref document: A

NENP | Non-entry into the national phase
    Ref country code: DE

WWE | Wipo information: entry into national phase
    Ref document number: 2020719861
    Country of ref document: EP

ENP | Entry into the national phase
    Ref document number: 2020719861
    Country of ref document: EP
    Effective date: 20211021

WWG | Wipo information: grant in national office
    Ref document number: 202080036963.0
    Country of ref document: CN

