CROSS REFERENCE TO RELATED APPLICATIONS
This application is a reissue application of U.S. patent application Ser. No. 13/049,877, filed Mar. 16, 2011 (now U.S. Pat. No. 8,861,756), which claims the benefit of provisional patent application No. 61/403,952 titled “Microphone array design and implementation for telecommunications and handheld devices”, filed on Sep. 24, 2010 in the United States Patent and Trademark Office.
The specification of the above referenced patent application is incorporated herein by reference in its entirety.
BACKGROUND
Microphones constitute an important element in today's speech acquisition devices. Currently, most hands-free speech acquisition devices, for example, mobile devices, lapel microphones, headsets, etc., convert sound into electrical signals by using a microphone embedded within the speech acquisition device. However, the paradigm of a single microphone often does not work effectively because the microphone picks up many ambient noise signals in addition to the desired sound, specifically when the distance between a user and the microphone is more than a few inches. Therefore, there is a need for a microphone system that operates under a variety of different ambient noise conditions and that places fewer constraints on the user with respect to the microphone, thereby eliminating the need to wear the microphone or be in close proximity to the microphone.
To mitigate the drawbacks of the single microphone system, there is a need for a microphone array that achieves directional gain in a preferred spatial direction while suppressing ambient noise from other directions. Conventional microphone arrays include arrays that are typically developed for applications such as radar and sonar, but are generally not suitable for hands-free or handheld speech acquisition devices. The main reason is that the desired sound signal has an extremely wide bandwidth relative to its center frequency, thereby rendering conventional narrowband techniques employed in the conventional microphone arrays unsuitable. In order to cater to such broadband speech applications, the array size needs to be vastly increased, making the conventional microphone arrays large and bulky, and precluding the conventional microphone arrays from having broader applications, for example, in mobile and handheld communication devices. There is a need for a microphone array system that provides an effective response over a wide spectrum of frequencies while being unobtrusive in terms of size.
Hence, there is a long felt but unresolved need for a broadband microphone array and broadband beamforming system that enhances acoustics of a desired sound signal while suppressing ambient noise signals.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts in a simplified form that are further described in the detailed description of the invention. This summary is not intended to identify key or essential inventive concepts of the claimed subject matter, nor is it intended for determining the scope of the claimed subject matter.
The method and system disclosed herein address the above stated need for enhancing acoustics of a target sound signal received from a target sound source, while suppressing ambient noise signals. As used herein, the term “target sound signal” refers to a sound signal from a desired or target sound source, for example, a person's speech that needs to be enhanced. A microphone array system comprising an array of sound sensors positioned in an arbitrary configuration, a sound source localization unit, an adaptive beamforming unit, and a noise reduction unit, is provided. The sound source localization unit, the adaptive beamforming unit, and the noise reduction unit are in operative communication with the array of sound sensors. The array of sound sensors is, for example, a linear array of sound sensors, a circular array of sound sensors, or an arbitrarily distributed coplanar array of sound sensors. The array of sound sensors herein referred to as a “microphone array” receives sound signals from multiple disparate sound sources. The method disclosed herein can be applied on a microphone array with an arbitrary number of sound sensors having, for example, an arbitrary two dimensional (2D) configuration. The sound signals received by the sound sensors in the microphone array comprise the target sound signal from the target sound source among the disparate sound sources, and ambient noise signals.
The sound source localization unit estimates a spatial location of the target sound signal from the received sound signals, for example, using a steered response power-phase transform. The adaptive beamforming unit performs adaptive beamforming for steering a directivity pattern of the microphone array in a direction of the spatial location of the target sound signal. The adaptive beamforming unit thereby enhances the target sound signal from the target sound source and partially suppresses the ambient noise signals. The noise reduction unit suppresses the ambient noise signals for further enhancing the target sound signal received from the target sound source.
In an embodiment where the target sound source that emits the target sound signal is in a two dimensional plane, a delay between each of the sound sensors and an origin of the microphone array is determined as a function of distance between each of the sound sensors and the origin, a predefined angle between each of the sound sensors and a reference axis, and an azimuth angle between the reference axis and the target sound signal. In another embodiment where the target sound source that emits the target sound signal is in a three dimensional plane, the delay between each of the sound sensors and the origin of the microphone array is determined as a function of distance between each of the sound sensors and the origin, a predefined angle between each of the sound sensors and a first reference axis, an elevation angle between a second reference axis and the target sound signal, and an azimuth angle between the first reference axis and the target sound signal. This method of determining the delay enables beamforming for arbitrary numbers of sound sensors and multiple arbitrary microphone array configurations. The delay is determined, for example, in terms of number of samples. Once the delay is determined, the microphone array can be aligned to enhance the target sound signal from a specific direction.
The adaptive beamforming unit comprises a fixed beamformer, a blocking matrix, and an adaptive filter. The fixed beamformer steers the directivity pattern of the microphone array in the direction of the spatial location of the target sound signal from the target sound source for enhancing the target sound signal, when the target sound source is in motion. The blocking matrix feeds the ambient noise signals to the adaptive filter by blocking the target sound signal from the target sound source. The adaptive filter adaptively filters the ambient noise signals in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources. The fixed beamformer performs fixed beamforming, for example, by filtering and summing output sound signals from the sound sensors.
In an embodiment, the adaptive filtering comprises sub-band adaptive filtering. The adaptive filter comprises an analysis filter bank, an adaptive filter matrix, and a synthesis filter bank. The analysis filter bank splits the enhanced target sound signal from the fixed beamformer and the ambient noise signals from the blocking matrix into multiple frequency sub-bands. The adaptive filter matrix adaptively filters the ambient noise signals in each of the frequency sub-bands in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources. The synthesis filter bank synthesizes a full-band sound signal using the frequency sub-bands of the enhanced target sound signal. In an embodiment, the adaptive beamforming unit further comprises an adaptation control unit for detecting the presence of the target sound signal and adjusting a step size for the adaptive filtering in response to detecting the presence or the absence of the target sound signal in the sound signals received from the disparate sound sources.
The noise reduction unit suppresses the ambient noise signals for further enhancing the target sound signal from the target sound source. The noise reduction unit performs noise reduction, for example, by using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm. The noise reduction unit performs noise reduction in multiple frequency sub-bands employed for sub-band adaptive beamforming by the analysis filter bank of the adaptive beamforming unit.
The microphone array system disclosed herein comprising the microphone array with an arbitrary number of sound sensors positioned in arbitrary configurations can be implemented in handheld devices, for example, the iPad® of Apple Inc., the iPhone® of Apple Inc., smart phones, tablet computers, laptop computers, etc. The microphone array system disclosed herein can further be implemented in conference phones, video conferencing applications, or any device or equipment that needs better speech inputs.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing summary, as well as the following detailed description of the invention, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, exemplary constructions of the invention are shown in the drawings. However, the invention is not limited to the specific methods and instrumentalities disclosed herein.
FIG. 1 illustrates a method for enhancing a target sound signal from multiple sound signals.
FIG. 2 illustrates a system for enhancing a target sound signal from multiple sound signals.
FIG. 3 exemplarily illustrates a microphone array configuration showing a microphone array having N sound sensors arbitrarily distributed on a circle.
FIG. 4 exemplarily illustrates a graphical representation of a filter-and-sum beamforming algorithm for determining output of the microphone array having N sound sensors.
FIG. 5 exemplarily illustrates distances between an origin of the microphone array and sound sensor M1 and sound sensor M3 in the circular microphone array configuration, when the target sound signal is at an angle θ from the Y-axis.
FIG. 6A exemplarily illustrates a table showing the distance between each sound sensor in a circular microphone array configuration from the origin of the microphone array, when the target sound source is in the same plane as that of the microphone array.
FIG. 6B exemplarily illustrates a table showing the relationship of the position of each sound sensor in the circular microphone array configuration and its distance to the origin of the microphone array, when the target sound source is in the same plane as that of the microphone array.
FIG. 7A exemplarily illustrates a graphical representation of a microphone array, when the target sound source is in a three dimensional plane.
FIG. 7B exemplarily illustrates a table showing delay between each sound sensor in a circular microphone array configuration and the origin of the microphone array, when the target sound source is in a three dimensional plane.
FIG. 7C exemplarily illustrates a three dimensional working space of the microphone array, where the target sound signal is incident at an elevation angle Ψ<Ω.
FIG. 8 exemplarily illustrates a method for estimating a spatial location of the target sound signal from the target sound source by a sound source localization unit using a steered response power-phase transform.
FIG. 9A exemplarily illustrates a graph showing the value of the steered response power-phase transform for every 10°.
FIG. 9B exemplarily illustrates a graph representing the estimated target sound signal from the target sound source.
FIG. 10 exemplarily illustrates a system for performing adaptive beamforming by an adaptive beamforming unit.
FIG. 11 exemplarily illustrates a system for sub-band adaptive filtering.
FIG. 12 exemplarily illustrates a graphical representation showing the performance of a perfect reconstruction filter bank.
FIG. 13 exemplarily illustrates a block diagram of a noise reduction unit that performs noise reduction using a Wiener-filter based noise reduction algorithm.
FIG. 14 exemplarily illustrates a hardware implementation of the microphone array system.
FIGS. 15A-15C exemplarily illustrate a conference phone comprising an eight-sensor microphone array.
FIG. 16A exemplarily illustrates a layout of an eight-sensor microphone array for a conference phone.
FIG. 16B exemplarily illustrates a graphical representation of eight spatial regions to which the eight-sensor microphone array of FIG. 16A responds.
FIGS. 16C-16D exemplarily illustrate computer simulations showing the steering of the directivity patterns of the eight-sensor microphone array of FIG. 16A in the directions of 15° and 60° respectively, in the frequency range 300 Hz to 5 kHz.
FIGS. 16E-16L exemplarily illustrate graphical representations showing the directivity patterns of the eight-sensor microphone array of FIG. 16A in each of the eight spatial regions, where each directivity pattern is an average response from 300 Hz to 5000 Hz.
FIG. 17A exemplarily illustrates a graphical representation of four spatial regions to which a four-sensor microphone array for a wireless handheld device responds.
FIGS. 17B-17I exemplarily illustrate computer simulations showing the directivity patterns of the four-sensor microphone array of FIG. 17A with respect to azimuth and frequency.
FIGS. 18A-18B exemplarily illustrate a microphone array configuration for a tablet computer.
FIG. 18C exemplarily illustrates an acoustic beam formed using the microphone array configuration of FIGS. 18A-18B according to the method and system disclosed herein.
FIGS. 18D-18G exemplarily illustrate graphs showing processing results of the adaptive beamforming unit and the noise reduction unit for the microphone array configuration of FIG. 18B, in both a time domain and a spectral domain for the tablet computer.
FIGS. 19A-19F exemplarily illustrate tables showing different microphone array configurations and the corresponding values of delay τn for the sound sensors in each of the microphone array configurations.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 illustrates a method for enhancing a target sound signal from multiple sound signals. As used herein, the term “target sound signal” refers to a desired sound signal from a desired or target sound source, for example, a person's speech that needs to be enhanced. The method disclosed herein provides 101 a microphone array system comprising an array of sound sensors positioned in an arbitrary configuration, a sound source localization unit, an adaptive beamforming unit, and a noise reduction unit. The sound source localization unit, the adaptive beamforming unit, and the noise reduction unit are in operative communication with the array of sound sensors. The microphone array system disclosed herein employs the array of sound sensors positioned in an arbitrary configuration, the sound source localization unit, the adaptive beamforming unit, and the noise reduction unit for enhancing a target sound signal by acoustic beamforming in the direction of the target sound signal in the presence of ambient noise signals.
The array of sound sensors herein referred to as a “microphone array” comprises multiple or an arbitrary number of sound sensors, for example, microphones, operating in tandem. The microphone array refers to an array of an arbitrary number of sound sensors positioned in an arbitrary configuration. The sound sensors are transducers that detect sound and convert the sound into electrical signals. The sound sensors are, for example, condenser microphones, piezoelectric microphones, etc.
The sound sensors receive 102 sound signals from multiple disparate sound sources and directions. The target sound source that emits the target sound signal is one of the disparate sound sources. As used herein, the term “sound signals” refers to composite sound energy from multiple disparate sound sources in an environment of the microphone array. The sound signals comprise the target sound signal from the target sound source and the ambient noise signals. The sound sensors are positioned in an arbitrary planar configuration herein referred to as a “microphone array configuration”, for example, a linear configuration, a circular configuration, any arbitrarily distributed coplanar array configuration, etc. By employing beamforming according to the method disclosed herein, the microphone array provides a higher response to the target sound signal received from a particular direction than to the sound signals from other directions. A plot of the response of the microphone array versus frequency and direction of arrival of the sound signals is referred to as a directivity pattern of the microphone array.
The sound source localization unit estimates 103 a spatial location of the target sound signal from the received sound signals. In an embodiment, the sound source localization unit estimates the spatial location of the target sound signal from the target sound source, for example, using a steered response power-phase transform as disclosed in the detailed description of FIG. 8.
The adaptive beamforming unit performs adaptive beamforming 104 by steering the directivity pattern of the microphone array in a direction of the spatial location of the target sound signal, thereby enhancing the target sound signal, and partially suppressing the ambient noise signals. Beamforming refers to a signal processing technique used in the microphone array for directional signal reception, that is, spatial filtering. This spatial filtering is achieved by using adaptive or fixed methods. Spatial filtering refers to separating two signals with overlapping frequency content that originate from different spatial locations.
The noise reduction unit performs noise reduction by further suppressing 105 the ambient noise signals and thereby further enhancing the target sound signal. The noise reduction unit performs the noise reduction, for example, by using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm.
FIG. 2 illustrates a system 200 for enhancing a target sound signal from multiple sound signals. The system 200, herein referred to as a “microphone array system”, comprises the array 201 of sound sensors positioned in an arbitrary configuration, the sound source localization unit 202, the adaptive beamforming unit 203, and the noise reduction unit 207.
The array 201 of sound sensors, herein referred to as the “microphone array”, is in operative communication with the sound source localization unit 202, the adaptive beamforming unit 203, and the noise reduction unit 207. The microphone array 201 is, for example, a linear array of sound sensors, a circular array of sound sensors, or an arbitrarily distributed coplanar array of sound sensors. The microphone array 201 achieves directional gain in any preferred spatial direction and frequency band while suppressing signals from other spatial directions and frequency bands. The sound sensors receive the sound signals comprising the target sound signal and ambient noise signals from multiple disparate sound sources, where one of the disparate sound sources is the target sound source that emits the target sound signal.
The sound source localization unit 202 estimates the spatial location of the target sound signal from the received sound signals. In an embodiment, the sound source localization unit 202 uses, for example, a steered response power-phase transform, for estimating the spatial location of the target sound signal from the target sound source.
The adaptive beamforming unit 203 steers the directivity pattern of the microphone array 201 in a direction of the spatial location of the target sound signal, thereby enhancing the target sound signal and partially suppressing the ambient noise signals. The adaptive beamforming unit 203 comprises a fixed beamformer 204, a blocking matrix 205, and an adaptive filter 206 as disclosed in the detailed description of FIG. 10. The fixed beamformer 204 performs fixed beamforming by filtering and summing output sound signals from each of the sound sensors in the microphone array 201 as disclosed in the detailed description of FIG. 4. In an embodiment, the adaptive filter 206 is implemented as a set of sub-band adaptive filters. The adaptive filter 206 comprises an analysis filter bank 206a, an adaptive filter matrix 206b, and a synthesis filter bank 206c as disclosed in the detailed description of FIG. 11.
The noise reduction unit 207 further suppresses the ambient noise signals for further enhancing the target sound signal. The noise reduction unit 207 is, for example, a Wiener-filter based noise reduction unit, a spectral subtraction noise reduction unit, an auditory transform based noise reduction unit, or a model based noise reduction unit.
FIG. 3 exemplarily illustrates a microphone array configuration showing a microphone array 201 having N sound sensors 301 arbitrarily distributed on a circle 302 with a diameter “d”, where “N” refers to the number of sound sensors 301 in the microphone array 201. Consider an example where N=4, that is, there are four sound sensors 301 M0, M1, M2, and M3 in the microphone array 201. Each of the sound sensors 301 is positioned at an acute angle “Φn” from a Y-axis, where Φn≥0 and n=0, 1, 2, . . . , N−1. In an example, the sound sensor 301 M0 is positioned at an acute angle Φ0 from the Y-axis; the sound sensor 301 M1 is positioned at an acute angle Φ1 from the Y-axis; the sound sensor 301 M2 is positioned at an acute angle Φ2 from the Y-axis; and the sound sensor 301 M3 is positioned at an acute angle Φ3 from the Y-axis. A filter-and-sum beamforming algorithm determines the output “y” of the microphone array 201 having N sound sensors 301 as disclosed in the detailed description of FIG. 4.
FIG. 4 exemplarily illustrates a graphical representation of the filter-and-sum beamforming algorithm for determining the output of the microphone array 201 having N sound sensors 301. Consider an example where the target sound signal from the target sound source is at an angle θ with a normalized frequency ω. The microphone array configuration is arbitrary in a two dimensional plane, for example, a circular array configuration where the sound sensors 301 M0, M1, M2, . . . , MN−1 of the microphone array 201 are arbitrarily positioned on a circle 302. The sound signals received by each of the sound sensors 301 in the microphone array 201 are inputs to the microphone array 201. The adaptive beamforming unit 203 employs the filter-and-sum beamforming algorithm that applies independent weights to each of the inputs to the microphone array 201 such that the directivity pattern of the microphone array 201 is steered to the spatial location of the target sound signal as determined by the sound source localization unit 202.
The output “y” of the microphone array 201 having N sound sensors 301 is the filter-and-sum of the outputs of the N sound sensors 301. That is, y = Σ_{n=0}^{N−1} w_n^T x_n, where x_n is the output of the (n+1)th sound sensor 301, and w_n^T denotes the transpose of a length-L filter applied to the (n+1)th sound sensor 301.
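For illustration only, the filter-and-sum operation can be sketched in a few lines of Python. This is a minimal sketch assuming NumPy; the names are hypothetical, and randomly generated signals and weights stand in for designed beamformer coefficients:

```python
import numpy as np

def filter_and_sum(x, w):
    """Filter-and-sum beamformer: y = sum_n w_n^T x_n.

    x: (N, T) array of sensor signals (N sensors, T samples).
    w: (N, L) array of length-L FIR filters, one filter per sensor.
    Returns the length-T beamformed output y.
    """
    num_sensors, num_samples = x.shape
    y = np.zeros(num_samples)
    for n in range(num_sensors):
        # Convolve sensor n with its filter and truncate so that all
        # filtered channels align before summing.
        y += np.convolve(x[n], w[n])[:num_samples]
    return y

# Toy usage: 4 sensors, 20-tap filters, one second of audio at 8 kHz.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8000))
w = rng.standard_normal((4, 20)) / 20.0
y = filter_and_sum(x, w)
```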
The spatial directivity pattern H(ω, θ) for the target sound signal from angle θ with normalized frequency ω is defined as:

H(ω, θ) = Σ_{n=0}^{N−1} W_n(ω) X_n(ω, θ) / X(ω, θ)   (1)

where X is the signal received at the origin of the circular microphone array 201 and W_n is the frequency response of the real-valued finite impulse response (FIR) filter w_n. If the target sound source is far enough away from the microphone array 201, the difference between the signal received by the (n+1)th sound sensor 301 “xn” and the origin of the microphone array 201 is a delay τn; that is, X_n(ω, θ) = X(ω, θ)e^{−jωτ_n}.
FIG. 5 exemplarily illustrates distances between an origin of the microphone array 201 and the sound sensor 301 M1 and the sound sensor 301 M3 in the circular microphone array configuration, when the target sound signal is at an angle θ from the Y-axis. The microphone array system 200 disclosed herein can be used with an arbitrary directivity pattern for arbitrarily distributed sound sensors 301. For any specific microphone array configuration, the parameter that is defined to achieve the beamformer coefficients is the value of the delay τn for each sound sensor 301. To define the value of τn, an origin or a reference point of the microphone array 201 is first defined; the distance dn between each sound sensor 301 and the origin is then measured, and the angle Φn of each sound sensor 301 biased from a vertical axis is measured.
For example, the angle between the Y-axis and the line joining the origin and the sound sensor 301 M0 is Φ0, the angle between the Y-axis and the line joining the origin and the sound sensor 301 M1 is Φ1, the angle between the Y-axis and the line joining the origin and the sound sensor 301 M2 is Φ2, and the angle between the Y-axis and the line joining the origin and the sound sensor 301 M3 is Φ3. The distance between the origin ◯ and the sound sensor 301 M1, and between the origin ◯ and the sound sensor 301 M3, when the incoming target sound signal from the target sound source is at an angle θ from the Y-axis, is denoted as τ1 and τ3, respectively.
For purposes of illustration, the detailed description refers to a circular microphone array configuration; however, the scope of the microphone array system 200 disclosed herein is not limited to the circular microphone array configuration but may be extended to include a linear array configuration, an arbitrarily distributed coplanar array configuration, or a microphone array configuration with any arbitrary geometry.
FIG. 6A exemplarily illustrates a table showing the distance of each sound sensor 301 in a circular microphone array configuration from the origin of the microphone array 201, when the target sound source is in the same plane as that of the microphone array 201. The distance measured in meters and the corresponding delay (τ) measured in number of samples are exemplarily illustrated in FIG. 6A. In an embodiment where the target sound source that emits the target sound signal is in a two dimensional plane, the delay (τ) between each of the sound sensors 301 and the origin of the microphone array 201 is determined as a function of the distance (d) between each of the sound sensors 301 and the origin, a predefined angle (Φ) between each of the sound sensors 301 and a reference axis (Y) as exemplarily illustrated in FIG. 5, and an azimuth angle (θ) between the reference axis (Y) and the target sound signal. The determined delay (τ) is represented in terms of number of samples.
If the target sound source is far enough from the microphone array 201, the time delay between the signal received by the (n+1)th sound sensor 301 “xn” and the origin of the microphone array 201 is herein denoted as “t”, measured in seconds. The sound signals received by the microphone array 201, which are in analog form, are converted into digital sound signals by sampling the analog sound signals at a particular frequency, for example, 8000 Hz. That is, the number of samples in each second is 8000. The delay τ can be represented as the product of the sampling frequency (fs) and the time delay (t). That is, τ=fs*t. Therefore, the distance between the sound sensors 301 in the microphone array 201 corresponds to the time used for the target sound signal to travel the distance and is measured by the number of samples within that time period.
Consider an example where “d” is the radius of the circle 302 of the circular microphone array configuration, “fs” is the sampling frequency, and “c” is the speed of sound. FIG. 6B exemplarily illustrates a table showing the relationship of the position of each sound sensor 301 in the circular microphone array configuration and its distance to the origin of the microphone array 201, when the target sound source is in the same plane as that of the microphone array 201. The distance measured in meters and the corresponding delay (τ) measured in number of samples are exemplarily illustrated in FIG. 6B.
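For illustration only, the delay computation described above can be sketched as follows, assuming the standard far-field plane-wave model; the function names are hypothetical, and the exact per-sensor expressions are those tabulated in FIGS. 6A-6B:

```python
import numpy as np

def delay_samples_2d(d_n, phi_n, theta, fs=8000.0, c=343.0):
    """Delay tau (in samples) between a sensor and the array origin.

    d_n:   distance from the sensor to the origin, in meters
    phi_n: sensor angle from the Y-axis, in radians
    theta: azimuth of the target sound signal from the Y-axis, in radians
    The propagation-time difference t is the projection of the sensor
    position onto the arrival direction divided by c; tau = fs * t.
    """
    t = d_n * np.cos(theta - phi_n) / c
    return fs * t

# Four sensors on a circle, target sound signal at theta = 30 degrees.
for phi_deg in (0, 90, 180, 270):
    tau = delay_samples_2d(0.05, np.deg2rad(phi_deg), np.deg2rad(30))
    print(phi_deg, round(float(tau), 2))
```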
The method of determining the delay (τ) enables beamforming for arbitrary numbers of sound sensors 301 and multiple arbitrary microphone array configurations. Once the delay (τ) is determined, the microphone array 201 can be aligned to enhance the target sound signal from a specific direction.
Therefore, the spatial directivity pattern H can be re-written as:

H(ω, θ) = Σ_{n=0}^{N−1} W_n(ω) e^{−jωτ_n(θ)} = w^T g(ω, θ)   (2)

where w^T = [w_0^T, w_1^T, w_2^T, w_3^T, . . . , w_{N−1}^T] and g(ω, θ) = {g_i(ω, θ)}_{i=1 . . . NL} = {e^{−jω(k+τ_n(θ))}}_{i=1 . . . NL} is the steering vector, with i = 1 . . . NL, k = mod(i−1, L), and n = floor((i−1)/L).
FIGS. 7A-7C exemplarily illustrate an embodiment of a microphone array 201 when the target sound source is in a three dimensional plane. In an embodiment where the target sound source that emits the target sound signal is in a three dimensional plane, the delay (τ) between each of the sound sensors 301 and the origin of the microphone array 201 is determined as a function of the distance (d) between each of the sound sensors 301 and the origin, a predefined angle (Φ) between each of the sound sensors 301 and a first reference axis (Y), an elevation angle (Ψ) between a second reference axis (Z) and the target sound signal, and an azimuth angle (θ) between the first reference axis (Y) and the target sound signal. The determined delay (τ) is represented in terms of number of samples. The determination of the delay enables beamforming for arbitrary numbers of the sound sensors 301 and multiple arbitrary configurations of the microphone array 201.
Consider an example of a microphone array configuration with four sound sensors 301 M0, M1, M2, and M3. FIG. 7A exemplarily illustrates a graphical representation of a microphone array 201, when the target sound source is in a three dimensional plane. As exemplarily illustrated in FIG. 7A, the target sound signal from the target sound source is received from the direction (Ψ, θ) with reference to the origin of the microphone array 201, where Ψ is the elevation angle and θ is the azimuth.
FIG. 7B exemplarily illustrates a table showing the delay between each sound sensor 301 in a circular microphone array configuration and the origin of the microphone array 201, when the target sound source is in a three dimensional plane. The target sound source in a three dimensional plane emits a target sound signal from a spatial location (Ψ, θ). The distances between the origin ◯ and the sound sensors 301 M0, M1, M2, and M3, when the incoming target sound signal from the target sound source is at an angle (Ψ, θ) from the Z-axis and the Y-axis respectively, are denoted as τ0, τ1, τ2, and τ3 respectively. When the spatial location of the target sound signal moves from the location Ψ=90° to a location Ψ=0°, sin(Ψ) changes from 1 to 0, and as a result, the delay difference between the sound sensors 301 in the microphone array 201 becomes smaller and smaller. When Ψ=0°, there is no delay difference between the sound sensors 301, which implies that the target sound signal reaches each sound sensor 301 at the same time. Taking into account that the sample delay between the sound sensors 301 can only be an integer, the range of elevation angles within which the sample delays of all the sound sensors 301 are identical is determined.
FIG. 7C exemplarily illustrates a three dimensional working space of the microphone array 201, where the target sound signal is incident at an elevation angle Ψ<Ω, where Ω is a specific angle and is a variable representing the elevation angle. When the target sound signal is incident at an elevation angle Ψ<Ω, all four sound sensors 301 M0, M1, M2, and M3 receive the same target sound signal for 0°<θ<360°. The delay τ is a function of both the elevation angle Ψ and the azimuth angle θ. That is, τ=τ(θ, Ψ). As used herein, Ω refers to the elevation angle such that all τi(θ, Ω) are equal to each other, where i=0, 1, 2, 3, etc. The value of Ω is determined by the sample delay between each of the sound sensors 301 and the origin of the microphone array 201. The adaptive beamforming unit 203 enhances sound from this range and suppresses sound signals from other directions, for example, S1 and S2, treating them as ambient noise signals.
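For illustration only, the three dimensional delay and a numerical search for the working-space angle Ω can be sketched as follows, under the same far-field assumption; the names and the toy geometry are hypothetical:

```python
import numpy as np

def delay_samples_3d(d_n, phi_n, theta, psi, fs=8000.0, c=343.0):
    """3D delay in samples; psi is the elevation angle from the Z-axis.

    At psi = 90 degrees this reduces to the planar case; as psi
    approaches 0 degrees the sin(psi) factor drives all inter-sensor
    delays to zero, matching the behavior described for FIG. 7B.
    """
    return fs * d_n * np.sin(psi) * np.cos(theta - phi_n) / c

def find_omega(d_n, phis, fs=8000.0, c=343.0):
    """Largest elevation (degrees) at which every rounded sample delay
    is identical for all sensors and all azimuths 0 < theta < 360."""
    for psi_deg in range(90, -1, -1):
        psi = np.deg2rad(psi_deg)
        taus = {int(round(float(delay_samples_3d(d_n, p, np.deg2rad(th),
                                                 psi, fs, c))))
                for p in phis for th in range(0, 360, 5)}
        if len(taus) == 1:
            return psi_deg
    return 0

phis = np.deg2rad([0, 90, 180, 270])
print(find_omega(0.05, phis))  # an estimate of Omega for this toy geometry
```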
Consider a least mean square solution for beamforming according to the method disclosed herein. Let the spatial directivity pattern be 1 in the passband and 0 in the stopband. The least square cost function is defined as:

J = ∫_{Ω_P}∫_{Θ_P} |H(ω, θ) − 1|^2 dω dθ + α ∫_{Ω_S}∫_{Θ_S} |H(ω, θ)|^2 dω dθ   (3)

Replacing |H(ω, θ)|^2 = w^T g(ω, θ) g^H(ω, θ) w = w^T (G_R(ω, θ) + jG_I(ω, θ)) w = w^T G_R(ω, θ) w and Re(H(ω, θ)) = w^T g_R(ω, θ), J becomes:

J = w^T Q w − 2 w^T a + d   (4)

where

Q = ∫_{Ω_P}∫_{Θ_P} G_R(ω, θ) dω dθ + α ∫_{Ω_S}∫_{Θ_S} G_R(ω, θ) dω dθ,

a = ∫_{Ω_P}∫_{Θ_P} g_R(ω, θ) dω dθ,

d = ∫_{Ω_P}∫_{Θ_P} 1 dω dθ,

and g_R(ω, θ) = cos[ω(k+τ_n)] and G_R(ω, θ) = cos[ω(k−l+τ_n−τ_m)].
When ∂J/∂w=0, the cost function J is minimized. The least-square estimate of w is obtained by:
w = Q^{−1} a   (5)
Applying linear constraints Cw=b, the spatial response is further constrained to a predefined value b at angle θ_f using the following equation:

Cw = b   (6)

Now, the design problem becomes:

min_w (w^T Q w − 2 w^T a + d) subject to Cw = b   (7)

and the solution of the constrained minimization problem is equal to:

w = Q^{−1}C^T(CQ^{−1}C^T)^{−1}(b − CQ^{−1}a) + Q^{−1}a   (8)
where w is the filter parameter for the designed adaptive beamforming unit 203.
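For illustration only, equations (5) and (8) can be realized numerically by approximating the passband and stopband integrals with sums over a grid of (ω, θ) points. The following sketch assumes NumPy and hypothetical function names, and is not the production implementation of the adaptive beamforming unit 203:

```python
import numpy as np

def steering(omega, taus, L):
    """Steering vector g(omega, theta) = {exp(-j*omega*(k + tau_n))},
    stacked over sensors n and filter taps k (length N*L)."""
    return np.array([np.exp(-1j * omega * (k + tau))
                     for tau in taus for k in range(L)])

def design_beamformer(tau_of_theta, N, L, passband, stopband,
                      alpha=1.0, n_grid=25, C=None, b=None):
    """Grid approximation of equations (5) and (8).

    tau_of_theta: function mapping an azimuth (radians) to the length-N
                  vector of per-sensor delays in samples.
    passband/stopband: (omega_lo, omega_hi, theta_lo, theta_hi), with
                  omega in radians/sample and theta in radians.
    """
    NL = N * L
    Q = np.zeros((NL, NL))
    a = np.zeros(NL)
    for (w_lo, w_hi, t_lo, t_hi), weight, in_pass in (
            (passband, 1.0, True), (stopband, alpha, False)):
        for omega in np.linspace(w_lo, w_hi, n_grid):
            for theta in np.linspace(t_lo, t_hi, n_grid):
                g = steering(omega, tau_of_theta(theta), L)
                Q += weight * np.real(np.outer(g, g.conj()))  # G_R(omega, theta)
                if in_pass:
                    a += g.real                                # g_R(omega, theta)
    Q += 1e-8 * np.eye(NL)          # small diagonal load for invertibility
    Qa = np.linalg.solve(Q, a)      # Q^{-1} a: unconstrained solution (5)
    if C is None:
        return Qa
    QC = np.linalg.solve(Q, C.T)    # Q^{-1} C^T
    lam = np.linalg.solve(C @ QC, b - C @ Qa)
    return QC @ lam + Qa            # constrained solution (8)
```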
In an embodiment, the beamforming is performed by a delay-sum method. In another embodiment, the beamforming is performed by a filter-sum method.
FIG. 8 exemplarily illustrates a method for estimating a spatial location of the target sound signal from the target sound source by the sound source localization unit 202 using a steered response power-phase transform (SRP-PHAT). The SRP-PHAT combines the advantages of sound source localization methods, for example, the time difference of arrival (TDOA) method and the steered response power (SRP) method. The TDOA method performs the time delay estimation of the sound signals relative to a pair of spatially separated sound sensors 301. The estimated time delay is a function of both the location of the target sound source and the position of each of the sound sensors 301 in the microphone array 201. Because the position of each of the sound sensors 301 in the microphone array 201 is predefined, once the time delay is estimated, the location of the target sound source can be determined. In the SRP method, a filter-and-sum beamforming algorithm is applied to the microphone array 201 for sound signals in the direction of each of the disparate sound sources. The location of the target sound source corresponds to the direction in which the output of the filter-and-sum beamforming has the largest response power. The TDOA based localization is suitable under low to moderate reverberation conditions. The SRP method requires shorter analysis intervals and exhibits an elevated insensitivity to environmental conditions, but does not allow for use under excessive multi-path. The SRP-PHAT method disclosed herein combines the advantages of the TDOA method and the SRP method, has a decreased sensitivity to noise and reverberations compared to the TDOA method, and provides more precise location estimates than existing localization methods.
For direction i (0≤i≤360), the delay Dit is calculated 801 between the tth pair of the sound sensors 301 (t=1: all pairs). The correlation value corr(Dit) between the tth pair of the sound sensors 301 corresponding to the delay of Dit is then calculated 802. For the direction i (0≤i≤360), the correlation value is given 803 by:

corr_i = Σ_t corr(D_it)

Therefore, the spatial location of the target sound signal is given 804 by:

location = argmax_i corr_i
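For illustration only, this procedure can be sketched as follows, assuming NumPy, a GCC-PHAT pair correlation, and a caller-supplied table of pair delays Dit; all names are hypothetical:

```python
import numpy as np

def gcc_phat(x1, x2, nfft=1024):
    """GCC-PHAT correlation of one sensor pair as a function of lag."""
    X1, X2 = np.fft.rfft(x1, nfft), np.fft.rfft(x2, nfft)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12       # phase transform (PHAT) weighting
    return np.fft.irfft(cross, nfft)     # circular correlation over lags

def srp_phat(x, pair_delay, step=10):
    """Direction (degrees) maximizing the summed pair correlations.

    x:          (N, T) array of sensor signals
    pair_delay: function (i_deg, t) -> integer sample delay D_it for
                candidate direction i and sensor pair t
    """
    n = len(x)
    pairs = [(m, k) for m in range(n) for k in range(m + 1, n)]
    corrs = [gcc_phat(x[m], x[k]) for m, k in pairs]
    best_dir, best_power = 0, -np.inf
    for i in range(0, 360, step):        # e.g. every 10 degrees, as in FIG. 9A
        # Negative lags index from the end of the circular correlation.
        power = sum(c[pair_delay(i, t)] for t, c in enumerate(corrs))
        if power > best_power:
            best_dir, best_power = i, power
    return best_dir
```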
FIGS. 9A-9B exemplarily illustrate graphs showing the results of sound source localization performed using the steered response power-phase transform (SRP-PHAT). FIG. 9A exemplarily illustrates a graph showing the value of the SRP-PHAT for every 10°. The maximum value corresponds to the location of the target sound signal from the target sound source. FIG. 9B exemplarily illustrates a graph representing the estimated target sound signal from the target sound source and a ground truth.
FIG. 10 exemplarily illustrates a system for performing adaptive beamforming by the adaptive beamforming unit 203. The algorithm for fixed beamforming is disclosed with reference to equations (3) through (8) in the detailed description of FIG. 4, FIGS. 6A-6B, and FIGS. 7A-7C, and is extended herein to adaptive beamforming. Adaptive beamforming refers to a beamforming process where the directivity pattern of the microphone array 201 is adaptively steered in the direction of a target sound signal emitted by a target sound source in motion. Adaptive beamforming achieves better ambient noise suppression than fixed beamforming. This is because the target direction of arrival, which is assumed to be stable in fixed beamforming, changes with the movement of the target sound source. Moreover, the gains of the sound sensors 301, which are assumed to be uniform in fixed beamforming, exhibit significant variation in practice. All these factors reduce speech quality. On the other hand, adaptive beamforming adaptively performs beam steering and null steering; therefore, the adaptive beamforming method is more robust against steering error caused by the array imperfections mentioned above.
As exemplarily illustrated in FIG. 10, the adaptive beamforming unit 203 disclosed herein comprises a fixed beamformer 204, a blocking matrix 205, an adaptation control unit 208, and an adaptive filter 206. The fixed beamformer 204 adaptively steers the directivity pattern of the microphone array 201 in the direction of the spatial location of the target sound signal from the target sound source for enhancing the target sound signal, when the target sound source is in motion. The sound sensors 301 in the microphone array 201 receive the sound signals S1, . . . , S4, which comprise both the target sound signal from the target sound source and the ambient noise signals. The received sound signals are fed as input to the fixed beamformer 204 and the blocking matrix 205. The fixed beamformer 204 outputs a signal “b”. In an embodiment, the fixed beamformer 204 performs fixed beamforming by filtering and summing output sound signals from the sound sensors 301. The blocking matrix 205 outputs a signal “z” which primarily comprises the ambient noise signals. The blocking matrix 205 blocks the target sound signal from the target sound source and feeds the ambient noise signals to the adaptive filter 206 to minimize the effect of the ambient noise signals on the enhanced target sound signal.
The output “z” of the blocking matrix 205 may contain some weak target sound signals due to signal leakage. If the adaptation is active when the target sound signal, for example, speech, is present, the speech is cancelled out with the noise. Therefore, the adaptation control unit 208 determines when the adaptation should be applied. The adaptation control unit 208 comprises a target sound signal detector 208a and a step size adjusting module 208b. The target sound signal detector 208a of the adaptation control unit 208 detects the presence or absence of the target sound signal, for example, speech. The step size adjusting module 208b adjusts the step size for the adaptation process such that when the target sound signal is present, the adaptation is slow for preserving the target sound signal, and when the target sound signal is absent, the adaptation is quick for better cancellation of the ambient noise signals.
The adaptive filter 206 is a filter that adaptively updates its filter coefficients so that the adaptive filter 206 can operate in an unknown and changing environment. The adaptive filter 206 adaptively filters the ambient noise signals in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources. The adaptive filter 206 adapts its filter coefficients with the changes in the ambient noise signals, thereby eliminating distortion in the target sound signal, when the target sound source and the ambient noise signals are in motion. In an embodiment, the adaptive filtering is performed by a set of sub-band adaptive filters using sub-band adaptive filtering as disclosed in the detailed description of FIG. 11.
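For illustration only, the adaptive stage of FIG. 10 can be sketched for a single full-band channel as a normalized least mean squares (NLMS) noise canceller whose step size is switched by a speech-presence flag, mirroring the adaptation control unit 208; the names are hypothetical and this is not the patent's sub-band implementation:

```python
import numpy as np

def gsc_adaptive_stage(b, z, L=32, mu_speech=0.01, mu_noise=0.5,
                       speech_present=None):
    """Adaptive noise cancellation stage of the beamformer (NLMS).

    b: fixed-beamformer output (target sound signal plus residual noise)
    z: blocking-matrix output (ambient-noise reference)
    speech_present: optional boolean array; when True, a small step size
    protects the target sound signal, otherwise a large step size gives
    fast noise cancellation, as the adaptation control unit 208 does.
    """
    w = np.zeros(L)
    y = np.zeros_like(b)
    for k in range(L, len(b)):
        zk = z[k - L:k][::-1]                    # most recent noise samples first
        y[k] = b[k] - w @ zk                     # subtract the estimated noise
        mu = mu_noise
        if speech_present is not None and speech_present[k]:
            mu = mu_speech
        w += mu * y[k] * zk / (zk @ zk + 1e-8)   # NLMS coefficient update
    return y
```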
FIG. 11 exemplarily illustrates a system for sub-band adaptive filtering. Sub-band adaptive filtering involves separating a full-band signal into different frequency ranges called sub-bands prior to the filtering process. Sub-band adaptive filtering using sub-band adaptive filters leads to a higher convergence speed compared to using a full-band adaptive filter. Moreover, the noise reduction unit 207 disclosed herein is developed in a sub-band, whereby applying sub-band adaptive filtering provides the same sub-band framework for both beamforming and noise reduction, and thus saves on computational cost.
As exemplarily illustrated in FIG. 11, the adaptive filter 206 comprises an analysis filter bank 206a, an adaptive filter matrix 206b, and a synthesis filter bank 206c. The analysis filter bank 206a splits the enhanced target sound signal (b) from the fixed beamformer 204 and the ambient noise signals (z) from the blocking matrix 205 exemplarily illustrated in FIG. 10 into multiple frequency sub-bands. The analysis filter bank 206a performs an analysis step where the outputs of the fixed beamformer 204 and the blocking matrix 205 are split into frequency sub-bands. The sub-band adaptive filter 206 typically has a shorter impulse response than its full-band counterpart. The step size can be adjusted individually for each sub-band by the step size adjusting module 208b, which leads to a higher convergence speed compared to using a full-band adaptive filter.
The adaptive filter matrix 206b adaptively filters the ambient noise signals in each of the frequency sub-bands in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources. The adaptive filter matrix 206b performs an adaptation step, where the adaptive filter 206 is adapted such that the filter output only contains the target sound signal, for example, speech. The synthesis filter bank 206c synthesizes a full-band sound signal using the frequency sub-bands of the enhanced target sound signal. The synthesis filter bank 206c performs a synthesis step where the sub-band sound signal is synthesized into a full-band sound signal. Since the noise reduction and the beamforming are performed in the same sub-band framework, the noise reduction by the noise reduction unit 207, as disclosed in the detailed description of FIG. 13, is performed prior to the synthesis step, thereby reducing computation.
In an embodiment, the analysis filter bank 206a is implemented as a perfect-reconstruction filter bank, where the output of the synthesis filter bank 206c after the analysis and synthesis steps perfectly matches the input to the analysis filter bank 206a. That is, all the sub-band analysis filter banks 206a are factorized to operate on prototype filter coefficients, and a modulation matrix is used to take advantage of the fast Fourier transform (FFT). Both the analysis and synthesis steps require performing frequency shifts in each sub-band, which involves complex value computations with cosines and sinusoids. The method disclosed herein employs the FFT to perform the frequency shifts required in each sub-band, thereby minimizing the amount of multiply-accumulate operations. The implementation of the sub-band analysis filter bank 206a as a perfect-reconstruction filter bank ensures the quality of the target sound signal by ensuring that the sub-band analysis filter banks 206a do not distort the target sound signal itself.
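For illustration only, a simplified stand-in for the factorized prototype-filter implementation is a windowed FFT with 50% overlap, which also reconstructs its input almost exactly in the interior of the signal; the sketch below assumes NumPy and a Hann window, with hypothetical names:

```python
import numpy as np

def analysis(x, nfft=256, hop=128):
    """Split a full-band signal into sub-band frames (windowed FFT)."""
    win = np.hanning(nfft)
    return np.array([np.fft.rfft(win * x[i:i + nfft])
                     for i in range(0, len(x) - nfft, hop)])

def synthesis(frames, nfft=256, hop=128):
    """Overlap-add the sub-band frames back into a full-band signal."""
    win = np.hanning(nfft)
    out = np.zeros(hop * (len(frames) - 1) + nfft)
    norm = np.zeros_like(out)
    for i, F in enumerate(frames):
        out[i * hop:i * hop + nfft] += win * np.fft.irfft(F, nfft)
        norm[i * hop:i * hop + nfft] += win ** 2
    return out / np.maximum(norm, 1e-12)

x = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000.0)
y = synthesis(analysis(x))
print(np.max(np.abs(y[256:7000] - x[256:7000])))  # ~0: near-perfect reconstruction
```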
FIG. 12 exemplarily illustrates a graphical representation showing the performance of a perfect-reconstruction filter bank. The solid line represents the input signal to the analysis filter bank 206a, and the circles represent the output of the synthesis filter bank 206c after analysis and synthesis. As exemplarily illustrated in FIG. 12, the output of the synthesis filter bank 206c perfectly matches the input, and the filter bank is therefore referred to as a perfect-reconstruction filter bank.
FIG. 13 exemplarily illustrates a block diagram of a noise reduction unit 207 for performing noise reduction using, for example, a Wiener-filter based noise reduction algorithm. The noise reduction unit 207 performs noise reduction for further suppressing the ambient noise signals after adaptive beamforming, for example, by using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm. In an embodiment, the noise reduction unit 207 performs noise reduction in the multiple frequency sub-bands employed by the analysis filter bank 206a of the adaptive beamforming unit 203 for sub-band adaptive beamforming.
In an embodiment, the noise reduction is performed using the Wiener-filter based noise reduction algorithm. The noise reduction unit 207 explores the short-term and long-term statistics of the target sound signal, for example, speech, and the ambient noise signals, and the wide-band and narrow-band signal-to-noise ratio (SNR) to support Wiener gain filtering. The noise reduction unit 207 comprises a target sound signal statistics analyzer 207a, a noise statistics analyzer 207b, a signal-to-noise ratio (SNR) analyzer 207c, and a Wiener filter 207d. The target sound signal statistics analyzer 207a explores the short-term and long-term statistics of the target sound signal, for example, speech. Similarly, the noise statistics analyzer 207b explores the short-term and long-term statistics of the ambient noise signals. The SNR analyzer 207c of the noise reduction unit 207 explores the wide-band and narrow-band signal-to-noise ratio (SNR). After the spectrum of the noisy speech passes through the Wiener filter 207d, an estimate of the clean-speech spectrum is generated. The synthesis filter bank 206c, by an inverse process of the analysis filter bank 206a, reconstructs the clean speech into a full-band signal, given the estimated spectrum of the clean speech.
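For illustration only, the per-sub-band Wiener gain can be sketched as follows, assuming the SNR is estimated by spectral subtraction from the noisy-speech and noise power spectra; the names are hypothetical and the patent's statistics analyzers 207a-207c are more elaborate:

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, gain_floor=0.1):
    """Per-sub-band Wiener gain G = SNR / (1 + SNR).

    The SNR is estimated here by spectral subtraction from the noisy
    and noise power spectra; the floor limits the musical-noise
    artifacts caused by over-aggressive suppression.
    """
    snr = np.maximum(noisy_psd - noise_psd, 0.0) / (noise_psd + 1e-12)
    return np.maximum(snr / (1.0 + snr), gain_floor)

# Applied to one frame F of sub-band coefficients from the analysis
# filter bank, before the synthesis step:
#   clean_estimate = wiener_gain(np.abs(F) ** 2, noise_psd) * F
```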
FIG. 14 exemplarily illustrates a hardware implementation of the microphone array system 200 disclosed herein. The hardware implementation of the microphone array system 200 disclosed in the detailed description of FIG. 2 comprises the microphone array 201 having an arbitrary number of sound sensors 301 positioned in an arbitrary configuration, multiple microphone amplifiers 1401, one or more audio codecs 1402, a digital signal processor (DSP) 1403, a flash memory 1404, one or more power regulators 1405 and 1406, a battery 1407, a loudspeaker or a headphone 1408, and a communication interface 1409. The microphone array 201 comprises, for example, four or eight sound sensors 301 arranged in a linear or a circular microphone array configuration. The microphone array 201 receives the sound signals.
Consider an example where the microphone array 201 comprises four sound sensors 301 that pick up the sound signals. Four microphone amplifiers 1401 receive the output sound signals from the four sound sensors 301. The microphone amplifiers 1401, also referred to as preamplifiers, provide a gain to boost the power of the received sound signals for enhancing the sensitivity of the sound sensors 301. In an example, the gain of the preamplifiers is 20 dB.
The audio codec 1402 receives the amplified output from the microphone amplifiers 1401. The audio codec 1402 provides an adjustable gain level, for example, from about −74 dB to about 6 dB. The received sound signals are in an analog form. The audio codec 1402 converts the four channels of the sound signals in the analog form into digital sound signals. The preamplifiers may not be required for some applications. The audio codec 1402 then transmits the digital sound signals to the DSP 1403 for processing of the digital sound signals. The DSP 1403 implements the sound source localization unit 202, the adaptive beamforming unit 203, and the noise reduction unit 207.
After the processing, the DSP 1403 either stores the processed signal in a memory device for a recording application, or transmits the processed signal to the communication interface 1409. The recording application comprises, for example, storing the processed signal onto the memory device for the purposes of playing back the processed signal at a later time. The communication interface 1409 transmits the processed signal, for example, to a computer, the internet, or a radio for communicating the processed signal. In an embodiment, the microphone array system 200 disclosed herein implements a two-way communication device where the signal received from the communication interface 1409 is processed by the DSP 1403 and the processed signal is then played through the loudspeaker or the headphone 1408.
The flash memory 1404 stores the code for the DSP 1403 and compressed audio signals. When the microphone array system 200 boots up, the DSP 1403 reads the code from the flash memory 1404 into an internal memory of the DSP 1403 and then starts executing the code. In an embodiment, the audio codec 1402 can be configured for encoding and decoding audio or sound signals during the start up stage by writing to registers of the DSP 1403. For an eight-sensor microphone array 201, two four-channel audio codec 1402 chips may be used. The power regulators 1405 and 1406, for example, linear power regulators 1405 and switch power regulators 1406, provide appropriate voltage and current supply for all the components, for example, 201, 1401, 1402, 1403, etc., mechanically supported and electrically connected on a circuit board. A universal serial bus (USB) control is built into the DSP 1403. The battery 1407 is used for powering the microphone array system 200.
Consider an example where the microphone array system 200 disclosed herein is implemented on a mixed signal circuit board having a six-layer printed circuit board (PCB). Noisy digital signals easily contaminate the low voltage analog sound signals from the sound sensors 301. Therefore, the layout of the mixed signal circuit board is carefully partitioned to isolate the analog circuits from the digital circuits. Although both the inputs and outputs of the microphone amplifiers 1401 are in analog form, the microphone amplifiers 1401 are placed in a digital region of the mixed signal circuit board because of their high power consumption and switch amplifier nature.
The linear power regulators 1405 are deployed in an analog region of the mixed signal circuit board due to the low noise property exhibited by the linear power regulators 1405. Five power regulators, for example, 1405, are designed in the microphone array system 200 circuits to ensure quality. The switch power regulators 1406 achieve an efficiency of about 95% of the input power and have high output current capacity; however, their outputs are too noisy for analog circuits. The efficiency of the linear power regulators 1405 is determined by the ratio of the output voltage to the input voltage, which is lower than that of the switch power regulators 1406 in most cases. The regulator outputs utilized in the microphone array system 200 circuits are stable, quiet, and suitable for the low power analog circuits.
In an example, the microphone array system 200 is designed with a microphone array 201 having dimensions of 10 cm×2.5 cm×1.5 cm, a USB interface, and an assembled PCB supporting the microphone array 201, a DSP 1403 having a low power consumption design devised for portable devices, a four-channel codec 1402, and a flash memory 1404. The DSP 1403 chip is powerful enough to handle the computations in the microphone array system 200 disclosed herein. The hardware configuration of this example can be used for any microphone array configuration, with suitable modifications to the software. In an embodiment, the adaptive beamforming unit 203 of the microphone array system 200 is implemented as hardware with software instructions programmed on the DSP 1403. The DSP 1403 is programmed for beamforming, noise reduction, echo cancellation, and USB interfacing according to the method disclosed herein, and fine tuned for optimal performance.
FIGS. 15A-15C exemplarily illustrate a conference phone 1500 comprising an eight-sensor microphone array 201. The eight-sensor microphone array 201 comprises eight sound sensors 301 arranged in a configuration as exemplarily illustrated in FIG. 15A. A top view of the conference phone 1500 comprising the eight-sensor microphone array 201 is exemplarily illustrated in FIG. 15A. A front view of the conference phone 1500 comprising the eight-sensor microphone array 201 is exemplarily illustrated in FIG. 15B. A handset 1502 that can be placed in a base holder 1501 of the conference phone 1500 having the eight-sensor microphone array 201 is exemplarily illustrated in FIG. 15C. In addition to a conference phone 1500, the microphone array system 200 disclosed herein with broadband beamforming can be configured for a mobile phone, a tablet computer, etc., for speech enhancement and noise reduction.
FIG. 16A exemplarily illustrates a layout of an eight-sensor microphone array 201 for a conference phone 1500. Consider an example of a circular microphone array 201 in which eight sound sensors 301 are mounted on the surface of the conference phone 1500 as exemplarily illustrated in FIG. 15A. The conference phone 1500 has a removable handset 1502 on top, and hence the microphone array system 200 is configured to accommodate the handset 1502 as exemplarily illustrated in FIGS. 15A-15C. In an example, the circular microphone array 201 has a diameter of about four inches. Eight sound sensors 301, for example, microphones, M0, M1, M2, M3, M4, M5, M6, and M7 are distributed along a circle 302 on the conference phone 1500. Microphones M4-M7 are separated by 90 degrees from each other, and microphones M0-M3 are rotated counterclockwise by 60 degrees from microphones M4-M7, respectively.
FIG. 16B exemplarily illustrates a graphical representation of eight spatial regions to which the eight-sensor microphone array 201 of FIG. 16A responds. The space is divided into eight spatial regions with equal spaces centered at 15°, 60°, 105°, 150°, 195°, 240°, 285°, and 330° respectively. The adaptive beamforming unit 203 configures the eight-sensor microphone array 201 to automatically point to one of these eight spatial regions according to the location of the target sound signal from the target sound source as estimated by the sound source localization unit 202.
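For illustration only, the region selection can be sketched as quantizing the estimated direction of arrival to the nearest of the eight region centers; the helper below is hypothetical and not the patent's implementation:

```python
REGION_CENTERS = (15, 60, 105, 150, 195, 240, 285, 330)  # degrees, per FIG. 16B

def select_region(theta_deg):
    """Return the region center closest to the estimated direction,
    measuring angular distance around the circle."""
    def angular_distance(c):
        d = abs(theta_deg - c) % 360
        return min(d, 360 - d)
    return min(REGION_CENTERS, key=angular_distance)

print(select_region(350))  # -> 330: the beam for that region is applied
```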
FIGS. 16C-16D exemplarily illustrate computer simulations showing the steering of the directivity patterns of the eight-sensor microphone array 201 of FIG. 16A, in the directions of 15° and 60° respectively, in the frequency range 300 Hz to 5 kHz. FIG. 16C exemplarily illustrates the computer simulation result showing the directivity pattern of the microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 15°.
The computer simulation for verifying the performance of the adaptive beamforming unit 203 when the target sound signal is received from the target sound source in the spatial region centered at 15° uses the following parameters:
Sampling frequency fs=16 kHz,
FIR filter tap length L=20,
Passband (Θp, Ωp)={300~5000 Hz, −5°~35°}, where the designed spatial directivity pattern is 1, and
Stopband (Θs, Ωs)={300~5000 Hz, −180°~−15° and 45°~180°}, where the designed spatial directivity pattern is 0.
It can be seen that the directivity pattern of the microphone array 201 in the spatial region centered at 15° is enhanced while the sound signals from all other spatial regions are suppressed.
FIG. 16D exemplarily illustrates the computer simulation result showing the directivity pattern of the microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 60°. The computer simulation for verifying the performance of the adaptive beamforming unit 203 when the target sound signal is received from the target sound source in the spatial region centered at 60° uses the following parameters:
Sampling frequency fs=16 kHz,
FIR filter tap length L=20,
Passband (Θp, Ωp)={300~5000 Hz, 40°~80°}, where the designed spatial directivity pattern is 1, and
Stopband (Θs, Ωs)={300~5000 Hz, −180°~30° and 90°~180°}, where the designed spatial directivity pattern is 0.
It can be seen that the directivity pattern of the microphone array 201 in the spatial region centered at 60° is enhanced while the sound signals from all other spatial regions are suppressed. The other six spatial regions have similar parameters. Moreover, at all frequencies, the main lobe has the same level, which means the target sound signal has little distortion in frequency.
FIGS. 16E-16L exemplarily illustrate graphical representations showing the directivity patterns of the eight-sensor microphone array 201 of FIG. 16A in each of the eight spatial regions, where each directivity pattern is an average response from 300 Hz to 5000 Hz. The main lobe is about 10 dB higher than the side lobes, and therefore the ambient noise signals from other directions are highly suppressed compared to the target sound signal in the pass direction. The microphone array system 200 calculates the filter coefficients for the target sound signal, for example, speech signals, from each sound sensor 301 and combines the filtered signals to enhance the speech from any specific direction. Since speech covers a large range of frequencies, the method and system 200 disclosed herein cover broadband signals from 300 Hz to 5000 Hz.
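Once the per-sensor filters are available, the combination step is filter-and-sum beamforming: each channel is filtered with its own coefficients and the outputs are added. A minimal Python sketch, assuming an array of per-sensor FIR coefficients such as the coeffs array from the design sketch above:

import numpy as np
from scipy.signal import lfilter

def filter_and_sum(coeffs, x):
    # coeffs: (num_sensors, L) FIR coefficients, one row per sound sensor.
    # x:      (num_sensors, num_samples) time-domain sensor signals.
    # Returns the single-channel beamformed output.
    return sum(lfilter(h, [1.0], chan) for h, chan in zip(coeffs, x))

# Example with synthetic data: 8 sensors, one second of audio at 16 kHz.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16000))
y = filter_and_sum(rng.standard_normal((8, 20)), x)
print(y.shape)  # (16000,)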
FIGS. 16E, 16F, 16G, 16H, 16I, 16J, 16K, and 16L exemplarily illustrate graphical representations showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial regions centered at 15°, 60°, 105°, 150°, 195°, 240°, 285°, and 330°, respectively. The microphone array system 200 disclosed herein enhances the target sound signal from each of the directions 15°, 60°, 105°, 150°, 195°, 240°, 285°, and 330°, while suppressing the ambient noise signals from the other directions.
The microphone array system 200 disclosed herein can be implemented for a square microphone array configuration and for a rectangular microphone array configuration where a sound sensor 301 is positioned at each corner of the four-cornered array. The microphone array system 200 disclosed herein extends beamforming from planar to three dimensional sound sources.
FIG. 17A exemplarily illustrates a graphical representation of four spatial regions to which a four-sensor microphone array 201 for a wireless handheld device responds. The wireless handheld device is, for example, a mobile phone. Consider an example where the microphone array 201 comprises four sound sensors 301, for example, microphones, uniformly distributed around a circle 302 having a diameter of about two inches. This configuration is identical to positioning the four sound sensors 301, or microphones, on the four corners of a square, as checked numerically after this paragraph. The space is divided into four equal spatial regions centered at −90°, 0°, 90°, and 180°, respectively. The adaptive beamforming unit 203 configures the four-sensor microphone array 201 to automatically point to one of these spatial regions according to the location of the target sound signal from the target sound source as estimated by the sound source localization unit 202.
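The stated equivalence follows from the chord length between adjacent sensors: two points 90° apart on a circle of diameter D are separated by D·sin(45°) = D/√2, so a two-inch circle yields a square with a side of about 1.41 inches. A quick numerical check in Python:

import numpy as np

D = 2.0                                    # circle diameter, inches
side = D * np.sin(np.deg2rad(90.0 / 2.0))  # chord between adjacent sensors
print(f"square side = {side:.2f} in")      # ~1.41 in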
FIGS. 17B-17I exemplarily illustrate computer simulations showing the directivity patterns of the four-sensor microphone array 201 of FIG. 17A with respect to azimuth and frequency. The results of the computer simulations performed for verifying the performance of the adaptive beamforming unit 203 of the microphone array system 200 disclosed herein, for a sampling frequency fs = 16 kHz and an FIR filter tap length L = 20, are as follows:
For the spatial region centered at 0°:
Passband (Θp, Ωp) = {300-4000 Hz, −20° to 20°}, where the designed spatial directivity pattern is 1.
Stopband (Θs, Ωs) = {300-4000 Hz, −180° to −30° and 30° to 180°}, where the designed spatial directivity pattern is 0.
For the spatial region centered at 90°:
Passband (Θp, Ωp) = {300-4000 Hz, 70° to 110°}, where the designed spatial directivity pattern is 1.
Stopband (Θs, Ωs) = {300-4000 Hz, −180° to 60° and 120° to 180°}, where the designed spatial directivity pattern is 0. The directivity patterns for the spatial regions centered at −90° and 180° are similarly obtained.
FIG. 17B exemplarily illustrates the computer simulation result representing a three dimensional (3D) display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at −90°. FIG. 17C exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at −90°.
FIG. 17D exemplarily illustrates the computer simulation result representing a 3D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 0°. FIG. 17E exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 0°.
FIG. 17F exemplarily illustrates the computer simulation result representing a 3D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 90°. FIG. 17G exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 90°.
FIG. 17H exemplarily illustrates the computer simulation result representing a 3D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 180°. FIG. 17I exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 180°. The 3D displays of the directivity patterns in FIG. 17B, FIG. 17D, FIG. 17F, and FIG. 17H demonstrate that the passbands have the same height. The 2D displays of the directivity patterns in FIG. 17C, FIG. 17E, FIG. 17G, and FIG. 17I demonstrate that the passbands have the same width across frequency, which demonstrates the broadband properties of the microphone array 201.
FIGS. 18A-18B exemplarily illustrate a microphone array configuration for a tablet computer. In this example, four sound sensors 301 of the microphone array 201 are positioned on a frame 1801 of the tablet computer, for example, the iPad® of Apple Inc. Geometrically, the sound sensors 301 are distributed on the circle 302 as exemplarily illustrated in FIG. 18B. The radius of the circle 302 is equal to the width of the tablet computer. The angle θ between the sound sensors 301 M2 and M3 is determined so as to avoid spatial aliasing up to 4000 Hz. This microphone array configuration enhances a front speaker's voice and suppresses background ambient noise. The adaptive beamforming unit 203 configures the microphone array 201 to form an acoustic beam 1802 pointing frontward using the method and system 200 disclosed herein. The target sound signal, that is, the front speaker's voice within the range of Φ < 30°, is enhanced compared to the sound signals from other directions.
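The spatial aliasing constraint can be made concrete: for a highest protected frequency f_max, the spacing between adjacent sensors should not exceed half the shortest wavelength, c/(2·f_max). The following Python sketch computes the spacing limit and the corresponding angular bound on θ; the speed of sound of 343 m/s and the example radius are assumptions, as the specification does not state these numbers.

import numpy as np

C = 343.0       # speed of sound, m/s (assumed)
F_MAX = 4000.0  # highest frequency to protect from spatial aliasing, Hz

# Half-wavelength spacing limit at F_MAX.
d_max = C / (2.0 * F_MAX)  # about 0.043 m

def max_angle_deg(radius_m):
    # Largest angle theta between two sensors on a circle of the given radius
    # whose chord, 2 * radius * sin(theta / 2), stays within d_max.
    return np.rad2deg(2.0 * np.arcsin(d_max / (2.0 * radius_m)))

# Example with a hypothetical radius of 0.19 m (roughly a tablet's width).
print(f"d_max = {d_max * 100:.1f} cm, theta <= {max_angle_deg(0.19):.1f} deg")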
FIG. 18C exemplarily illustrates an acoustic beam 1802 formed using the microphone array configuration of FIGS. 18A-18B according to the method and system 200 disclosed herein.
FIGS. 18D-18G exemplarily illustrate graphs showing the processing results of the adaptive beamforming unit 203 and the noise reduction unit 207 for the microphone array configuration of FIG. 18B, in both the time domain and the spectral domain, for the tablet computer. Consider an example where a speaker is talking in front of the tablet computer with ambient noise signals on the side. FIG. 18D exemplarily illustrates a graph showing the performance of the microphone array 201 before performing beamforming and noise reduction, with a signal-to-noise ratio (SNR) of 15 dB. FIG. 18E exemplarily illustrates a graph showing the performance of the microphone array 201 after performing beamforming and noise reduction, according to the method disclosed herein, with an SNR of 15 dB. FIG. 18F exemplarily illustrates a graph showing the performance of the microphone array 201 before performing beamforming and noise reduction, with an SNR of 0 dB. FIG. 18G exemplarily illustrates a graph showing the performance of the microphone array 201 after performing beamforming and noise reduction, according to the method disclosed herein, with an SNR of 0 dB.
It can be seen from FIGS. 18D-18G that the performance graphs are noisier for the microphone array 201 before beamforming and noise reduction are performed. Therefore, the adaptive beamforming unit 203 and the noise reduction unit 207 of the microphone array system 200 disclosed herein suppress ambient noise signals while maintaining the clarity of the target sound signal, for example, the speech signal.
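For reference, the SNR figures quoted above measure the ratio of signal power to noise power in decibels. A minimal Python sketch, assuming separate access to the clean signal and the noise records (in practice only their mixture is observed, and the SNR must be estimated):

import numpy as np

def snr_db(signal, noise):
    # Signal-to-noise ratio in dB from separate signal and noise records.
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

# Example: scale white noise so the mixture sits at roughly 15 dB SNR.
rng = np.random.default_rng(1)
s = np.sin(2.0 * np.pi * 440.0 * np.arange(16000) / 16000.0)
n = rng.standard_normal(16000)
n *= np.sqrt(np.mean(s ** 2) / np.mean(n ** 2)) * 10.0 ** (-15.0 / 20.0)
print(f"{snr_db(s, n):.1f} dB")  # ~15.0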
FIGS. 19A-19F exemplarily illustrate tables showing different microphone array configurations and the corresponding values of the delay τn for the sound sensors 301 in each of the microphone array configurations. The broadband beamforming method disclosed herein can be used for microphone arrays 201 with arbitrary numbers of sound sensors 301 and arbitrary locations of the sound sensors 301. The sound sensors 301 can be mounted on surfaces or edges of any speech acquisition device. For any specific microphone array configuration, the only parameter that needs to be defined to achieve the beamformer coefficients is the value of τn for each sound sensor 301, as disclosed in the detailed description of FIG. 5, FIGS. 6A-6B, and FIGS. 7A-7C and as exemplarily illustrated in FIGS. 19A-19F. In an example, the microphone array configuration exemplarily illustrated in FIG. 19F is implemented on a handheld device for hands-free speech acquisition. In a hands-free and non-close talking scenario, a user prefers to talk at a distance rather than speaking close to the sound sensor 301, and may want to talk while watching the screen of the handheld device. The microphone array system 200 disclosed herein allows the handheld device to pick up sound signals from the direction of the speaker's mouth and suppress noise from other directions. The method and system 200 disclosed herein may be implemented on any device or equipment, for example, a voice recorder, where a target sound signal or speech needs to be enhanced.
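Under the far-field model referenced above, τn for an arbitrary planar layout is the propagation delay of sound sensor n relative to the array origin for a plane wave arriving from azimuth θ. A minimal Python sketch of that single parameter, assuming far-field arrival, a speed of sound of 343 m/s, and the sign convention that sensors closer to the source have positive τn:

import numpy as np

C = 343.0  # speed of sound, m/s (assumed)

def tau_samples(positions_m, azimuth_deg, fs=16000.0):
    # positions_m: (N, 2) sensor coordinates in meters, origin at array center.
    # azimuth_deg: direction of arrival of the target sound signal, degrees.
    # Returns the per-sensor delay tau_n in samples.
    th = np.deg2rad(azimuth_deg)
    u = np.array([np.cos(th), np.sin(th)])  # unit vector toward the source
    return (positions_m @ u) / C * fs

# Example: four sensors on the corners of a 5 cm square, source at 30 degrees.
square = 0.025 * np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
print(tau_samples(square, 30.0))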
The foregoing examples have been provided merely for the purpose of explanation and are in no way to be construed as limiting of the present invention disclosed herein. While the invention has been described with reference to various embodiments, it is understood that the words which have been used herein are words of description and illustration, rather than words of limitation. Further, although the invention has been described herein with reference to particular means, materials, and embodiments, the invention is not intended to be limited to the particulars disclosed herein; rather, the invention extends to all functionally equivalent structures, methods, and uses, such as are within the scope of the appended claims. Those skilled in the art, having the benefit of the teachings of this specification, may effect numerous modifications thereto, and changes may be made without departing from the scope and spirit of the invention in its aspects.