CN114495960B - Audio noise reduction filtering method, noise reduction filtering device, electronic device and storage medium - Google Patents

Audio noise reduction filtering method, noise reduction filtering device, electronic device and storage medium

Info

Publication number
CN114495960B
Authority
CN
China
Prior art keywords
signal
covariance matrix
audio
neural network
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111605349.4A
Other languages
Chinese (zh)
Other versions
CN114495960A (en)
Inventor
黄景标
陈庭威
林聚财
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202111605349.4A
Publication of CN114495960A
Application granted
Publication of CN114495960B
Status: Active
Anticipated expiration

Abstract

The application discloses an audio noise reduction filtering method, a noise reduction filtering device, electronic equipment and a computer storage medium, and relates to the technical field of audio signal processing. The method comprises: obtaining characteristic parameters of an audio input signal by using a preset neural network; calculating a filtering weight coefficient based on the characteristic parameters; processing the audio input signal based on the filtering weight coefficient to obtain a filtered audio signal; calculating a cost value based on the filtered audio signal and a real signal; and training the preset neural network by using the cost value. In this way, the audio noise reduction filtering method can effectively reduce noise in an audio system and improve voice quality.

Description

Audio noise reduction filtering method, noise reduction filtering device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of audio signal processing, in particular to an audio noise reduction filtering method, a noise reduction filtering device, electronic equipment and a computer storage medium.
Background
In real life, when people use mobile terminals such as mobile phones in hands-free mode, or use video conference terminals for video conferences, various noises exist in the surrounding environment, and the microphone collects this environmental noise in addition to the target signal. The noise therefore needs to be suppressed with a filtering technique. However, when the signal-to-noise ratio becomes low, current noise reduction techniques cause serious speech loss and broken words in the processed voice.
Disclosure of Invention
The application mainly solves the technical problem of providing an audio noise reduction filtering method, a noise reduction filtering device, electronic equipment and a computer storage medium, which are used for reducing noise and improving the quality of voice.
In order to solve the technical problems, the application adopts a technical scheme that an audio noise reduction filtering method is provided. The method comprises the following steps:
obtaining characteristic parameters of an audio input signal by using a preset neural network; calculating a filtering weight coefficient based on the characteristic parameters; processing the audio input signal based on the filtering weight coefficient to obtain a filtered audio signal; calculating a cost value based on the filtered audio signal and a real signal; and training the preset neural network by using the cost value.
In order to solve the technical problems, the application adopts another technical scheme that a noise reduction filter device is provided. The noise reduction filter device includes:
a preset neural network module, a calculation module, a filter module, a cost value generation module and a training module. The preset neural network module is used for acquiring characteristic parameters of an audio input signal. The calculation module is connected with the preset neural network module and is used for calculating a filtering weight coefficient based on the characteristic parameters. The filter module is connected with the calculation module and is used for processing the audio input signal based on the filtering weight coefficient to obtain a filtered audio signal. The calculation module is further used for calculating a cost value from the filtered audio signal and a real signal and sending the cost value to the preset neural network module, which is trained with the cost value.
In order to solve the technical problems, the application adopts another technical scheme that an electronic device is provided. The electronic device comprises a processor and a memory connected with the processor. Program data are stored in the memory, and the processor executes the program data stored in the memory to: obtain characteristic parameters of an audio input signal through a preset neural network; calculate a filtering weight coefficient based on the characteristic parameters; process the audio input signal based on the filtering weight coefficient to obtain a filtered audio signal; calculate a cost value based on the filtered audio signal and a real signal; and train the preset neural network with the cost value.
In order to solve the technical problems, the application adopts another technical scheme that a computer storage medium is provided. Program instructions are stored in the computer storage medium, and when the program instructions are executed they: obtain characteristic parameters of an audio input signal by using a preset neural network; calculate a filtering weight coefficient based on the characteristic parameters; process the audio input signal based on the filtering weight coefficient to obtain a filtered audio signal; calculate a cost value based on the filtered audio signal and a real signal; and train the preset neural network by using the cost value.
The beneficial effects of the application are as follows. Different from the prior art, the audio is processed by combining a preset neural network with conventional signal processing. The preset neural network is better at estimating the characteristic parameters that are difficult to estimate in conventional signal processing; the preset neural network is trained to obtain a final preset neural network model, from which the filtering weight coefficient is obtained and used in the conventional signal processing of the audio, so that the voice quality can be steadily ensured. The application combines the preset neural network with conventional signal processing so that the two complement each other: when the signal-to-noise ratio becomes low, the noise of the audio input signal can still be reduced and the voice quality of the final audio output is improved.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a noise reduction filter device according to the present application;
FIG. 2 is a flow chart of an embodiment of an audio noise reduction filtering method according to the present application;
FIG. 3 is a flowchart of one implementation of step S101 in FIG. 2;
FIG. 4 is a flowchart of another implementation of step S101 in FIG. 2;
FIG. 5 is a flowchart illustrating the step S102 in FIG. 2;
FIG. 6 is a schematic diagram showing a specific flow of step S401 in FIG. 5;
FIG. 7 is a flowchart illustrating the step S402 in FIG. 5;
FIG. 8 is a flow chart of another embodiment of the audio noise reduction filtering method of the present application;
FIG. 9 is a schematic diagram of an embodiment of an audio noise reduction filtering method of the present application;
FIG. 10 is a schematic diagram of an embodiment of an electronic device of the present application;
FIG. 11 is a schematic diagram of a computer storage medium according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The present application firstly proposes a noise reduction filter device 100, as shown in fig. 1, fig. 1 is a schematic structural diagram of an embodiment of the noise reduction filter device of the present application, where the noise reduction filter device 100 includes:
The noise reduction filter device 100 comprises a preset neural network module 110, a calculation module 120 and a filter module 130. The preset neural network module 110 is used for acquiring characteristic parameters of an audio input signal. The calculation module 120 is connected with the preset neural network module 110 and is used for calculating a filtering weight coefficient based on the characteristic parameters. The filter module 130 is connected with the calculation module 120 and is used for processing the audio input signal based on the filtering weight coefficient to obtain a filtered audio signal. The calculation module 120 is further used for calculating a cost value from the filtered audio signal and a real signal and sending the cost value to the preset neural network module 110, which is trained with the cost value.
The preset neural network module 110 is trained on the characteristic parameters of the audio input signal: the to-be-processed characteristic parameters of the audio input signal are input into the preset neural network module 110, and three processed characteristic parameters are obtained, namely a noise covariance matrix, a received-signal covariance matrix and an a priori signal-to-noise ratio.
Alternatively, the preset neural network module 110 may employ various common neural networks, such as a recurrent neural network (Recurrent Neural Network, RNN), a convolutional neural network (Convolutional Neural Network, CNN) or a convolutional recurrent neural network (Convolutional Recurrent Neural Network, CRNN).
Taking a CNN as the preset neural network module as an example, the CNN can be regarded as an end-to-end black box: one end is an input layer, the other end is an output layer, and in between are hidden layers, which may comprise convolution layers and pooling layers. When the to-be-processed characteristic parameters of the audio input signal are fed to the input layer, they are normalized there for convenient calculation; the convolution layers in the hidden part extract features so as to enhance the characteristics of the original signal and reduce noise, and the pooling layers reduce the amount of data as much as possible while preserving useful information. Finally, the three processed characteristic parameters of this training pass are output at the output layer, and the parameters in the hidden layers are updated in each training pass.
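For illustration only, the following is a minimal PyTorch sketch of a network with this general shape (convolution layers, a pooling layer, and three output heads for the noise covariance entries, the received-signal covariance entries and the a priori signal-to-noise ratio). The layer sizes, channel counts and the names FeatureEstimator, cov_y_head, cov_n_head and snr_head are assumptions made for this example and are not taken from the patent.

```python
import torch
import torch.nn as nn

N = 4  # number of frames per frequency bin, so each covariance matrix is N x N

class FeatureEstimator(nn.Module):
    """Hypothetical CNN mapping per-frame input features to the three characteristic parameters."""
    def __init__(self, in_dim=2 * N + 1, hidden=64):
        super().__init__()
        self.hidden_layers = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1),   # convolution layer
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),   # convolution layer
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                               # pooling layer
            nn.Flatten(),
        )
        self.cov_y_head = nn.Linear(hidden, N * N)  # entries of the received-signal covariance matrix
        self.cov_n_head = nn.Linear(hidden, N * N)  # entries of the noise covariance matrix
        self.snr_head = nn.Linear(hidden, 1)        # a priori signal-to-noise ratio (log domain)

    def forward(self, feats):                       # feats: (batch, in_dim, time)
        h = self.hidden_layers(feats)
        return self.cov_y_head(h), self.cov_n_head(h), self.snr_head(h)

# Toy usage: a batch of 8 bins with 16 time steps of context each.
net = FeatureEstimator()
cov_y_raw, cov_n_raw, log_snr = net(torch.randn(8, 2 * N + 1, 16))
```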
One end of the calculation module 120 is connected with the filter module 130; the calculation module 120 calculates the inter-frame correlation coefficient and the filtering weight coefficient from the noise covariance matrix, the received-signal covariance matrix and the a priori signal-to-noise ratio. The other end of the calculation module 120 is connected with the preset neural network module 110, from which it obtains the noise covariance matrix, the received-signal covariance matrix and the a priori signal-to-noise ratio. The calculation module 120 also inputs the audio signal filtered by the filter module 130 and the real signal into the cost function to calculate the cost value, and outputs the cost value to the preset neural network module 110, so that the preset neural network module 110 continues training until the final preset neural network module 110 is obtained.
The filter module 130 is connected with the preset neural network module 110, trains the preset neural network module 110 to obtain three processed characteristic parameters, calculates to obtain filtering weight parameters based on a formula, and inputs the filtering weight parameters into the filter module 130 to process an audio input signal to obtain a filtered audio signal. The filter module 130 may be any filter of the audio filters, which is not limited herein.
The audio input signal is processed by combining the preset neural network module 110 and the filter module 130, so that the problem that certain filtering weight coefficients are difficult to estimate in the traditional signal processing method is solved, the noise reduction effect and the voice quality are effectively balanced, and the final effect of voice processing is improved.
The present application further provides an audio noise reduction filtering method, as shown in fig. 2, fig. 2 is a schematic flow chart of an embodiment of an audio noise reduction filtering method according to an embodiment of the present application, and the method may be used in the noise reduction filtering device 100, and specifically includes steps S101 to S105:
and step S101, acquiring characteristic parameters of an audio input signal by using a preset neural network.
Specifically, sample audio is obtained, the to-be-processed characteristic parameters of the audio input signal are obtained and input into the preset neural network for training, and the processed characteristic parameters of the audio input signal are obtained from the preset neural network. The processed characteristic parameters comprise a noise covariance matrix, a received-signal covariance matrix and an a priori signal-to-noise ratio.
Alternatively, the present embodiment may implement step S101 by a method as shown in fig. 3, where the specific implementation steps include steps S201 to S202:
step S201, based on the audio input signal, acquiring a real part and an imaginary part of the audio input signal.
Taking the microphone received-signal model Y_{k,l} = X_{k,l} + N_{k,l} as an example, where X_{k,l} represents the target signal, N_{k,l} represents the noise signal, Y_{k,l} represents the audio input signal of the microphone, k represents the frequency point and l represents the time frame. Since the operation at each frequency point is the same, the frequency-point subscript is omitted hereinafter.
For the noise reduction algorithm, all noise reduction methods can be regarded as calculating a weight for the microphone audio input signal and recovering the target signal through that weight, namely:

X̂_l = w_l^* Y_l    (1)

where w_l is the filtering weight coefficient and X̂_l is the filtered audio signal.
In the multi-frame algorithm, equation (1) can be modified as:

X̂_l = w_l^H y_l    (2)

wherein:

y_l = [Y_l, Y_{l-1}, …, Y_{l-N+1}]^T    (3)

where ^T denotes the transpose of a matrix, ^H denotes the conjugate transpose of a matrix, w_l is the filtering weight coefficient as above (now a vector), and N is typically 4, i.e. 4 history frames are taken.
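As a concrete reading of equations (2) and (3), the following numpy sketch stacks the current frame and the history frames of one frequency bin and applies a weight vector; the function names and the toy data are assumptions made for the example.

```python
import numpy as np

def multiframe_vector(stft_bin, l, n=4):
    """Eq. (3): y_l = [Y_l, Y_{l-1}, ..., Y_{l-n+1}]^T for one frequency bin."""
    return np.array([stft_bin[l - i] for i in range(n)], dtype=complex).reshape(-1, 1)

def apply_filter(w, y):
    """Eq. (2): filtered output X_hat_l = w^H y_l."""
    return (w.conj().T @ y).item()

# Toy usage: one frequency bin over 10 frames with an arbitrary placeholder weight vector.
stft_bin = np.random.randn(10) + 1j * np.random.randn(10)
y_l = multiframe_vector(stft_bin, l=5)
w_l = np.ones((4, 1), dtype=complex) / 4
x_hat_l = apply_filter(w_l, y_l)
```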
Now, assuming that the target signal and the noise signal in the signal received by the microphone are incoherent, then:

Φ_{y,l} = Φ_{x,l} + Φ_{n,l}    (4)

where Φ_{x,l} represents the target-signal covariance matrix, Φ_{n,l} represents the noise covariance matrix, and Φ_{y,l} represents the received-signal covariance matrix.
For the noise covariance matrix Φ_{n,l} and the received-signal covariance matrix Φ_{y,l}, the real part and the imaginary part of the audio input signal are obtained from the audio input signal Y_l using equation (5):

y_{c,l} = [Real(Y_l), Imag(Y_l)]^T    (5)

where Real(Y_l) represents the real part of the audio input signal Y_l, Imag(Y_l) represents the imaginary part of the audio input signal Y_l, y_{c,l} represents the matrix of the real and imaginary parts of Y_l, and ^T denotes the transpose of a matrix.
Step S202, obtaining a noise covariance matrix and a received signal covariance matrix based on the real part and the imaginary part.
Taking the microphone received-signal model Y_{k,l} = X_{k,l} + N_{k,l} as an example, based on the real part and the imaginary part of the audio input signal, a mapping implemented by the preset neural network may be used, and the mapped values are then arranged in the form of a Hermitian matrix to obtain the noise covariance matrix and the received-signal covariance matrix. The estimated values of the received-signal covariance matrix and the noise covariance matrix may be obtained with equations (6) and (7):

Φ̂_{y,l} = Hermitian{F_1(y_{c,l})}    (6)
Φ̂_{n,l} = Hermitian{F_2(y_{c,l})}    (7)

where Hermitian{·} means that the values in the braces are arranged in the format of a Hermitian matrix, Φ̂_{y,l} is the estimate of the received-signal covariance matrix, Φ̂_{n,l} is the estimate of the noise covariance matrix, y_{c,l} is the matrix of real and imaginary parts of the audio input signal, and F_1 and F_2 denote different mapping modes of the preset neural network; the preset neural network may adopt various common neural networks, such as RNN, CNN and CRNN.
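One plausible way to realize the Hermitian{·} arrangement of equations (6) and (7) is sketched below in numpy: n*n real network outputs fill the real diagonal and the real and imaginary parts of the strict upper triangle, and the lower triangle is the conjugate mirror. The function name and the layout of the output vector are assumptions made for this example.

```python
import numpy as np

def hermitian_from_vector(v, n=4):
    """Arrange n*n real values into an n x n Hermitian matrix: the first n values
    fill the (real) diagonal, the rest fill the real and imaginary parts of the
    strict upper triangle; the lower triangle is the conjugate mirror."""
    m = np.zeros((n, n), dtype=complex)
    m[np.diag_indices(n)] = v[:n]
    iu = np.triu_indices(n, k=1)
    k = n * (n - 1) // 2
    m[iu] = v[n:n + k] + 1j * v[n + k:n + 2 * k]
    m[(iu[1], iu[0])] = np.conj(m[iu])
    return m

# Example with n=4: a network head would emit 4 + 2*6 = 16 real numbers per matrix.
phi_n_hat = hermitian_from_vector(np.random.randn(16))
assert np.allclose(phi_n_hat, phi_n_hat.conj().T)
```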
Alternatively, the present embodiment may implement step S101 by a method as shown in fig. 4, and the specific implementation steps include steps S301 to S302:
Step S301 is to acquire the absolute value of the audio input signal and calculate the base 10 logarithm of the absolute value.
Taking the microphone received-signal model Y_{k,l} = X_{k,l} + N_{k,l} as an example, the value log10|Y_l| is obtained based on the audio input signal Y_l.
And step S302, obtaining the prior signal-to-noise ratio based on logarithms.
Based on the logarithm, a mapping implemented by the preset neural network is used to obtain the a priori signal-to-noise ratio; the estimated value of the a priori signal-to-noise ratio may be obtained with equation (8):

ξ̂_l = F_3(log10|Y_l|)    (8)

where ξ̂_l represents the estimate of the a priori signal-to-noise ratio and F_3 denotes another mapping mode of the preset neural network; the preset neural network may adopt various common neural networks, such as RNN, CNN and CRNN.
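A small numpy sketch of the corresponding input feature and of a sanity check on the network output is given below; the function names and the flooring constants are assumptions and are not specified in the patent.

```python
import numpy as np

def log_magnitude_feature(stft_frame, floor=1e-8):
    """Feature for eq. (8): log10|Y_l| per frequency bin, floored so the logarithm stays finite."""
    return np.log10(np.maximum(np.abs(stft_frame), floor))

def snr_from_network_output(log_snr_out, min_snr=1e-3):
    """Map a hypothetical network output back to a positive a priori SNR estimate,
    clipped away from zero so that 1/xi in eq. (10) stays well defined."""
    return np.maximum(10.0 ** np.asarray(log_snr_out, dtype=float), min_snr)

# Toy usage on one random STFT frame (a stand-in for a trained network's output).
frame = np.random.randn(257) + 1j * np.random.randn(257)
feat = log_magnitude_feature(frame)
xi_hat = snr_from_network_output(feat)
```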
And step S102, calculating a filtering weight coefficient based on the characteristic parameter.
The calculation module calculates the filtering weight coefficient through formulas based on the characteristic parameters of the audio input signal, where the characteristic parameters of the audio input signal comprise the noise covariance matrix, the received-signal covariance matrix and the a priori signal-to-noise ratio.
Alternatively, the present embodiment may implement step S102 by a method as shown in fig. 5, where the specific implementation steps include steps S401 to S402:
step S401, calculating the inter-frame correlation coefficient based on the noise covariance matrix, the received signal covariance matrix and the prior signal-to-noise ratio.
Taking the microphone received-signal model Y_{k,l} = X_{k,l} + N_{k,l} as an example, it is assumed that the multi-frame target signal can be decomposed as follows:

x_l = γ_{x,l} X_l + x'_l    (9)

where γ_{x,l} X_l denotes the component of the multi-frame signal that is correlated with the current frame, x'_l denotes the uncorrelated component in the multi-frame signal, and γ_{x,l} denotes the inter-frame correlation coefficient.
For voice signals, preserving the correlated components between frames preserves the quality of the voice signal.
The calculation module calculates the inter-frame correlation coefficient by treating the estimates of the noise covariance matrix, the received-signal covariance matrix and the a priori signal-to-noise ratio produced by the preset neural network as the true values.
Alternatively, the present embodiment may implement step S401 by a method as shown in fig. 6, and the specific implementation steps include steps S501 to S506:
Step S501, obtaining the sum of one and the reciprocal of the prior signal-to-noise ratio, and obtaining a first product of the sum, the covariance matrix of the received signal and a preset matrix.
Step S502, obtaining a second product of a transpose of the preset matrix, a covariance matrix of the received signal and the preset matrix.
Step S503, obtaining the third product of the reciprocal of the prior signal-to-noise ratio, the noise covariance matrix and the preset matrix.
Step S504, obtaining a fourth product of the transpose of the preset matrix, the noise covariance matrix and the preset matrix.
Step S505, a first quotient of the first product and the second product is obtained, and a second quotient of the third product and the fourth product is obtained.
And S506, obtaining the difference between the first quotient and the second quotient to obtain the inter-frame correlation coefficient, where the preset matrix e = [1, 0, …, 0]^T.
Taking the microphone received-signal model Y_{k,l} = X_{k,l} + N_{k,l} as an example, steps S501 to S506 can be implemented using formula (10):

γ_{x,l} = (1 + 1/ξ_l) Φ_{y,l} e / (e^T Φ_{y,l} e) − (1/ξ_l) Φ_{n,l} e / (e^T Φ_{n,l} e)    (10)

where γ_{x,l} is the inter-frame correlation coefficient, ξ_l is the a priori signal-to-noise ratio, Φ_{y,l} is the received-signal covariance matrix, Φ_{n,l} is the noise covariance matrix, e = [1, 0, …, 0]^T is the preset matrix and e^T is the transpose of the preset matrix e. Equation (10) follows from γ_{x,l} = Φ_{x,l} e / (e^T Φ_{x,l} e) together with Φ_{x,l} = Φ_{y,l} − Φ_{n,l}, where Φ_{x,l} is the target-signal covariance matrix, and the definition of the a priori signal-to-noise ratio.
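The following numpy sketch computes equation (10) directly from the two covariance estimates and the a priori SNR estimate; the function name and the toy matrices are assumptions made for the example.

```python
import numpy as np

def interframe_correlation(phi_y, phi_n, xi, n=4):
    """Eq. (10): inter-frame correlation coefficient gamma_{x,l}."""
    e = np.zeros((n, 1)); e[0, 0] = 1.0                          # preset matrix e = [1, 0, ..., 0]^T
    term_y = (1.0 + 1.0 / xi) * (phi_y @ e) / (e.T @ phi_y @ e)
    term_n = (1.0 / xi) * (phi_n @ e) / (e.T @ phi_n @ e)
    return term_y - term_n                                        # column vector of length n

# Toy usage with random Hermitian positive-definite matrices and xi = 2.0.
a = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
b = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
phi_n = b @ b.conj().T + np.eye(4)
phi_y = phi_n + a @ a.conj().T            # received = target + noise (eq. 4)
gamma_x = interframe_correlation(phi_y, phi_n, xi=2.0)
```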
Step S402, calculating a filtering weight coefficient based on the inter-frame correlation coefficient and the noise covariance matrix.
Taking the microphone received-signal model Y_{k,l} = X_{k,l} + N_{k,l} as an example, the calculation module calculates the filtering weight coefficient from the inter-frame correlation coefficient obtained above and the estimate of the noise covariance matrix produced by the preset neural network, the estimate being treated as the true value.
Alternatively, the present embodiment may implement step S402 by a method as shown in fig. 7, where the specific implementation steps include steps S601 to S603:
Step S601, obtaining a fifth product of an inverse matrix of the noise covariance matrix and the inter-frame correlation coefficient.
Step S602, obtaining the conjugate transpose of the inter-frame correlation coefficient, the inverse matrix of the noise covariance matrix and the sixth product of the inter-frame correlation coefficient.
And step S603, obtaining a third quotient of the fifth product and the sixth product to obtain a filtering weight coefficient.
Taking the microphone received-signal model Y_{k,l} = X_{k,l} + N_{k,l} as an example, according to the definition of the minimum variance distortionless response (MVDR), the filtering weight coefficient minimizes the residual noise power while keeping the correlated speech component undistorted:

min_w  w_l^H Φ_{n,l} w_l   subject to   w_l^H γ_{x,l} = 1    (11)

Steps S601 to S603 can be implemented using formula (12):

ŵ_l = Φ_{n,l}^{-1} γ_{x,l} / (γ_{x,l}^H Φ_{n,l}^{-1} γ_{x,l})    (12)

where Φ_{n,l} denotes the noise covariance matrix, Φ_{n,l}^{-1} denotes its inverse matrix, ŵ_l denotes the estimated filtering weight coefficient, γ_{x,l} denotes the inter-frame correlation coefficient, and γ_{x,l}^H denotes its conjugate transpose.
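A minimal numpy sketch of equation (12) is given below; the solve call avoids forming the explicit inverse, and the final assertion checks the distortionless constraint w^H γ_x = 1. The function name and the toy inputs are assumptions made for this example.

```python
import numpy as np

def mvdr_weights(phi_n, gamma_x):
    """Eq. (12): w_l = Phi_n^{-1} gamma_x / (gamma_x^H Phi_n^{-1} gamma_x)."""
    num = np.linalg.solve(phi_n, gamma_x)          # Phi_n^{-1} gamma_x without an explicit inverse
    den = (gamma_x.conj().T @ num).item()          # gamma_x^H Phi_n^{-1} gamma_x
    return num / den

# Toy usage: the weights satisfy the distortionless constraint w^H gamma_x = 1.
b = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
phi_n = b @ b.conj().T + np.eye(4)                 # Hermitian positive definite
gamma_x = np.array([[1.0], [0.5 + 0.1j], [0.2], [0.05]])
w = mvdr_weights(phi_n, gamma_x)
assert np.isclose((w.conj().T @ gamma_x).item(), 1.0)
```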
And step S103, processing the audio input signal based on the filtering weight coefficient to obtain a filtered audio signal.
The filter module processes the audio input signal based on the filtering weight coefficient calculated by the calculation module to obtain a filtered audio signal.
Step S104, calculating the cost value based on the filtered audio signal and the real signal.
The calculation module calculates a cost value based on the filtered audio signal and the real signal.
Step S105, training the preset neural network by using the cost value.
The preset neural network module trains the preset neural network with the cost value until the network converges or the preset number of training iterations is reached; the trained preset neural network module then processes subsequent audio input signals.
The present application further provides an audio noise reduction filtering method, as shown in fig. 8, fig. 8 is a flow chart of another embodiment of the audio noise reduction filtering method of the present application, and specific implementation steps include steps S701 to S706:
And step S701, acquiring characteristic parameters of an audio input signal by using a preset neural network.
Step S701 corresponds to step S101, and will not be described again.
Step S702, calculating filtering weight coefficients based on the characteristic parameters.
Step S702 corresponds to step S102, and will not be described again.
Step S703, processing the audio input signal based on the filtering weight coefficient to obtain a filtered audio signal.
Step S703 corresponds to step S103, and will not be described again.
And step S704, constructing a cost function for the preset neural network.
Wherein the cost function employs equation (13):

J = Σ_{k,l} |X_{k,l} − X̂_{k,l}|²    (13)

where X_{k,l} represents the real signal and X̂_{k,l} represents the filtered audio signal.
For a neural network, training targets are needed. Because there are three characteristic parameters here, there would be three training targets, which is unfriendly to network training. In this scheme, therefore, the three characteristic parameters are converted into the weight coefficient through formula (12), the weight coefficient is multiplied with the unprocessed signal to obtain the processed signal, and the real signal is used as the single training target.
Step S705, calculating the cost value of the filtered audio signal and the real signal by using the cost function.
The filtered audio signal and the real signal are input into the cost function, thereby obtaining the cost value.
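Read as a mean squared error, the cost value can be computed as in the numpy sketch below (the training itself would use a differentiable framework so that the gradient can flow back into the preset neural network); the function name is an assumption.

```python
import numpy as np

def cost_value(x_true, x_filtered):
    """Cost between the real signal and the filtered signal, read as a mean squared error (eq. 13)."""
    diff = np.asarray(x_true) - np.asarray(x_filtered)
    return float(np.mean(np.abs(diff) ** 2))

# Toy usage on two complex spectrograms of the same shape.
x_true = np.random.randn(257, 100) + 1j * np.random.randn(257, 100)
x_filtered = x_true + 0.1 * (np.random.randn(257, 100) + 1j * np.random.randn(257, 100))
print(cost_value(x_true, x_filtered))
```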
Step S706, training the preset neural network by using the cost value.
Step S706 corresponds to step S105, and will not be described again.
Optionally, the audio noise reduction filtering method of the present embodiment further includes step S707:
and step S707, responding to convergence of the preset neural network, and processing the audio input signal by utilizing the corresponding filtering weight coefficient to acquire a target signal.
When the preset neural network converges or reaches the preset number of training iterations, the cost value is at its current minimum and a trained preset neural network model is obtained. The audio input signal is then processed with the filtering weight coefficient obtained through this preset neural network model, so that the target signal can be obtained.
In an application scenario, as shown in fig. 9, fig. 9 is a schematic diagram illustrating an implementation of an embodiment of an audio noise reduction filtering method according to the present application. The dashed line part in the figure represents the flow direction that the preset neural network training needs to be increased, and the part is not needed in the actual inference process.
As shown in fig. 9, the audio input signal is sent to the preset neural network module 110 for training to obtain three different processed characteristic parameters, namely the noise covariance matrix, the received-signal covariance matrix and the a priori signal-to-noise ratio. The calculation module 120 calculates the inter-frame correlation coefficient from these processed characteristic parameters according to the formulas, and then calculates the filtering weight coefficient from the inter-frame correlation coefficient and the noise covariance matrix. The filter module 130 filters the signal based on the filtering weight coefficient and outputs it, and the filtered signal and the real signal are sent to the cost function in the calculation module 120 to calculate the cost, which is propagated back to the preset neural network module 110.
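Tying the previous sketches together, the per-bin inference path of fig. 9 can be summarized as below in numpy: equation (10), then equation (12), then the filtering of equation (2). The function name and argument layout are assumptions; during training the same path is followed and the cost of equation (13) is propagated back into the network that produced the estimates.

```python
import numpy as np

def denoise_bin(y_frames, phi_y_hat, phi_n_hat, xi_hat):
    """One frequency bin, one time frame: y_frames holds the current frame and
    the history frames (newest first); phi_*_hat and xi_hat are network estimates."""
    n = y_frames.shape[0]
    e = np.zeros((n, 1)); e[0, 0] = 1.0
    gamma = (1.0 + 1.0 / xi_hat) * (phi_y_hat @ e) / (e.T @ phi_y_hat @ e) \
          - (1.0 / xi_hat) * (phi_n_hat @ e) / (e.T @ phi_n_hat @ e)        # eq. (10)
    w = np.linalg.solve(phi_n_hat, gamma)
    w = w / (gamma.conj().T @ w)                                            # eq. (12)
    return (w.conj().T @ y_frames.reshape(-1, 1)).item()                    # eq. (2)
```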
Optionally, the present application further proposes an electronic device 200. As shown in fig. 10, fig. 10 is a schematic structural diagram of an electronic device 200 according to an embodiment of the application, and the electronic device 200 includes a processor 201 and a memory 202 connected to the processor 201.
The processor 201 may also be referred to as a CPU (Central Processing Unit). The processor 201 may be an integrated circuit chip with signal processing capabilities. The processor 201 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The memory 202 is used to store program data required for the operation of the processor 201.
The processor 201 is configured to execute program data stored in the memory 202 to obtain feature parameters of an audio input signal by using a preset neural network, calculate a filtering weight coefficient based on the feature parameters, process the audio input signal based on the filtering weight coefficient to obtain a filtered audio signal, calculate a cost value based on the filtered audio signal and a real signal, and train the preset neural network by using the cost value.
Optionally, the present application further proposes a computer storage medium 300. Fig. 11 is a schematic diagram of a computer storage medium 300 according to an embodiment of the application.
The computer storage medium 300 of the embodiment of the application stores therein program instructions 310, and the program instructions 310 are executed to obtain characteristic parameters of an audio input signal by using a preset neural network, calculate a filtering weight coefficient based on the characteristic parameters, process the audio input signal based on the filtering weight coefficient to obtain a filtered audio signal, calculate a cost value based on the filtered audio signal and a real signal, and train the preset neural network by using the cost value.
The program instructions 310 may form a program file stored in the storage medium in the form of a software product, so that a computer device (which may be a personal computer, a server, a network device, etc.) or a processor executes all or part of the steps of the methods according to the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code, or a terminal device such as a computer, a server, a mobile phone or a tablet.
Compared with the prior art, the audio noise reduction filtering method of the application processes the audio by combining a preset neural network with conventional signal processing. The preset neural network is better at estimating the characteristic parameters that are difficult to estimate in conventional signal processing, and it is trained to obtain a final preset neural network model, from which the filtering weight coefficient is obtained and used in the conventional signal processing of the audio, so that the voice quality can be steadily ensured. The application combines the preset neural network with conventional signal processing so that the two complement each other: when the signal-to-noise ratio becomes low, the noise of the audio input signal can still be reduced and the voice quality of the final audio output is improved.
The foregoing description is only illustrative of the present application and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (10)

CN202111605349.4A | 2021-12-25 | 2021-12-25 | Audio noise reduction filtering method, noise reduction filtering device, electronic device and storage medium | Active | CN114495960B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111605349.4A | 2021-12-25 | 2021-12-25 | CN114495960B (en): Audio noise reduction filtering method, noise reduction filtering device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111605349.4A | 2021-12-25 | 2021-12-25 | CN114495960B (en): Audio noise reduction filtering method, noise reduction filtering device, electronic device and storage medium

Publications (2)

Publication Number | Publication Date
CN114495960A (en) | 2022-05-13
CN114495960B (en) | 2025-08-08

Family

ID=81496570

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN202111605349.4A | Active | CN114495960B (en) | 2021-12-25 | 2021-12-25 | Audio noise reduction filtering method, noise reduction filtering device, electronic device and storage medium

Country Status (1)

Country | Link
CN (1) | CN114495960B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN117275500A (en)* | 2022-06-14 | 2023-12-22 | 青岛海尔科技有限公司 | Dereverberation method, device, equipment and storage medium
CN115798501A (en)* | 2022-12-07 | 2023-03-14 | 深圳市中科蓝讯科技股份有限公司 | Voice noise reduction method and device and electronic equipment
CN116030821A (en)* | 2023-03-27 | 2023-04-28 | 北京探境科技有限公司 | Audio processing method, device, electronic equipment and readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN108429994A (en)* | 2017-02-15 | 2018-08-21 | 阿里巴巴集团控股有限公司 | Audio identification, echo cancel method, device and equipment
CN110634500A (en)* | 2019-10-14 | 2019-12-31 | 达闼科技成都有限公司 | Method for calculating prior signal-to-noise ratio, electronic device and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
JP3381731B2 (en)* | 1992-08-14 | 2003-03-04 | ソニー株式会社 | Noise reduction device
US10691975B2 (en)* | 2017-07-19 | 2020-06-23 | XNOR.ai, Inc. | Lookup-based convolutional neural network
CN111862952B (en)* | 2019-04-26 | 2024-04-12 | 华为技术有限公司 | A de-reverberation model training method and device
EP3793210A1 (en)* | 2019-09-11 | 2021-03-17 | Oticon A/s | A hearing device comprising a noise reduction system
CN110889197B (en)* | 2019-10-31 | 2023-04-21 | 佳禾智能科技股份有限公司 | Adaptive feed-forward active noise reduction method based on neural network, computer-readable storage medium, electronic device
CN111091805B (en)* | 2019-11-15 | 2023-05-26 | 佳禾智能科技股份有限公司 | Feedback type noise reduction method based on neural network
CN111128214B (en)* | 2019-12-19 | 2022-12-06 | 网易(杭州)网络有限公司 | Audio noise reduction method and device, electronic equipment and medium
CN113011433B (en)* | 2019-12-20 | 2023-10-13 | 杭州海康威视数字技术股份有限公司 | Filtering parameter adjusting method and device
CN111681665A (en)* | 2020-05-20 | 2020-09-18 | 浙江大华技术股份有限公司 | Omnidirectional noise reduction method, equipment and storage medium
CN112017681B (en)* | 2020-09-07 | 2022-05-13 | 思必驰科技股份有限公司 | Method and system for enhancing directional voice
CN113012710A (en)* | 2021-01-28 | 2021-06-22 | 广州朗国电子科技有限公司 | Audio noise reduction method and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN108429994A (en)* | 2017-02-15 | 2018-08-21 | 阿里巴巴集团控股有限公司 | Audio identification, echo cancel method, device and equipment
CN110634500A (en)* | 2019-10-14 | 2019-12-31 | 达闼科技成都有限公司 | Method for calculating prior signal-to-noise ratio, electronic device and storage medium

Also Published As

Publication Number | Publication Date
CN114495960A (en) | 2022-05-13

Similar Documents

Publication | Title
CN114495960B (en) | Audio noise reduction filtering method, noise reduction filtering device, electronic device and storage medium
CN113674172B (en) | An image processing method, system, device and storage medium
US8325909B2 (en) | Acoustic echo suppression
US20150163587A1 (en) | Audio Information Processing Method and Apparatus
CN110265054B (en) | Speech signal processing method, device, computer readable storage medium and computer equipment
CN113506582B (en) | Voice signal identification method, device and system
CN113744748A (en) | Network model training method, echo cancellation method and device
CN117219107B (en) | Training method, device, equipment and storage medium of echo cancellation model
CN114373473A (en) | Simultaneous noise reduction and dereverberation through low-delay deep learning
CN109102821A (en) | Delay time estimation method, system, storage medium and electronic equipment
CN113314135B (en) | Voice signal identification method and device
US20230403506A1 (en) | Multi-channel echo cancellation method and related apparatus
CN113053406B (en) | Voice signal identification method and device
CN117174105A (en) | Speech noise reduction and dereverberation method based on improved deep convolutional network
US8515096B2 (en) | Incorporating prior knowledge into independent component analysis
CN112997249B (en) | Voice processing method, device, storage medium and electronic equipment
CN114245117A (en) | Multi-sampling rate multiplexing network reconstruction method, device, equipment and storage medium
WO2025001564A1 (en) | Audio processing method and apparatus, device, and computer-readable storage medium
CN114724578B (en) | Audio signal processing method, device and storage medium
CN115662394A (en) | Voice extraction method, device, storage medium and electronic device
CN114822479A (en) | Signal howling suppression method, device, computer equipment and storage medium
CN114900730B (en) | Method and device for acquiring delay estimation steady state value, electronic equipment and storage medium
CN115440236B (en) | Echo suppression method, device, electronic device and storage medium
CN115426576B (en) | Sound feedback suppression method, electronic device and storage medium
CN112260662B (en) | A method for adaptive filtering, computer equipment and device

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
