CN119094934B - Microphone balance self-adaptive adjustment method, device and equipment - Google Patents

Microphone balance self-adaptive adjustment method, device and equipment

Info

Publication number
CN119094934B
CN119094934B
Authority
CN
China
Prior art keywords
frequency
spectrum
singer
gain
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411209383.3A
Other languages
Chinese (zh)
Other versions
CN119094934A (en)
Inventor
张强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jinfuqiang Technology Co ltd
Original Assignee
Shenzhen Jinfuqiang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinfuqiang Technology Co ltd
Priority to CN202411209383.3A
Publication of CN119094934A
Application granted
Publication of CN119094934B
Status: Active
Anticipated expiration

Links

Classifications

Landscapes

Abstract


The present invention discloses a method, device and equipment for adaptively adjusting microphone equalization, wherein the method includes: collecting and analyzing the audio signal of the singer; performing time-frequency conversion on the collected audio signal to generate a real-time spectrum; comparing the real-time spectrum with the reference spectrum, and calculating the difference between each frequency band and the reference spectrum, wherein the reference spectrum is a pre-recorded ideal sound or a target spectrum set in the system; adjusting the gain setting of the equalizer in real time according to the difference of each frequency band, so that the microphone can automatically adjust the frequency response in real time according to the vocal characteristics of the singer. The technical solution of the present invention aims to improve the accuracy and flexibility of microphone equalization adjustment and enhance the recording and performance effects.

Description

Microphone balance self-adaptive adjustment method, device and equipment
Technical Field
The present invention relates to the field of microphone technologies, and in particular, to a method, an apparatus, and a device for adaptively adjusting microphone balance.
Background
In modern music recording and live performance, the selection and setup of microphones is critical to sound quality. Because singers differ in vocal range, timbre, and musical style, microphone equalization must adapt to these differences to ensure the best possible sound during recording and performance. Traditional microphone equalization relies on manual adjustment and demands a high degree of expertise from the audio engineer. This approach is not only time-consuming and laborious, but also cannot respond in real time to the changes in range and tone quality that occur while a singer performs.
To improve the efficiency and accuracy of microphone equalization, some automatic equalization techniques have emerged that adjust the microphone's equalization settings based on the singer's vocal characteristics. However, existing automatic equalization techniques typically rely on preset parameters that do not adequately account for a singer's individual vocal characteristics or the overtone structure of a particular piece of music. As a result, the accuracy and flexibility of microphone equalization remain limited, and the prior art often falls short in complex acoustic environments or professional recordings that demand high sound quality.
Disclosure of Invention
The invention mainly aims to provide a microphone balance self-adaptive adjustment method, device and equipment, which aim at improving the accuracy and flexibility of microphone balance adjustment and improving recording and performance effects.
In order to achieve the above purpose, the following technical scheme is adopted in the embodiment of the application.
In a first aspect, the present invention provides a method for adaptively adjusting microphone equalization, including:
Collecting and analyzing audio signals of singers;
performing time-frequency conversion on the acquired audio signals to generate a real-time spectrogram;
comparing the real-time frequency spectrum with a reference frequency spectrum, and calculating the difference value between each frequency band and the reference frequency spectrum, wherein the reference frequency spectrum is a prerecorded ideal sound or a target frequency spectrum set in a system;
According to the difference value of each frequency band, the gain setting of the equalizer is adjusted in real time, so that the microphone can automatically adjust the frequency response in real time according to the sound characteristics of singers.
In one possible implementation manner, after the step of adjusting the gain setting of the equalizer in real time according to the difference value of each frequency band, the method further includes:
continuously monitoring the adjusted audio output, and checking the effect of balance adjustment;
Feedback from the singer is collected and the gain setting of the equalizer is adjusted.
In one possible implementation manner, the step of continuously monitoring the adjusted audio output and checking the effect of the equalization adjustment further includes:
filtering the ambient noise ensures that the equalization adjustments are focused on the useful signal.
In a possible implementation manner, the step of collecting and analyzing the audio signal of the singer includes:
Collecting and analyzing the range width of the singer, and marking the lowest and highest frequency ranges in which the singer can accurately sound;
collecting and analyzing the singer's overtone characteristics, the overtone characteristics being the frequency components other than the fundamental frequency that determine the timbre and texture of the sound.
In one possible implementation manner, the step of performing time-frequency conversion on the collected audio signal to generate a real-time spectrogram includes:
dividing the audio signal into small blocks of windows, wherein the time of each window is 20ms-50ms;
Fourier transform is applied to each window to obtain spectral information.
In one possible implementation, in the step of adjusting the gain setting of the equalizer in real time according to the difference value of each frequency band, the frequency-band difference is calculated as D(f,t) = T(f) − |X(f,t)|, where D(f,t) is the spectral difference at frequency f at the current time t, T(f) is the target (reference) spectrum, and |X(f,t)| is the magnitude of the input signal spectrum.
In one possible implementation, the gain is set as G(f,t) = αD(f,t) + Gprev(f,t), where α is an adjustment coefficient that controls the rate of gain adjustment and Gprev(f,t) is the gain at the previous moment, used to smooth the transition.
In a second aspect, the present invention provides a microphone equalization adaptive adjustment apparatus, comprising:
the acquisition module is used for acquiring and analyzing the audio signals of singers;
the conversion module is used for performing time-frequency conversion on the collected audio signals and generating a real-time spectrogram;
the calculation gain module is used for comparing the difference between the frequency spectrum of the current audio frequency and the reference frequency spectrum and calculating the frequency band to be adjusted;
And the gain adjusting module is used for adjusting the gain setting of the equalizer in real time.
In a third aspect, the present invention provides a device comprising a processor and a memory coupled to the processor, the memory being configured to store computer program code comprising computer instructions which, when read from the memory by the processor, cause the device to perform the microphone equalization adaptive adjustment method as described in the first aspect or any possible implementation of the first aspect.
The technical scheme of the invention comprises the steps of generating a real-time spectrogram by carrying out time-frequency conversion on an acquired audio signal, comparing the real-time spectrogram with a reference spectrogram, calculating the difference value between each frequency band and the reference spectrogram, wherein the reference spectrogram is a prerecorded ideal sound or a target spectrogram set in a system, and adjusting the gain setting of an equalizer in real time according to the difference value of each frequency band, so that a microphone can automatically adjust the frequency response in real time according to the sound characteristics of singers. By analyzing the voice domain and overtone characteristics of the singer in detail and combining the reference audio, the balance setting of the microphone is dynamically adjusted, so that the self-adaptive tone quality optimization is realized, and the microphone can better capture and present the voice characteristics of the singer through the fine adjustment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an embodiment of a microphone equalization adaptive adjustment method according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Aiming at the problems in the background technology, the embodiment of the application adopts the following technical scheme.
In a first aspect, referring to fig. 1 in combination, the present invention provides a method for adaptively adjusting microphone equalization, including the following steps:
s10, collecting and analyzing the audio signals of singers.
In this step, the singer is first recorded singing the scale "do, re, mi, fa, sol, la, si, do..." in the specified order, covering every pitch in their range. High-quality microphones and recording equipment are used during collection to ensure the clarity and accuracy of the recording. The recorded sound is then time-frequency analyzed, usually with a Fourier transform (FFT) or short-time Fourier transform (STFT), to convert the audio signal into the frequency domain; the frequency components of each scale note are analyzed, the upper and lower limits of the singer's range are determined, and the range width is calculated. From the frequency-analysis results, the lowest and highest frequencies at which the singer can accurately produce sound are marked, which determines the singer's range width and provides a basis for the subsequent adaptive adjustment of the equalizer.
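By way of illustration, the sketch below estimates a vocal range from a recorded scale. The sample rate, frame length, silence threshold, strongest-bin pitch estimate, and the synthetic test scale are assumptions made for the example and are not prescribed here.

```python
# A rough sketch of the range-width analysis, assuming a mono NumPy signal at
# 44.1 kHz, 50 ms frames and a simple "strongest FFT bin" pitch estimate.
import numpy as np

def estimate_vocal_range(signal, sample_rate=44100, frame_ms=50, min_peak=1.0):
    frame_len = int(sample_rate * frame_ms / 1000)
    pitches = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame))
        if spectrum.max() < min_peak:               # skip silent frames
            continue
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
        pitches.append(freqs[np.argmax(spectrum)])  # crude fundamental estimate
    return min(pitches), max(pitches)               # lowest / highest sung frequency

# Quick check with a synthetic scale of held notes from about 131 Hz to 262 Hz:
fs = 44100
notes = [130.8, 146.8, 164.8, 174.6, 196.0, 220.0, 246.9, 261.6]
scale = np.concatenate([np.sin(2 * np.pi * f * np.arange(fs // 4) / fs) for f in notes])
print(estimate_vocal_range(scale, fs))              # roughly (131, 262), within FFT resolution
```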
Secondly, a representative piece of music sung by the singer is collected and recorded with high-quality recording equipment. The recording is analyzed by time-frequency conversion, focusing on the fundamental and overtone components of the spectrum, and the overtone distribution and intensity in each frequency band (bass, midrange, treble) are recorded. Overtones are the frequency components other than the fundamental frequency, and they determine the timbre and texture of the sound. Analyzing the overtones reveals the unique characteristics of the singer's voice.
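A minimal sketch of such an overtone analysis is given below. It assumes the fundamental frequency of the analyzed note is already known; the frame length, harmonic count and nearest-bin lookup are illustrative simplifications, and a production analyzer would use proper peak picking.

```python
# Relative harmonic magnitudes of one analysis frame, assuming a known fundamental.
import numpy as np

def harmonic_profile(frame, sample_rate, fundamental, n_harmonics=8):
    """Return harmonic magnitudes relative to the fundamental (harmonic 1)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    mags = {}
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * fundamental))   # nearest bin to k * f0
        mags[k] = spectrum[idx]
    base = max(mags[1], 1e-12)                             # avoid division by zero
    return {k: m / base for k, m in mags.items()}

# Example: a 220 Hz tone with quieter 2nd and 3rd harmonics.
fs = 44100
t = np.arange(fs) / fs
note = np.sin(2*np.pi*220*t) + 0.4*np.sin(2*np.pi*440*t) + 0.2*np.sin(2*np.pi*660*t)
print(harmonic_profile(note[:4096], fs, fundamental=220.0))
```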
S20, performing time-frequency conversion on the collected audio signals to generate a real-time spectrogram.
In this step, time-frequency conversion transforms the audio signal from the time domain into the frequency domain so that its frequency components can be analyzed; this is essential for audio processing, especially in applications such as equalization and noise reduction. Generating a real-time spectrogram makes this frequency information visible. Because the audio signal is continuous and cannot be processed all at once, it is split into shorter frames (typically 20-40 ms). The frames may overlap slightly to reduce boundary effects, which allows the energy distribution of the audio signal at different frequencies to be observed visually. A window function (e.g., a Hanning or Hamming window) is applied to each frame to reduce spectral leakage and keep the spectrum sharp. A Fast Fourier Transform (FFT) is then applied to each frame to convert the time-domain signal into the frequency domain, yielding the spectral information of that frame, which represents the energy distribution of the audio signal across frequencies. A spectrogram is a two-dimensional image in which the horizontal axis represents time, the vertical axis represents frequency, and color or brightness represents the energy at that time and frequency. As audio is continuously collected and processed, the spectrogram is updated dynamically and shows the changing frequency content in real time.
The spectrogram makes the different frequency components of the audio signal easy to see. For example, a low-frequency region (e.g., below 100 Hz) may correspond to the bass portion, while a high-frequency region (e.g., several kilohertz) may correspond to treble or noise. The spectrogram also helps identify noise or interference: if a particular frequency segment appears abnormally prominent, an interfering signal may be present. Equalizer parameters can be adjusted in real time by observing the spectrogram; for example, if the low-frequency part is too strong, the gain of the low band can be reduced, and if the high-frequency part is insufficient, the gain of the high band can be increased. During recording or transmission, the spectrogram can also be used to diagnose audio problems such as missing or abnormally boosted frequency bands.
S30, comparing the real-time frequency spectrum with a reference frequency spectrum, and calculating the difference value between each frequency band and the reference frequency spectrum, wherein the reference frequency spectrum is a prerecorded ideal sound or a target frequency spectrum set in the system.
In this step, the analyzed spectra of the singer's bass, midrange, and treble segments are compared with reference audio (a standard sound or a target sound), and the difference between the frequency content of each band and the reference is calculated; the reference spectrum is a pre-recorded ideal sound or a target spectrum set in the system. The reference spectrum is an ideal frequency distribution against which the spectrum of the actual audio signal is compared and adjusted to achieve the desired sound quality. It can come from several sources, depending on the application scenario and the goal: a normalized spectrum defined by a standard or specification, an ideal spectral shape representing the desired sound quality in a specific application, or the spectrum of an actual high-quality recording used as reference audio. The reference spectrum gives the audio processing an explicit target, making equalization adjustment more systematic and reducing the error of subjective adjustment; by comparing against it, defects in the audio signal, such as missing or overly strong frequency bands, can be identified and corrected, optimizing the overall sound quality.
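By way of illustration, the sketch below performs a band-by-band comparison against a reference spectrum. The band edges and the use of mean magnitudes per band are assumptions made for the example; only the existence of a per-band difference is required above.

```python
# Per-band difference between the live magnitude spectrum and a reference spectrum.
import numpy as np

BANDS = {"bass": (20.0, 250.0), "mid": (250.0, 4000.0), "treble": (4000.0, 16000.0)}  # Hz, assumed

def band_differences(freqs, live_mag, ref_mag):
    """D(band) = mean reference magnitude minus mean live magnitude in each band."""
    diffs = {}
    for name, (lo, hi) in BANDS.items():
        idx = (freqs >= lo) & (freqs < hi)
        diffs[name] = float(ref_mag[idx].mean() - live_mag[idx].mean())
    return diffs

# Usage with the magnitude spectrum of one analysis frame:
#   freqs = np.fft.rfftfreq(frame_len, 1 / sample_rate)
#   diffs = band_differences(freqs, np.abs(np.fft.rfft(frame)), reference_magnitude)
```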
And S40, adjusting the gain setting of the equalizer in real time according to the difference value of each frequency band, so that the microphone can automatically adjust the frequency response in real time according to the sound characteristics of singers.
In this step, the equalizer's gain settings are adjusted automatically according to the difference value of each frequency band: for example, if the singer's frequency response in the bass band is weak, the gain of that band may be increased, and vice versa. Applying these adjustments to the microphone's equalizer allows the microphone to adapt its frequency response in real time to the singer's vocal characteristics, and continuous monitoring and adjustment ensure that the microphone always delivers the best sound quality in different singing environments.
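A short sketch of such a per-band update is shown below, following the smoothed gain rule G(f,t) = αD(f,t) + Gprev(f,t) described later in this document. The band names, α value and unity starting gain are assumptions for illustration.

```python
# Blend each band's spectral difference into its previous gain.
def update_band_gains(prev_gains, diffs, alpha=0.5):
    """G(band) = alpha * D(band) + G_prev(band)."""
    return {band: alpha * d + prev_gains.get(band, 1.0) for band, d in diffs.items()}

# Example: a weak bass band (positive difference) gets a small boost.
gains = update_band_gains({"bass": 1.0, "mid": 1.0, "treble": 1.0},
                          {"bass": 0.2, "mid": 0.0, "treble": -0.1})
print(gains)   # {'bass': 1.1, 'mid': 1.0, 'treble': 0.95}
```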
In one possible implementation manner, after the step of adjusting the gain setting of the equalizer in real time according to the difference value of each frequency band, the method further includes the following steps:
s401, continuously monitoring the adjusted audio output, and checking the effect of balance adjustment.
In this step, the audio output is continuously monitored with audio analysis tools (e.g., a spectrum analyzer or an RMS level detector), and key features of the adjusted audio, such as spectral distribution, volume level, and signal-to-noise ratio (SNR), are extracted. These features are used to evaluate the effect of the equalization adjustment. The adjusted audio spectrum is compared in real time with the predefined reference spectrum to check whether each frequency band matches its expected value, with particular attention to the energy distribution in the low, mid, and high frequencies; if the actual spectrum deviates from the reference spectrum, the deviation is recorded as the basis for further adjustment. An automatic feedback mechanism can be built into the audio processing system so that it adjusts the equalization parameters according to the real-time monitoring results. For example, the system may set a threshold: if the energy of a band continues to deviate from the reference value by more than that threshold, the system automatically adjusts the equalizer to attenuate or boost the gain of that band. After each automatic adjustment, the system monitors the audio output again and verifies whether the new equalization settings improve the sound quality. Through continuous monitoring, the equalizer can dynamically adapt to changing audio inputs and environments and thereby maintain stable sound quality.
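The fragment below sketches this threshold-based automatic correction. The 3 dB threshold, 0.1 gain step and band names are illustrative assumptions; only the idea of a threshold that triggers attenuation or boosting of a band is stated above.

```python
# Nudge a band's gain whenever its sustained error exceeds the threshold.
def monitor_and_correct(band_error_db, gains, threshold_db=3.0, step=0.1):
    for band, err in band_error_db.items():
        if err > threshold_db:        # band persistently too weak -> boost
            gains[band] += step
        elif err < -threshold_db:     # band persistently too strong -> attenuate
            gains[band] -= step
    return gains

# Example: bass 4 dB below the reference, treble 5 dB above it.
print(monitor_and_correct({"bass": 4.0, "mid": 0.5, "treble": -5.0},
                          {"bass": 1.0, "mid": 1.0, "treble": 1.0}))
# {'bass': 1.1, 'mid': 1.0, 'treble': 0.9}
```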
S402, collecting feedback of singers, and adjusting gain setting of an equalizer.
In this step, the singer can submit feedback quickly through a physical control panel or foot switch, through a simple application or tablet touch screen, or through a voice-recognition function integrated into the recording software (e.g., "increase low frequency" or "decrease high frequency"). While the feedback is collected, the current equalizer settings and audio output are recorded so they can be associated with it. The collected feedback is then classified to determine which bands' gains need adjusting; for example, if the singer reports that the bass is deficient, the gain of the low band should be increased. The feedback is verified against the audio monitoring data (such as the real-time spectrogram) to ensure it is consistent with the actual audio. Finally, the equalizer's gain settings are adjusted based on the feedback, and the feedback-and-optimization cycle is repeated until the desired effect is achieved.
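As a simple illustration of classifying such feedback, the sketch below maps a few canned phrases to equalizer actions. The phrase list, band names and 0.1 step are assumptions; a real system would drive this from the control panel, app or voice-recognition input described above.

```python
# Map a handful of illustrative feedback phrases to per-band gain changes.
FEEDBACK_ACTIONS = {
    "increase low frequency": ("bass", +0.1),
    "decrease low frequency": ("bass", -0.1),
    "increase high frequency": ("treble", +0.1),
    "decrease high frequency": ("treble", -0.1),
}

def apply_feedback(gains, phrase):
    band, delta = FEEDBACK_ACTIONS.get(phrase.strip().lower(), (None, 0.0))
    if band is not None:
        gains[band] = gains.get(band, 1.0) + delta
    return gains

print(apply_feedback({"bass": 1.0, "treble": 1.0}, "Increase low frequency"))
# {'bass': 1.1, 'treble': 1.0}
```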
In one possible implementation manner, the step of continuously monitoring the adjusted audio output and checking the effect of the equalization adjustment further includes:
filtering the ambient noise ensures that the equalization adjustments are focused on the useful signal.
In this step, the ambient noise is filtered out in combination with noise cancellation techniques, such as adaptive filters, ensuring that the adaptive equalization focuses on the useful signal.
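One common realization of such an adaptive filter is the LMS algorithm. The sketch below assumes a second microphone that captures ambient noise only; the tap count and step size are illustrative and would need tuning in practice.

```python
# Minimal LMS adaptive noise canceller.
import numpy as np

def lms_denoise(primary, noise_ref, taps=32, mu=0.01):
    """Subtract an adaptively filtered copy of the noise reference from the primary signal."""
    w = np.zeros(taps)
    out = np.zeros_like(primary, dtype=float)
    for n in range(taps, len(primary)):
        x = noise_ref[n - taps:n][::-1]   # most recent reference samples, newest first
        y = w @ x                         # estimate of the noise leaking into the primary mic
        e = primary[n] - y                # error = cleaned sample
        w += mu * e * x                   # LMS weight update
        out[n] = e
    return out
```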
In a possible implementation manner, the step of collecting and analyzing the audio signal of the singer includes:
S101, acquiring and analyzing the range width of the singer, and marking the lowest and highest frequency ranges of the singer capable of accurately sounding.
In this step, the singer is recorded singing the scale "do, re, mi, fa, sol, la, si, do..." in the specified order, covering every pitch in their range. High-quality microphones and recording equipment are used during collection to ensure the clarity and accuracy of the recording. The recorded sound is time-frequency analyzed, usually with a Fourier transform (FFT) or short-time Fourier transform (STFT), to convert the audio signal into the frequency domain; the frequency components of each scale note are analyzed, the upper and lower limits of the singer's range are determined, and the range width is calculated. The lowest and highest frequencies at which the singer can accurately produce sound are then marked according to the frequency-analysis results, thereby determining the singer's range width.
S102, collecting and analyzing overtone characteristics of singers, wherein the overtone characteristics refer to other frequency components except a fundamental frequency and are used for determining timbre and texture of sound.
In this step, a representative piece of music sung by the singer is likewise collected and recorded with high-quality recording equipment. The recording is analyzed by time-frequency conversion, focusing on the fundamental and overtone components of the spectrum, and the overtone distribution and intensity in each frequency band (bass, midrange, treble) are recorded. Overtones are the frequency components other than the fundamental frequency, and they determine the timbre and texture of the sound. Analyzing the overtones reveals the unique characteristics of the singer's voice.
In one possible implementation manner, the step of performing time-frequency conversion on the collected audio signal to generate a real-time spectrogram includes:
dividing the audio signal into small blocks of windows, wherein the time of each window is 20ms-50ms;
Fourier transform is applied to each window to obtain spectral information.
In this step, the length of each window is determined first, typically 20 ms to 50 ms. This range is short enough to capture rapid changes in the audio signal and long enough to give good frequency-domain resolution. For example, at a sampling rate of 44.1 kHz (common in audio processing), 20 ms corresponds to 882 samples and 50 ms corresponds to 2205 samples. The audio signal is then divided into small blocks according to the chosen window size, and the corresponding sample data is extracted for each window. Overlapping windows (e.g., 50% overlap) are typically used to reduce window-boundary effects and better capture transients. Before each window is Fourier transformed, a window function (e.g., Hanning, Hamming, or Blackman) is applied to reduce spectral leakage: the window function smooths the signal at its boundaries and suppresses the spurious spectral components caused by truncation, and multiplying each window's samples by the selected window function yields the smoothed signal data. A Fast Fourier Transform (FFT) is then performed on the windowed data, converting the time-domain signal into the frequency domain. For each window, the spectral information, typically the amplitude and phase of the frequency components, is extracted and stored; it can be further processed or visualized (e.g., as a spectrogram) to analyze the frequency characteristics of the audio signal and how they change. Arranging the spectral information of successive windows in time order produces a spectrogram, in which the vertical axis is frequency, the horizontal axis is time, and color or brightness represents the energy of each frequency component. By observing the spectrogram, the frequency components of the audio signal can be tracked over time to identify specific audio features or events (e.g., notes, noise, or speech segments).
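A sketch of this windowing-and-FFT procedure is shown below, using 50 ms Hann windows with 50% overlap and a magnitude-only output. These defaults mirror the values discussed above; phase would also be kept if the frames were to be resynthesized later.

```python
# Build a magnitude spectrogram from overlapping windowed frames.
import numpy as np

def magnitude_spectrogram(signal, sample_rate=44100, win_ms=50, overlap=0.5):
    win_len = int(sample_rate * win_ms / 1000)        # 2205 samples at 44.1 kHz
    hop = max(1, int(win_len * (1 - overlap)))
    window = np.hanning(win_len)                      # reduces spectral leakage
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))     # magnitude spectrum of this frame
    freqs = np.fft.rfftfreq(win_len, d=1.0 / sample_rate)
    times = np.arange(len(frames)) * hop / sample_rate
    return freqs, times, np.array(frames).T           # rows = frequency, columns = time
```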
In one possible implementation, in the step of adjusting the gain setting of the equalizer in real time according to the difference value of each frequency band, the frequency-band difference is calculated as D(f,t) = T(f) − |X(f,t)|, where D(f,t) is the spectral difference at frequency f at the current time t, T(f) is the target (reference) spectrum, and |X(f,t)| is the magnitude of the input signal spectrum.
In one possible implementation, the gain is set as G(f,t) = αD(f,t) + Gprev(f,t), where α is an adjustment coefficient that controls the rate of gain adjustment and Gprev(f,t) is the gain at the previous moment, used to smooth the transition.
It will be appreciated that an input audio signal x(t) must first be converted into a frequency-domain representation. A common method is the short-time Fourier transform (STFT): X(f,t) = STFT{x(t)}, where X(f,t) is the spectrum at time t and f is the frequency. The spectral difference is then calculated against a target spectrum T(f), which may be a preset ideal spectrum or a reference spectrum computed from historical data. For the current time t, the difference between the input spectrum and the target spectrum is D(f,t) = T(f) − |X(f,t)|, where D(f,t) is the spectral difference at frequency f at time t and |X(f,t)| is the magnitude of the input signal spectrum. Based on D(f,t), the gain adjustment G(f,t) for each frequency band is computed, typically according to the magnitude of the difference: G(f,t) = αD(f,t) + Gprev(f,t), where α is an adjustment coefficient that controls the rate of gain adjustment and Gprev(f,t) is the gain at the previous time, used to smooth the transition. The gain is then applied to equalize the original audio signal, usually with a digital filter (e.g., an IIR or FIR filter), so that for each band Y(f,t) = G(f,t)·X(f,t), where Y(f,t) is the spectrum after the gain is applied. The adjusted spectrum Y(f,t) is converted back into a time-domain signal by the inverse short-time Fourier transform: y(t) = ISTFT{Y(f,t)}. Finally, feedback is taken from the adjusted output signal y(t), the above steps are repeated, and the gain G(f,t) is continually adjusted so that the output signal gradually approaches the reference spectrum T(f).
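The sketch below strings these steps together, using SciPy's STFT/ISTFT for the time-frequency conversion. The window length, the value of α, and the way the target magnitudes T(f) are obtained (averaging a reference recording) are assumptions for the example rather than details fixed above; a practical system would also clamp the gains.

```python
# One pass of the adaptive equalization loop: STFT -> D -> G -> apply -> ISTFT.
import numpy as np
from scipy.signal import stft, istft

def adaptive_equalize(x, fs, target_mag, alpha=0.5, nperseg=2048):
    """Apply G(f,t) = alpha*D(f,t) + G_prev(f,t) frame by frame."""
    f, t, X = stft(x, fs=fs, nperseg=nperseg)       # X(f,t), complex spectrum
    gains = np.ones(len(f))                         # G_prev starts at unity gain
    Y = np.empty_like(X)
    for ti in range(X.shape[1]):
        d = target_mag - np.abs(X[:, ti])           # D(f,t) = T(f) - |X(f,t)|
        gains = alpha * d + gains                   # G(f,t) = alpha*D(f,t) + G_prev(f,t)
        Y[:, ti] = gains * X[:, ti]                 # Y(f,t) = G(f,t) * X(f,t)
    _, y = istft(Y, fs=fs, nperseg=nperseg)         # back to the time domain
    return y

# The target T(f) could, for instance, be the average magnitude of a reference take:
#   _, _, R = stft(reference_audio, fs=fs, nperseg=2048)
#   target_mag = np.abs(R).mean(axis=1)
```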
By way of example, assume an audio signal sampled at 44.1 kHz is being processed at some moment with the goal of adaptive equalization. For simplicity, consider only three frequency points: low (100 Hz), mid (1 kHz), and high (10 kHz). Suppose the short-time Fourier transform (STFT) of the audio signal gives spectral magnitudes |X(100 Hz, t)| = 0.8, |X(1 kHz, t)| = 0.5, and |X(10 kHz, t)| = 0.2, and that the target (ideal) spectrum has magnitudes T(100 Hz) = 1.0, T(1 kHz) = 0.7, and T(10 kHz) = 0.5. The differences D(f,t) are then D(100 Hz, t) = 1.0 − 0.8 = 0.2 for the low band, D(1 kHz, t) = 0.7 − 0.5 = 0.2 for the mid band, and D(10 kHz, t) = 0.5 − 0.2 = 0.3 for the high band. Let the adjustment coefficient be α = 0.5 and the previous gains Gprev(f,t) all equal 1.0 (i.e., no prior adjustment). The gains at the current moment are then G(100 Hz, t) = 0.5 × 0.2 + 1.0 = 1.1 for the low band, G(1 kHz, t) = 0.5 × 0.2 + 1.0 = 1.1 for the mid band, and G(10 kHz, t) = 0.5 × 0.3 + 1.0 = 1.15 for the high band. Applying these gains to the original spectrum gives Y(100 Hz, t) = 1.1 × 0.8 = 0.88, Y(1 kHz, t) = 1.1 × 0.5 = 0.55, and Y(10 kHz, t) = 1.15 × 0.2 = 0.23.
Finally, the adjusted spectrum Y(f,t) is converted back into the time-domain signal y(t) = ISTFT{Y(f,t)}. In this simplified example, the gain adjustment applies a smaller boost to the low (100 Hz) and mid (1 kHz) bands and a larger boost to the high (10 kHz) band, bringing the audio signal closer to the target spectrum and thereby improving sound quality.
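The arithmetic of this worked example can be checked in a few lines; the values below are simply those quoted above.

```python
# Numeric check of the worked example.
import numpy as np

X = np.array([0.8, 0.5, 0.2])    # |X(f,t)| at 100 Hz, 1 kHz, 10 kHz
T = np.array([1.0, 0.7, 0.5])    # target spectrum T(f)
G_prev = np.ones(3)              # previous gains (no prior adjustment)
alpha = 0.5

D = T - X                        # -> [0.2, 0.2, 0.3]
G = alpha * D + G_prev           # -> [1.1, 1.1, 1.15]
Y = G * X                        # -> [0.88, 0.55, 0.23]
print(D, G, Y)
```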
In a second aspect, the present invention provides a microphone equalization adaptive adjustment apparatus, comprising:
the acquisition module is used for acquiring and analyzing the audio signals of singers;
the conversion module is used for performing time-frequency conversion on the collected audio signals and generating a real-time spectrogram;
the calculation gain module is used for comparing the difference between the frequency spectrum of the current audio frequency and the reference frequency spectrum and calculating the frequency band to be adjusted;
And the gain adjusting module is used for adjusting the gain setting of the equalizer in real time.
Those skilled in the art will understand that the division into modules in this embodiment is merely a division of logical functions; in practical applications the modules may be fully or partially integrated onto one or more actual carriers, and they may be implemented entirely in software invoked by a processing unit, entirely in hardware, or in a combination of software and hardware. It should be noted that the modules of the microphone equalization adaptive adjustment apparatus in this embodiment correspond one-to-one with the steps of the microphone equalization adaptive adjustment method in the foregoing embodiment, so the specific implementation of this embodiment can refer to the implementation of that method and is not repeated here.
In a third aspect, the present invention provides a device comprising a processor and a memory coupled to the processor, the memory being configured to store computer program code comprising computer instructions which, when read from the memory by the processor, cause the device to perform the microphone equalization adaptive adjustment method as described in the first aspect or any possible implementation of the first aspect.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM, or various devices including one or any combination of the above. The computer may be a variety of computing devices including smart terminals and servers.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system; they may be stored as part of a file that holds other programs or data, for example in one or more scripts within a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
The foregoing description of the preferred embodiments of the application is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the application.

Claims (6)

1. A method for adaptively adjusting microphone equalization, comprising:
collecting and analyzing the singer's audio signal;
performing time-frequency conversion on the collected audio signal to generate a real-time spectrogram;
comparing the real-time spectrum with a reference spectrum and calculating the difference between each frequency band and the reference spectrum, the reference spectrum being a pre-recorded ideal sound or a target spectrum set in the system; and
adjusting the gain setting of the equalizer in real time according to the difference of each frequency band, so that the microphone automatically adjusts its frequency response in real time according to the singer's vocal characteristics;
wherein the step of collecting and analyzing the singer's audio signal comprises:
collecting and analyzing the singer's vocal range and marking the lowest and highest frequencies at which the singer can accurately produce sound; and
collecting and analyzing the singer's overtone characteristics, the overtone characteristics being the frequency components other than the fundamental frequency that determine the timbre and texture of the sound;
and wherein, in the step of adjusting the gain setting of the equalizer in real time according to the difference of each frequency band, the frequency-band difference is calculated as D(f,t) = T(f) − |X(f,t)|, where D(f,t) is the spectral difference at frequency f at the current time t and |X(f,t)| is the magnitude of the input signal spectrum, and the gain is set as G(f,t) = αD(f,t) + Gprev(f,t), where α is an adjustment coefficient that controls the rate of gain adjustment and Gprev(f,t) is the gain at the previous moment, used to smooth the transition.
2. The method for adaptively adjusting microphone equalization according to claim 1, wherein, after the step of adjusting the gain setting of the equalizer in real time according to the difference of each frequency band, the method further comprises:
continuously monitoring the adjusted audio output and checking the effect of the equalization adjustment; and
collecting feedback from the singer and adjusting the gain setting of the equalizer.
3. The method for adaptively adjusting microphone equalization according to claim 2, wherein the step of continuously monitoring the adjusted audio output and checking the effect of the equalization adjustment further comprises:
filtering out ambient noise to ensure that the equalization adjustment focuses on the useful signal.
4. The method for adaptively adjusting microphone equalization according to claim 1, wherein the step of performing time-frequency conversion on the collected audio signal to generate a real-time spectrogram comprises:
dividing the audio signal into small windows, each window lasting 20 ms to 50 ms; and
applying a Fourier transform to each window to obtain the spectral information.
5. A microphone equalization adaptive adjustment apparatus, comprising:
an acquisition module for collecting and analyzing the singer's audio signal;
a conversion module for performing time-frequency conversion on the collected audio signal to generate a real-time spectrogram;
a gain calculation module for comparing the difference between the spectrum of the current audio and the reference spectrum and calculating the frequency bands that need to be adjusted; and
a gain adjustment module for adjusting the gain setting of the equalizer in real time;
wherein the apparatus is further configured to collect and analyze the singer's vocal range, marking the lowest and highest frequencies at which the singer can accurately produce sound, and to collect and analyze the singer's overtone characteristics, the overtone characteristics being the frequency components other than the fundamental frequency that determine the timbre and texture of the sound;
and wherein the frequency-band difference is calculated as D(f,t) = T(f) − |X(f,t)|, where D(f,t) is the spectral difference at frequency f at the current time t and |X(f,t)| is the magnitude of the input signal spectrum, and the gain is set as G(f,t) = αD(f,t) + Gprev(f,t), where α is an adjustment coefficient that controls the rate of gain adjustment and Gprev(f,t) is the gain at the previous moment, used to smooth the transition.
6. A device, comprising a processor and a memory coupled to the processor, the memory being configured to store computer program code comprising computer instructions which, when read from the memory by the processor, cause the device to perform the method for adaptively adjusting microphone equalization according to any one of claims 1 to 4.
CN202411209383.3A | Priority date 2024-08-30 | Filing date 2024-08-30 | Microphone balance self-adaptive adjustment method, device and equipment | Active | CN119094934B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202411209383.3A (CN119094934B) | 2024-08-30 | 2024-08-30 | Microphone balance self-adaptive adjustment method, device and equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202411209383.3A (CN119094934B) | 2024-08-30 | 2024-08-30 | Microphone balance self-adaptive adjustment method, device and equipment

Publications (2)

Publication Number | Publication Date
CN119094934A (en) | 2024-12-06
CN119094934B (en) | 2025-07-25

Family

ID=93661551

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202411209383.3A (Active, CN119094934B) | Microphone balance self-adaptive adjustment method, device and equipment | 2024-08-30 | 2024-08-30

Country Status (1)

Country | Link
CN (1) | CN119094934B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111354368A (en)* | 2018-12-21 | 2020-06-30 | GN Audio Co., Ltd. | Method for compensating processed audio signal

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6292511B1 (en)* | 1998-10-02 | 2001-09-18 | Usa Digital Radio Partners, Lp | Method for equalization of complementary carriers in an AM compatible digital audio broadcast system
CN101645268B (en)* | 2009-08-19 | 2012-03-14 | 李宋 | Computer real-time analysis system for singing and playing
EP3025516B1 (en)* | 2013-07-22 | 2020-11-04 | Harman Becker Automotive Systems GmbH | Automatic timbre, loudness and equalization control
US10013997B2 (en)* | 2014-11-12 | 2018-07-03 | Cirrus Logic, Inc. | Adaptive interchannel discriminative rescaling filter
CN207266263U (en)* | 2017-09-19 | 2018-04-20 | 南京中广华夏影视科技有限公司 | A kind of digital movie audio processors
CN114157965B (en)* | 2021-11-26 | 2024-03-29 | 国光电器股份有限公司 | Sound effect compensation method and device, earphone and storage medium
CN114978346A (en)* | 2022-05-26 | 2022-08-30 | 中国电力科学研究院有限公司 | A digital signal linear equalization method, system, electronic device and storage medium
CN118509772A (en)* | 2024-07-11 | 2024-08-16 | 方博科技(深圳)有限公司 | Chirp signal equalization optimization method for progressive filter parameter adjustment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111354368A (en)* | 2018-12-21 | 2020-06-30 | GN Audio Co., Ltd. | Method for compensating processed audio signal

Also Published As

Publication number | Publication date
CN119094934A (en) | 2024-12-06

Similar Documents

Publication | Title
JP6027087B2 (en) | Acoustic signal processing system and method for performing spectral behavior transformations
TW502248B (en) | Method of modifying harmonic content of a complex waveform
AU2007243586B2 (en) | Audio gain control using specific-loudness-based auditory event detection
CN112951259B (en) | Audio noise reduction method and device, electronic equipment and computer readable storage medium
JP5666444B2 (en) | Apparatus and method for processing an audio signal for speech enhancement using feature extraction
US10753965B2 (en) | Spectral-dynamics of an audio signal
US20180088899A1 (en) | Tonal/transient structural separation for audio effects
CN107533848B (en) | System and method for voice recovery
Prego et al. | A blind algorithm for reverberation-time estimation using subband decomposition of speech signals
Osses Vecchi et al. | Perceptual similarity between piano notes: Simulations with a template-based perception model
CN120148538B (en) | Noise interference compensation method and system applied to audio transmission system
US20110064244A1 (en) | Method and Arrangement for Processing Audio Data, and a Corresponding Computer Program and a Corresponding Computer-Readable Storage Medium
US9626949B2 (en) | System of modeling characteristics of a musical instrument
US20170024495A1 (en) | Method of modeling characteristics of a musical instrument
US20230186782A1 (en) | Electronic device, method and computer program
JP4654621B2 (en) | Voice processing apparatus and program
CN119811410A (en) | High-quality audio data processing method and system
CN119155596B (en) | An audio signal compensation method based on AI control and related device
Giampiccolo et al. | A time-domain virtual bass enhancement circuital model for real-time music applications
CN119094934B (en) | Microphone balance self-adaptive adjustment method, device and equipment
CN113259811B (en) | Method and audio processing unit for detecting pitch and use thereof
CN118042354B (en) | Automatic regulating system of teaching sound field environment
CN118972771A (en) | A sound testing and automatic tuning method and system
CN112927713B (en) | Audio feature point detection method, device and computer storage medium
Sundberg et al. | Voice source variation between vowels in male opera singers

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
