Spectral density estimation

For the statistical method, see Probability density estimation.
For broader coverage of this topic, see Spectral density.

In statistical signal processing, the goal of spectral density estimation (SDE) or simply spectral estimation is to estimate the spectral density (also known as the power spectral density) of a signal from a sequence of time samples of the signal.[1] Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.

Some SDE techniques assume that a signal is composed of a limited (usually small) number of generating frequencies plus noise and seek to find the location and intensity of the generated frequencies. Others make no assumption on the number of components and seek to estimate the whole generating spectrum.

Overview

[Figure: example of a voice waveform and its frequency spectrum.]
[Figure: a periodic waveform (triangle wave) and its frequency spectrum, showing a "fundamental" frequency at 220 Hz followed by multiples (harmonics) of 220 Hz.]
[Figure: the power spectral density of a segment of music, estimated by two different methods for comparison.]

Spectrum analysis, also referred to as frequency domain analysis or spectral density estimation, is the technical process of decomposing a complex signal into simpler parts. As described above, many physical processes are best described as a sum of many individual frequency components. Any process that quantifies the various amounts (e.g. amplitudes, powers, intensities) versus frequency (or phase) can be called spectrum analysis.

Spectrum analysis can be performed on the entire signal. Alternatively, a signal can be broken into short segments (sometimes called frames), and spectrum analysis may be applied to these individual segments. Periodic functions (such as $\sin(t)$) are particularly well-suited for this sub-division. General mathematical techniques for analyzing non-periodic functions fall into the category of Fourier analysis.

The Fourier transform of a function produces a frequency spectrum which contains all of the information about the original signal, but in a different form. This means that the original function can be completely reconstructed (synthesized) by an inverse Fourier transform. For perfect reconstruction, the spectrum analyzer must preserve both the amplitude and phase of each frequency component. These two pieces of information can be represented as a 2-dimensional vector, as a complex number, or as magnitude (amplitude) and phase in polar coordinates (i.e., as a phasor). A common technique in signal processing is to consider the squared amplitude, or power; in this case the resulting plot is referred to as a power spectrum.
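
As a concrete illustration, here is a minimal sketch (assuming NumPy; the test signal is arbitrary) showing that keeping both magnitude and phase of the DFT allows exact reconstruction, while the squared magnitude alone gives a power spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)              # any real-valued signal segment

X = np.fft.fft(x)                         # complex spectrum: magnitude and phase
magnitude, phase = np.abs(X), np.angle(X)
power = magnitude**2                      # squared amplitude -> power spectrum

# Both pieces of information together permit perfect reconstruction.
x_rec = np.fft.ifft(magnitude * np.exp(1j * phase)).real
assert np.allclose(x, x_rec)
```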

Because of reversibility, the Fourier transform is called a representation of the function, in terms of frequency instead of time; thus, it is a frequency domain representation. Linear operations that could be performed in the time domain have counterparts that can often be performed more easily in the frequency domain. Frequency analysis also simplifies the understanding and interpretation of the effects of various time-domain operations, both linear and non-linear. For instance, only non-linear or time-variant operations can create new frequencies in the frequency spectrum.

In practice, nearly all software and electronic devices that generate frequency spectra utilize a discrete Fourier transform (DFT), which operates on samples of the signal, and which provides a mathematical approximation to the full integral solution. The DFT is almost invariably implemented by an efficient algorithm called the fast Fourier transform (FFT). The array of squared-magnitude components of a DFT is a type of power spectrum called a periodogram, which is widely used for examining the frequency characteristics of noise-free functions such as filter impulse responses and window functions. But the periodogram does not provide processing gain when applied to noiselike signals or even sinusoids at low signal-to-noise ratios. In other words, the variance of its spectral estimate at a given frequency does not decrease as the number of samples used in the computation increases. This can be mitigated by averaging over time (Welch's method[2]) or over frequency (smoothing). Welch's method is widely used for spectral density estimation (SDE). However, periodogram-based techniques introduce small biases that are unacceptable in some applications, so other alternatives are presented in the next section.
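
The averaging idea can be illustrated with a short sketch (assuming NumPy and SciPy; the sampling rate, tone frequency, and segment length are arbitrary choices): the raw periodogram of a noisy record stays erratic no matter how long the record is, while Welch's method averages modified periodograms of short segments to reduce the variance.

```python
import numpy as np
from scipy.signal import periodogram, welch

fs = 1000.0                                    # sampling rate in Hz (illustrative)
t = np.arange(60_000) / fs
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 50 * t) + rng.standard_normal(t.size)   # tone in noise

f_raw, P_raw = periodogram(x, fs=fs)           # single, noisy estimate
f_avg, P_avg = welch(x, fs=fs, nperseg=2048)   # time-averaged, lower-variance estimate
```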

Techniques


Many other techniques for spectral estimation have been developed to mitigate the disadvantages of the basic periodogram. These techniques can generally be divided into non-parametric, parametric, and more recently semi-parametric (also called sparse) methods.[3] The non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Some of the most common estimators in use for basic applications (e.g. Welch's method) are non-parametric estimators closely related to the periodogram. By contrast, the parametric approaches assume that the underlying stationary stochastic process has a certain structure that can be described using a small number of parameters (for example, using an auto-regressive or moving-average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. When using the semi-parametric methods, the underlying process is modeled using a non-parametric framework, with the additional assumption that the number of non-zero components of the model is small (i.e., the model is sparse). Similar approaches may also be used for missing data recovery[4] as well as signal reconstruction.

The sections below describe some of the main spectral density estimation techniques.

Parametric estimation


In parametric spectral estimation, one assumes that the signal is modeled by a stationary process which has a spectral density function (SDF) $S(f; a_1, \ldots, a_p)$ that is a function of the frequency $f$ and $p$ parameters $a_1, \ldots, a_p$.[8] The estimation problem then becomes one of estimating these parameters.

The most common form of parametric SDF estimate uses as a model an autoregressive model $\text{AR}(p)$ of order $p$.[8]: 392  A signal sequence $\{Y_t\}$ obeying a zero mean $\text{AR}(p)$ process satisfies the equation

$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + \varepsilon_t,$$

where the $\phi_1, \ldots, \phi_p$ are fixed coefficients and $\varepsilon_t$ is a white noise process with zero mean and innovation variance $\sigma_p^2$. The SDF for this process is

$$S(f; \phi_1, \ldots, \phi_p, \sigma_p^2) = \frac{\sigma_p^2 \,\Delta t}{\left| 1 - \sum_{k=1}^{p} \phi_k e^{-2\pi i f k \Delta t} \right|^2}, \qquad |f| < f_N,$$

with $\Delta t$ the sampling time interval and $f_N$ the Nyquist frequency.
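
As an illustration, the following minimal NumPy sketch evaluates this SDF on a frequency grid; the AR(1) coefficient, innovation variance, and sampling interval below are arbitrary example values:

```python
import numpy as np

def ar_sdf(f, phi, sigma2, dt):
    """S(f) = sigma2 * dt / |1 - sum_k phi_k exp(-2*pi*i*f*k*dt)|^2."""
    k = np.arange(1, len(phi) + 1)
    denom = 1 - np.exp(-2j * np.pi * np.outer(f, k) * dt) @ np.asarray(phi)
    return sigma2 * dt / np.abs(denom) ** 2

dt = 1.0                                    # unit sampling interval, so f_N = 0.5
f = np.linspace(-0.5, 0.5, 513)[1:-1]       # frequencies strictly inside (-f_N, f_N)
S = ar_sdf(f, phi=[0.9], sigma2=1.0, dt=dt) # SDF of an example AR(1) process
```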

There are a number of approaches to estimating the parameters $\phi_1, \ldots, \phi_p, \sigma_p^2$ of the $\text{AR}(p)$ process and thus the spectral density:[8]: 452–453

  • The Yule–Walker estimators are found by recursively solving the Yule–Walker equations for an $\text{AR}(p)$ process (see the sketch after this list).
  • The Burg estimators are found by treating the Yule–Walker equations as a form of ordinary least squares problem. The Burg estimators are generally considered superior to the Yule–Walker estimators.[8]: 452  Burg associated these with maximum entropy spectral estimation.[9]
  • The forward-backward least-squares estimators treat the $\text{AR}(p)$ process as a regression problem and solve that problem using the forward-backward method. They are competitive with the Burg estimators.
  • The maximum likelihood estimators estimate the parameters using a maximum likelihood approach. This involves a nonlinear optimization and is more complex than the first three.
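
The following is a minimal sketch (assuming NumPy and SciPy) of the Yule–Walker approach named in the list above: estimate the autocovariances from the data, solve the resulting Toeplitz system for the coefficients, and obtain the innovation variance from the residual relation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(x, p):
    """Estimate AR(p) coefficients and innovation variance via Yule-Walker."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    # Biased sample autocovariances for lags 0..p.
    r = np.array([x[: n - k] @ x[k:] for k in range(p + 1)]) / n
    phi = solve_toeplitz(r[:p], r[1 : p + 1])   # solve the Toeplitz system R phi = r
    sigma2 = r[0] - phi @ r[1 : p + 1]          # innovation variance estimate
    return phi, sigma2
```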

Alternative parametric methods include fitting to a moving-average model (MA) and to a full autoregressive moving-average model (ARMA).

Frequency estimation


Frequency estimation is the process of estimating the frequency, amplitude, and phase-shift of a signal in the presence of noise, given assumptions about the number of components.[10] This contrasts with the general methods above, which do not make prior assumptions about the components.

Single tone

See also: Sinusoidal model

If one only wants to estimate the frequency of the single loudest pure-tone signal, one can use a pitch detection algorithm.
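
Pitch detection algorithms vary widely; the minimal sketch below (NumPy, with an arbitrary test tone) illustrates only the simplest strategy of picking the peak of a windowed periodogram:

```python
import numpy as np

def dominant_frequency(x, fs):
    """Frequency (Hz) of the strongest spectral peak of a real signal."""
    x = np.asarray(x, dtype=float)
    X = np.fft.rfft(x * np.hanning(x.size))    # window to reduce spectral leakage
    f = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return f[np.argmax(np.abs(X))]

fs = 8000.0
t = np.arange(4096) / fs
print(dominant_frequency(np.sin(2 * np.pi * 440 * t), fs))   # ~440 Hz
```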

If the dominant frequency changes over time, then the problem becomes the estimation of the instantaneous frequency as defined in the time–frequency representation. Methods for instantaneous frequency estimation include those based on the Wigner–Ville distribution and higher order ambiguity functions.[11]
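
A simpler baseline than the Wigner–Ville-based methods (not one of the techniques cited above) estimates the instantaneous frequency as the derivative of the unwrapped phase of the analytic signal; a SciPy sketch with an arbitrary test chirp:

```python
import numpy as np
from scipy.signal import chirp, hilbert

fs = 1000.0
t = np.arange(2000) / fs
x = chirp(t, f0=50, t1=t[-1], f1=200)          # frequency sweeps 50 -> 200 Hz

phase = np.unwrap(np.angle(hilbert(x)))        # phase of the analytic signal
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency in Hz
```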

If one wants to know all the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a multiple-tone approach.

Multiple tones


A typical model for a signal $x(n)$ consists of a sum of $p$ complex exponentials in the presence of white noise, $w(n)$:

$$x(n) = \sum_{k=1}^{p} A_k e^{i n \omega_k} + w(n).$$

The power spectral density of $x(n)$ is composed of $p$ impulse functions in addition to the spectral density function due to noise.

The most common methods for frequency estimation involve identifying the noise subspace to extract these components. These methods are based on eigendecomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace. The most popular methods of noise subspace based frequency estimation are Pisarenko's method, the multiple signal classification (MUSIC) method, the eigenvector method, and the minimum norm method.

Pisarenko's method:
$$\hat{P}_{\text{PHD}}\left(e^{j\omega}\right) = \frac{1}{\left|\mathbf{e}^{H}\mathbf{v}_{\text{min}}\right|^{2}}$$

MUSIC:
$$\hat{P}_{\text{MU}}\left(e^{j\omega}\right) = \frac{1}{\sum_{i=p+1}^{M}\left|\mathbf{e}^{H}\mathbf{v}_{i}\right|^{2}}$$

Eigenvector method:
$$\hat{P}_{\text{EV}}\left(e^{j\omega}\right) = \frac{1}{\sum_{i=p+1}^{M}\frac{1}{\lambda_{i}}\left|\mathbf{e}^{H}\mathbf{v}_{i}\right|^{2}}$$

Minimum norm method:
$$\hat{P}_{\text{MN}}\left(e^{j\omega}\right) = \frac{1}{\left|\mathbf{e}^{H}\mathbf{a}\right|^{2}}, \qquad \mathbf{a} = \lambda\,\mathbf{P}_{n}\mathbf{u}_{1}$$
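
The MUSIC estimator above can be sketched in a few lines of NumPy; the snapshot length M, the frequency grid, and the way the autocorrelation matrix is estimated from overlapping snapshots are illustrative choices, not prescribed here:

```python
import numpy as np

def music_spectrum(x, p, M=32, n_grid=1024):
    """MUSIC pseudospectrum of a signal assumed to contain p complex exponentials."""
    x = np.asarray(x, dtype=complex)
    # Estimate the M x M autocorrelation matrix from overlapping snapshots.
    snaps = np.array([x[i : i + M] for i in range(x.size - M + 1)])
    R = snaps.conj().T @ snaps / snaps.shape[0]
    eigvals, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    noise = V[:, : M - p]                        # noise-subspace eigenvectors v_i
    omega = np.linspace(-np.pi, np.pi, n_grid)
    E = np.exp(1j * np.outer(np.arange(M), omega))        # steering vectors e(w)
    denom = np.sum(np.abs(noise.conj().T @ E) ** 2, axis=0)
    return omega, 1.0 / denom                    # peaks indicate the frequencies
```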

Example calculation


Suppose $x_n$, from $n = 0$ to $N - 1$, is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive):

$$\begin{aligned} x_n &= \sum_k A_k \sin(2\pi \nu_k n + \phi_k) \\ &= \sum_k A_k \left[ \sin(\phi_k)\cos(2\pi \nu_k n) + \cos(\phi_k)\sin(2\pi \nu_k n) \right] \\ &= \sum_k \left[ a_k \cos(2\pi \nu_k n) + b_k \sin(2\pi \nu_k n) \right] \end{aligned}$$

where $a_k = A_k \sin(\phi_k)$ and $b_k = A_k \cos(\phi_k)$.

The variance of $x_n$ is, for a zero-mean function as above, given by

$$\frac{1}{N} \sum_{n=0}^{N-1} x_n^2.$$

If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).

Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as $N \to \infty$. If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data.

$$\lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} x_n^2.$$

Again, for simplicity, we will pass to continuous time, and assume that the signal extends infinitely in time in both directions. Then these two formulas become

$$x(t) = \sum_k A_k \sin(2\pi \nu_k t + \phi_k)$$

and

$$\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)^2 \, dt.$$

The root mean square of $\sin$ is $1/\sqrt{2}$, so the variance of $A_k \sin(2\pi \nu_k t + \phi_k)$ is $\tfrac{1}{2} A_k^2$. Hence, the contribution to the average power of $x(t)$ coming from the component with frequency $\nu_k$ is $\tfrac{1}{2} A_k^2$. All these contributions add up to the average power of $x(t)$.
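
A quick numeric check of this statement (NumPy; amplitudes and frequencies chosen arbitrarily), comparing the sample average power with $\tfrac{1}{2}\sum_k A_k^2$:

```python
import numpy as np

n = np.arange(100_000)
A, nu = [1.0, 0.5], [0.01, 0.083]       # illustrative amplitudes and frequencies
x = sum(a * np.sin(2 * np.pi * f * n + 0.3) for a, f in zip(A, nu))

print(np.mean(x**2))                    # sample average power, ~0.625
print(0.5 * sum(a**2 for a in A))       # (1/2) * sum A_k^2 = 0.625
```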

Then the power as a function of frequency is $\tfrac{1}{2} A_k^2$, and its statistical cumulative distribution function $S(\nu)$ will be

$$S(\nu) = \frac{1}{2} \sum_{k : \nu_k < \nu} A_k^2.$$

$S$ is a step function, monotonically non-decreasing. Its jumps occur at the frequencies of the periodic components of $x$, and the value of each jump is the power or variance of that component.

The variance is the covariance of the data with itself. If we now consider the same data but with a lag of $\tau$, we can take the covariance of $x(t)$ with $x(t + \tau)$, and define this to be the autocorrelation function $c$ of the signal (or data) $x$:

$$c(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, x(t + \tau)\, dt.$$

If it exists, it is an even function of $\tau$. If the average power is bounded, then $c$ exists everywhere, is finite, and is bounded by $c(0)$, which is the average power or variance of the data.

It can be shown that $c$ can be decomposed into periodic components with the same periods as $x$:

$$c(\tau) = \tfrac{1}{2} \sum_k A_k^2 \cos(2\pi \nu_k \tau).$$

This is in fact the spectral decomposition of $c$ over the different frequencies, and is related to the distribution of power of $x$ over the frequencies: the amplitude of a frequency component of $c$ is its contribution to the average power of the signal.
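
The decomposition can also be checked numerically (NumPy; the amplitudes, frequencies, phases, and lag below are arbitrary): the sample autocorrelation at lag $\tau$ should approach $\tfrac{1}{2} \sum_k A_k^2 \cos(2\pi \nu_k \tau)$ regardless of the phases.

```python
import numpy as np

n = np.arange(200_000)
A, nu, phi = [1.0, 0.5], [0.01, 0.083], [0.7, 2.1]   # illustrative values
x = sum(a * np.sin(2 * np.pi * f * n + p) for a, f, p in zip(A, nu, phi))

tau = 25
c_hat = np.mean(x[:-tau] * x[tau:])                  # sample c(tau)
c_theory = 0.5 * sum(a**2 * np.cos(2 * np.pi * f * tau) for a, f in zip(A, nu))
print(c_hat, c_theory)                               # approximately equal
```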

The power spectrum of this example is not continuous, and therefore does not have a derivative, so this signal does not have a power spectral density function. In general, the power spectrum will usually be the sum of two parts: a line spectrum such as in this example, which is not continuous and does not have a density function, and a residue, which is absolutely continuous and does have a density function.


References

  1. P. Stoica and R. Moses, Spectral Analysis of Signals, Prentice Hall, 2005.
  2. Welch, P. D. (1967). "The use of Fast Fourier Transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms". IEEE Transactions on Audio and Electroacoustics. AU-15 (2): 70–73. Bibcode:1967ITAE...15...70W. doi:10.1109/TAU.1967.1161901. S2CID 13900622.
  3. Stoica, Petre; Babu, Prabhu; Li, Jian (January 2011). "New Method of Sparse Parameter Estimation in Separable Models and Its Use for Spectral Analysis of Irregularly Sampled Data". IEEE Transactions on Signal Processing. 59 (1): 35–47. Bibcode:2011ITSP...59...35S. doi:10.1109/TSP.2010.2086452. ISSN 1053-587X. S2CID 15936187.
  4. Stoica, Petre; Li, Jian; Ling, Jun; Cheng, Yubo (April 2009). "Missing data recovery via a nonparametric iterative adaptive approach". 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE. pp. 3369–3372. doi:10.1109/icassp.2009.4960347. ISBN 978-1-4244-2353-8.
  5. Sward, Johan; Adalbjornsson, Stefan Ingi; Jakobsson, Andreas (March 2017). "A generalization of the sparse iterative covariance-based estimator". 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. pp. 3954–3958. doi:10.1109/icassp.2017.7952898. ISBN 978-1-5090-4117-6. S2CID 5640068.
  6. Yardibi, Tarik; Li, Jian; Stoica, Petre; Xue, Ming; Baggeroer, Arthur B. (January 2010). "Source Localization and Sensing: A Nonparametric Iterative Adaptive Approach Based on Weighted Least Squares". IEEE Transactions on Aerospace and Electronic Systems. 46 (1): 425–443. Bibcode:2010ITAES..46..425Y. doi:10.1109/TAES.2010.5417172. hdl:1721.1/59588. ISSN 0018-9251. S2CID 18834345.
  7. Panahi, Ashkan; Viberg, Mats (February 2011). "On the resolution of the LASSO-based DOA estimation method". 2011 International ITG Workshop on Smart Antennas. IEEE. pp. 1–5. doi:10.1109/wsa.2011.5741938. ISBN 978-1-61284-075-8. S2CID 7013162.
  8. Percival, Donald B.; Walden, Andrew T. (1992). Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 9780521435413.
  9. Burg, J. P. (1967). "Maximum Entropy Spectral Analysis". Proceedings of the 37th Meeting of the Society of Exploration Geophysicists, Oklahoma City, Oklahoma.
  10. Hayes, Monson H. (1996). Statistical Digital Signal Processing and Modeling. John Wiley & Sons, Inc. ISBN 0-471-59431-8.
  11. Lerga, Jonatan. "Overview of Signal Instantaneous Frequency Estimation Methods" (PDF). University of Rijeka. Retrieved 22 March 2014.
