Spectral density

From Wikipedia, the free encyclopedia
(Redirected from Frequency spectrum)
Relative importance of certain frequencies in a composite signal
This article is about signal processing and the relation of spectra to time series. For further applications in the physical sciences, see Spectrum (physical sciences).
"Spectral power density" redirects here; not to be confused with Spectral power.

The spectral density of a fluorescent light as a function of optical wavelength shows peaks at atomic transitions, indicated by the numbered arrows.
The voice waveform over time (left) has a broad audio power spectrum (right).

In signal processing, the power spectrum $S_{xx}(f)$ of a continuous-time signal $x(t)$ describes the distribution of power into the frequency components $f$ composing that signal.[1] Fourier analysis shows that any physical signal can be decomposed into a distribution of frequencies over a continuous range, where some of the power may be concentrated at discrete frequencies. The statistical average of the energy or power of any type of signal (including noise), as analyzed in terms of its frequency content, is called its spectral density.

When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (PSD, or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The PSD then refers to the spectral power distribution that would be found per unit of time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating $x^2(t)$ over the time domain, as dictated by Parseval's theorem.[1]

The spectrum of a physical process $x(t)$ often contains essential information about the nature of $x$. For instance, the pitch and timbre of a musical instrument can be determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field $E(t)$ as it oscillates at an extremely high frequency. Obtaining a spectrum from time series data such as these involves the Fourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not directly captured in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency.

However, this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important in statistical signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics and engineering. Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of spatial frequency.[1]

Units

See also: Fourier transform § Units

In physics, the signal might be a wave, such as an electromagnetic wave, an acoustic wave, or the vibration of a mechanism. The power spectral density (PSD) of the signal describes the power density of the signal as a function of frequency. Power spectral density is commonly expressed in the SI unit watt per hertz (W/Hz).[2]

When a signal is defined in terms of only a voltage varying in time, for instance, there is no specific power associated with a given voltage. In this case "power" is simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given impedance. So one might use the unit V²⋅Hz⁻¹ for the PSD. Energy spectral density (ESD) would have the unit V²⋅s⋅Hz⁻¹, since energy is power multiplied by time (e.g., watt-hour).[3]

In the general case, the unit of PSD will be the ratio of the unit of variance per unit of frequency; so, for example, a series of displacement values (in meters) over time (in seconds) will have a PSD with the unit m²/Hz. In the analysis of random vibrations, the unit g₀²⋅Hz⁻¹ may be used for the PSD of acceleration, where g₀ denotes standard gravity.[4]

Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of x(t) will remain unspecified, but the independent variable will be assumed to be that of time.

One-sided vs. two-sided


A PSD can be either a one-sided function of only positive frequencies or a two-sided function of both positive and negative frequencies but with only half the amplitude. Noise PSDs are generally one-sided in engineering and two-sided in physics.[5]
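For illustration, the following minimal NumPy sketch (the 50 Hz test tone, noise level, and 1 kHz sampling rate are arbitrary assumptions) folds a two-sided periodogram into a one-sided one; the total power is the same either way.

```python
import numpy as np

fs = 1000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

N = x.size
X = np.fft.fft(x)
psd_two_sided = np.abs(X) ** 2 / (fs * N)     # two-sided PSD estimate, V^2/Hz
df = fs / N

# Fold negative frequencies onto positive ones: double every bin except
# DC and (for even N, as here) the Nyquist bin.
psd_one_sided = psd_two_sided[: N // 2 + 1].copy()
psd_one_sided[1:-1] *= 2

# Both versions integrate to the mean-square value of the signal.
print(np.sum(psd_two_sided) * df, np.sum(psd_one_sided) * df, np.mean(x ** 2))
```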

Definition


Energy spectral density

"Energy spectral density" redirects here; not to be confused withEnergy spectrum.

In signal processing, the energy of a signal $x(t)$ is given by
$$E \triangleq \int_{-\infty}^{\infty} \left|x(t)\right|^2\, dt.$$
Assuming the total energy is finite (i.e. $x(t)$ is a square-integrable function) allows applying Parseval's theorem (or Plancherel's theorem).[6] That is,
$$\int_{-\infty}^{\infty} |x(t)|^2\, dt = \int_{-\infty}^{\infty} \left|\hat{x}(f)\right|^2\, df,$$
where
$$\hat{x}(f) = \int_{-\infty}^{\infty} e^{-i 2\pi f t} x(t)\, dt$$
is the Fourier transform of $x(t)$ at frequency $f$ (in Hz).[7] The theorem also holds true in the discrete-time cases. Since the integral on the left-hand side is the energy of the signal, the value of $\left|\hat{x}(f)\right|^2 df$ can be interpreted as a density function multiplied by an infinitesimally small frequency interval, describing the energy contained in the signal at frequency $f$ in the frequency interval $f + df$.
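As a quick numerical check of Parseval's theorem (the pulse shape and 500 Hz sampling rate below are arbitrary assumptions for the sketch), the energy computed in the time domain matches the energy computed from the discrete Fourier transform when the transform is scaled by the sampling interval:

```python
import numpy as np

fs = 500.0                                        # assumed sampling rate in Hz
dt = 1 / fs
t = np.arange(-2.0, 2.0, dt)
x = np.exp(-t ** 2) * np.cos(2 * np.pi * 10 * t)  # a finite-energy pulse

# Time-domain energy:  E = integral of |x(t)|^2 dt  ~  sum(|x_n|^2) * dt
E_time = np.sum(np.abs(x) ** 2) * dt

# Frequency-domain energy:  E = integral of |x_hat(f)|^2 df, with x_hat ~ dt * FFT(x)
X = dt * np.fft.fft(x)
df = fs / x.size
E_freq = np.sum(np.abs(X) ** 2) * df

print(E_time, E_freq)                             # agree to floating-point precision
```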

Therefore, the energy spectral density of $x(t)$ is defined as[8]

$$\bar{S}_{xx}(f) \triangleq \left|\hat{x}(f)\right|^2 \qquad \text{(Eq. 1)}$$

The function $\bar{S}_{xx}(f)$ and the autocorrelation of $x(t)$ form a Fourier transform pair, a result also known as the Wiener–Khinchin theorem (see also Periodogram).

As a physical example of how one might measure the energy spectral density of a signal, suppose $V(t)$ represents the potential (in volts) of an electrical pulse propagating along a transmission line of impedance $Z$, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time $t$ is equal to $V(t)^2/Z$, so the total energy is found by integrating $V(t)^2/Z$ with respect to time over the duration of the pulse. To find the value of the energy spectral density $\bar{S}_{xx}(f)$ at frequency $f$, one could insert between the transmission line and the resistor a bandpass filter which passes only a narrow range of frequencies ($\Delta f$, say) near the frequency of interest and then measure the total energy $E(f)$ dissipated across the resistor. The value of the energy spectral density at $f$ is then estimated to be $E(f)/\Delta f$. In this example, since the power $V(t)^2/Z$ has the unit V²⋅Ω⁻¹, the energy $E(f)$ has the unit V²⋅s⋅Ω⁻¹ = J, and hence the estimate $E(f)/\Delta f$ of the energy spectral density has the unit J⋅Hz⁻¹. In many situations, it is common to omit the step of dividing by $Z$ so that the energy spectral density instead has the unit V²⋅s⋅Hz⁻¹.

This definition generalizes in a straightforward manner to a discrete signal with a countably infinite number of values $x_n$, such as a signal sampled at discrete times $t_n = t_0 + (n\,\Delta t)$:
$$\bar{S}_{xx}(f) = \lim_{N\to\infty} (\Delta t)^2 \underbrace{\left|\sum_{n=-N}^{N} x_n e^{-i 2\pi f n\,\Delta t}\right|^2}_{\left|\hat{x}_d(f)\right|^2},$$
where $\hat{x}_d(f)$ is the discrete-time Fourier transform of $x_n$. The sampling interval $\Delta t$ is needed to keep the correct physical unit and to ensure that we recover the continuous case in the limit $\Delta t \to 0$. But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality. (See also Normalized frequency (unit).)

Power spectral density

Not to be confused with spectral power distribution.
The power spectrum of the measured cosmic microwave background radiation temperature anisotropy in terms of the angular scale. The solid line is a theoretical model, for comparison.

The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define the power spectral density (PSD) which exists for stationary processes; this describes how the power of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, is simply identified with the squared value of the signal. For example, statisticians study the variance of a function over time $x(t)$ (or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as the power spectrum even when there is no physical power involved. If one were to create a physical voltage source which followed $x(t)$ and applied it to the terminals of a one ohm resistor, then indeed the instantaneous power dissipated in that resistor would be given by $x^2(t)$ watts.

The average power $P$ of a signal $x(t)$ over all time is therefore given by the following time average, where the period $T$ is centered about some arbitrary time $t = t_0$:
$$P = \lim_{T\to\infty} \frac{1}{T} \int_{t_0 - T/2}^{t_0 + T/2} \left|x(t)\right|^2\, dt$$

Whenever it is more convenient to deal with time limits in the signal itself rather than time limits in the bounds of the integral, the average power can also be written as
$$P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} \left|x_T(t)\right|^2\, dt,$$
where $x_T(t) = x(t) w_T(t)$ and $w_T(t)$ is unity within the arbitrary period and zero elsewhere.

When $P$ is non-zero, the integral must grow to infinity at least as fast as $T$ does. That is the reason why we cannot use the energy of the signal, which is that diverging integral.
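As a rough numerical sketch (the 1 kHz sampling rate and the sine-plus-noise test signal are assumptions for illustration), the time-averaged power settles to a finite value as the observation window grows, whereas the un-normalized energy would diverge with $T$:

```python
import numpy as np

fs = 1000.0                                   # assumed sampling rate in Hz
rng = np.random.default_rng(0)

def average_power(T_seconds):
    """Estimate P = (1/T) * integral of |x_T(t)|^2 dt for a unit sine in noise."""
    t = np.arange(0, T_seconds, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t) + rng.normal(scale=0.5, size=t.size)
    return np.mean(np.abs(x) ** 2)            # equals (1/T) * sum(|x_n|^2) * dt

for T in (0.1, 1.0, 10.0, 100.0):
    print(T, average_power(T))
# The estimates approach 0.5 (sine power) + 0.25 (noise variance) = 0.75,
# while the energy sum(|x_n|^2) * dt keeps growing in proportion to T.
```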

In analyzing the frequency content of the signal $x(t)$, one might like to compute the ordinary Fourier transform $\hat{x}(f)$; however, for many signals of interest the ordinary Fourier transform does not formally exist.[nb 1] Nevertheless, under suitable conditions, certain generalizations of the Fourier transform (e.g. the Fourier–Stieltjes transform) still adhere to Parseval's theorem. As such,
$$P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} |\hat{x}_T(f)|^2\, df,$$
where the integrand defines the power spectral density:[9][10]

$$S_{xx}(f) = \lim_{T\to\infty} \frac{1}{T} |\hat{x}_T(f)|^2 \qquad \text{(Eq. 2)}$$
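In discrete form, Eq. 2 leads directly to the periodogram estimate of the PSD. The sketch below (with an assumed 1 kHz sampling rate and a synthetic 100 Hz tone in noise) scales the squared magnitude of the windowed transform by $1/T$ and reproduces the one-sided estimate returned by scipy.signal.periodogram:

```python
import numpy as np
from scipy import signal

fs = 1000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 100 * t) + 0.3 * np.random.randn(t.size)

N = x.size
T = N / fs
X = (1 / fs) * np.fft.rfft(x)                 # x_hat_T(f) with the dt scaling
Sxx = np.abs(X) ** 2 / T                      # Eq. 2 for the finite window
Sxx[1:-1] *= 2                                # fold negative frequencies (N even here)
f = np.fft.rfftfreq(N, d=1 / fs)

f_ref, Sxx_ref = signal.periodogram(x, fs=fs, window="boxcar", detrend=False)
print(np.allclose(Sxx, Sxx_ref))              # True: same definition, same result
```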

The convolution theorem then allows regarding $|\hat{x}_T(f)|^2$ as the Fourier transform of the time convolution of $x_T^*(-t)$ and $x_T(t)$, where * represents the complex conjugate.

In order to deduce Eq. 2, we will find an expression for $[\hat{x}_T(f)]^*$ that will be useful for the purpose. In fact, we will demonstrate that $[\hat{x}_T(f)]^* = \mathcal{F}\left\{x_T^*(-t)\right\}$. Start by noting that
$$\mathcal{F}\left\{x_T^*(-t)\right\} = \int_{-\infty}^{\infty} x_T^*(-t) e^{-i 2\pi f t}\, dt$$
and let $z = -t$, so that $z \to -\infty$ when $t \to \infty$ and vice versa. So
$$\begin{aligned}
\int_{-\infty}^{\infty} x_T^*(-t) e^{-i 2\pi f t}\, dt
&= \int_{\infty}^{-\infty} x_T^*(z) e^{i 2\pi f z} \left(-dz\right) \\
&= \int_{-\infty}^{\infty} x_T^*(z) e^{i 2\pi f z}\, dz \\
&= \int_{-\infty}^{\infty} x_T^*(t) e^{i 2\pi f t}\, dt,
\end{aligned}$$
where, in the last line, use has been made of $z$ and $t$ being dummy variables. So, we have
$$\begin{aligned}
\mathcal{F}\left\{x_T^*(-t)\right\}
&= \int_{-\infty}^{\infty} x_T^*(-t) e^{-i 2\pi f t}\, dt \\
&= \int_{-\infty}^{\infty} x_T^*(t) e^{i 2\pi f t}\, dt \\
&= \int_{-\infty}^{\infty} x_T^*(t) \left[e^{-i 2\pi f t}\right]^* dt \\
&= \left[\int_{-\infty}^{\infty} x_T(t) e^{-i 2\pi f t}\, dt\right]^* \\
&= \left[\mathcal{F}\left\{x_T(t)\right\}\right]^* \\
&= \left[\hat{x}_T(f)\right]^*,
\end{aligned}$$
q.e.d.

Now, let us demonstrate Eq. 2 by using the identity just shown. In addition, we will make the substitution $u(t) = x_T^*(-t)$. In this way, we have:
$$\begin{aligned}
\left|\hat{x}_T(f)\right|^2
&= [\hat{x}_T(f)]^* \cdot \hat{x}_T(f) \\
&= \mathcal{F}\left\{x_T^*(-t)\right\} \cdot \mathcal{F}\left\{x_T(t)\right\} \\
&= \mathcal{F}\left\{u(t)\right\} \cdot \mathcal{F}\left\{x_T(t)\right\} \\
&= \mathcal{F}\left\{u(t) * x_T(t)\right\} \\
&= \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} u(\tau - t) x_T(t)\, dt\right] e^{-i 2\pi f \tau}\, d\tau \\
&= \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} x_T^*(t - \tau) x_T(t)\, dt\right] e^{-i 2\pi f \tau}\, d\tau,
\end{aligned}$$
where the convolution theorem has been used when passing from the 3rd to the 4th line.

Now, if we divide the time convolution above by the period $T$ and take the limit as $T \to \infty$, it becomes the autocorrelation function of the non-windowed signal $x(t)$, which is denoted as $R_{xx}(\tau)$, provided that $x(t)$ is ergodic, which is true in most, but not all, practical cases.[nb 2]
$$\lim_{T\to\infty} \frac{1}{T} \left|\hat{x}_T(f)\right|^2 = \int_{-\infty}^{\infty} \left[\lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} x_T^*(t - \tau) x_T(t)\, dt\right] e^{-i 2\pi f \tau}\, d\tau = \int_{-\infty}^{\infty} R_{xx}(\tau) e^{-i 2\pi f \tau}\, d\tau$$

Assuming the ergodicity of $x(t)$, the power spectral density can be found once more as the Fourier transform of the autocorrelation function $R_{xx}$, a property known as the Wiener–Khinchin theorem.[11]

$$S_{xx}(f) = \int_{-\infty}^{\infty} R_{xx}(\tau) e^{-i 2\pi f \tau}\, d\tau = \hat{R}_{xx}(f) \qquad \text{(Eq. 3)}$$

Many authors use this relationship to define the power spectral density in terms of the autocorrelation function instead of the Fourier transform of the signal as we have done.[12]
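The discrete-time counterpart of this relationship can be checked numerically. In the sketch below (the AR-type test sequence, its coefficient, and the 200 Hz sampling rate are arbitrary assumptions), the discrete-time Fourier transform of the biased sample autocorrelation of a finite window coincides, up to round-off, with the periodogram of that same window:

```python
import numpy as np
from scipy import signal

fs = 200.0                                        # assumed sampling rate in Hz
dt = 1 / fs
N = 512
rng = np.random.default_rng(1)
x = signal.lfilter([1.0], [1.0, -0.8], rng.normal(size=N))   # correlated test sequence

# Biased sample autocorrelation r[k] = (1/N) * sum_n x[n] x[n+|k|], lags -(N-1)..(N-1)
r = np.correlate(x, x, mode="full") / N
lags = np.arange(-(N - 1), N)

# Discrete Wiener-Khinchin:  S_xx(f) = dt * sum_k r[k] exp(-i 2 pi f k dt)
f = np.fft.rfftfreq(N, d=dt)
S_from_r = dt * (r[None, :] * np.exp(-2j * np.pi * f[:, None] * lags[None, :] * dt)).sum(axis=1)

# Two-sided periodogram of the same window (Eq. 2 in discrete form)
X = dt * np.fft.rfft(x)
S_periodogram = np.abs(X) ** 2 / (N * dt)

print(np.allclose(S_from_r.real, S_periodogram))  # True up to round-off
```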

The power of the signal in a given frequency band $[f_1, f_2]$, where $0 < f_1 < f_2$, can be calculated by integrating over frequency. Since $S_{xx}(-f) = S_{xx}(f)$, an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used):
$$P_{\textsf{band-limited}} = 2 \int_{f_1}^{f_2} S_{xx}(f)\, df$$
More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval $T$ is finite rather than approaching infinity. This results in decreased spectral coverage and resolution, since frequencies of less than $1/T$ are not sampled, and results at frequencies which are not an integer multiple of $1/T$ are not independent. Just using a single such time series, the estimated power spectrum will be very "noisy"; however this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to statistical ensembles of realizations of $x(t)$ evaluated over the specified time window.
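As a sketch of the band-power calculation (the 120 Hz tone, noise level, and 1 kHz sampling rate are assumptions for illustration), one integrates a PSD estimate over the band of interest; a one-sided estimate such as the one returned by scipy already includes the factor of 2 discussed above:

```python
import numpy as np
from scipy import signal

fs = 1000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 10.0, 1 / fs)
x = np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(t.size)

f, Pxx = signal.welch(x, fs=fs, nperseg=1024) # one-sided PSD estimate, V^2/Hz
f1, f2 = 100.0, 140.0
band = (f >= f1) & (f <= f2)
P_band = np.sum(Pxx[band]) * (f[1] - f[0])    # approximately 0.5, the power of the sine
print(P_band)
```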

Just as with the energy spectral density, the definition of the power spectral density can be generalized to discrete time variables $x_n$. As before, we can consider a window of $-N \le n \le N$ with the signal sampled at discrete times $t_n = t_0 + (n\,\Delta t)$ for a total measurement period $T = (2N+1)\,\Delta t$:
$$S_{xx}(f) = \lim_{N\to\infty} \frac{(\Delta t)^2}{T} \left|\sum_{n=-N}^{N} x_n e^{-i 2\pi f n\,\Delta t}\right|^2$$
Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when $N$ (and thus $T$) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram. This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval $T$ approach infinity.[13]
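The following sketch illustrates that averaging (the white-noise variance of 4 and the 1 kHz sampling rate are arbitrary assumptions): a single periodogram fluctuates strongly about the true spectrum, but the average over many independent realizations approaches the flat theoretical PSD of white noise:

```python
import numpy as np
from scipy import signal

fs, N, trials = 1000.0, 1024, 200             # assumed parameters
sigma = 2.0                                   # white-noise standard deviation
rng = np.random.default_rng(0)

accum = np.zeros(N // 2 + 1)
for _ in range(trials):
    x = rng.normal(scale=sigma, size=N)       # one realization of white noise
    f, Pxx = signal.periodogram(x, fs=fs, detrend=False)
    accum += Pxx
P_avg = accum / trials

# Theoretical one-sided PSD of white noise: 2 * sigma^2 / fs (away from DC and Nyquist)
print(np.mean(P_avg[1:-1]), 2 * sigma ** 2 / fs)
```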

If two signals both possess power spectral densities, then the cross-spectral density can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the cross-correlation.

Properties of the power spectral density


Some properties of the PSD include:[14]

  • The power spectrum is always real and non-negative.
  • For a real-valued process, the spectrum is an even function of frequency: $S_{xx}(-f) = S_{xx}(f)$.
  • The integral of the PSD over all frequencies gives the total power (or variance) of the process.

Cross power spectral density

See also: Coherence (signal processing)

Given two signals $x(t)$ and $y(t)$, each of which possess power spectral densities $S_{xx}(f)$ and $S_{yy}(f)$, it is possible to define a cross power spectral density (CPSD) or cross spectral density (CSD). To begin, let us consider the average power of such a combined signal.
$$\begin{aligned}
P &= \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} \left[x_T(t) + y_T(t)\right]^* \left[x_T(t) + y_T(t)\right] dt \\
&= \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} |x_T(t)|^2 + x_T^*(t) y_T(t) + y_T^*(t) x_T(t) + |y_T(t)|^2\, dt
\end{aligned}$$

Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain
$$S_{xy}(f) = \lim_{T\to\infty} \frac{1}{T} \left[\hat{x}_T^*(f)\,\hat{y}_T(f)\right] \qquad S_{yx}(f) = \lim_{T\to\infty} \frac{1}{T} \left[\hat{y}_T^*(f)\,\hat{x}_T(f)\right]$$
where, again, the contributions of $S_{xx}(f)$ and $S_{yy}(f)$ are already understood. Note that $S_{xy}^*(f) = S_{yx}(f)$, so the full contribution to the cross power is, generally, from twice the real part of either individual CPSD. Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limit $T \to \infty$ becomes the Fourier transform of a cross-correlation function.[16]
$$\begin{aligned}
S_{xy}(f) &= \int_{-\infty}^{\infty} \left[\lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} x_T^*(t-\tau) y_T(t)\, dt\right] e^{-i 2\pi f \tau}\, d\tau = \int_{-\infty}^{\infty} R_{xy}(\tau) e^{-i 2\pi f \tau}\, d\tau \\
S_{yx}(f) &= \int_{-\infty}^{\infty} \left[\lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} y_T^*(t-\tau) x_T(t)\, dt\right] e^{-i 2\pi f \tau}\, d\tau = \int_{-\infty}^{\infty} R_{yx}(\tau) e^{-i 2\pi f \tau}\, d\tau,
\end{aligned}$$
where $R_{xy}(\tau)$ is the cross-correlation of $x(t)$ with $y(t)$ and $R_{yx}(\tau)$ is the cross-correlation of $y(t)$ with $x(t)$. In light of this, the PSD is seen to be a special case of the CSD for $x(t) = y(t)$. If $x(t)$ and $y(t)$ are real signals (e.g. voltage or current), their Fourier transforms $\hat{x}(f)$ and $\hat{y}(f)$ are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the full CPSD is just one of the CPSDs scaled by a factor of two.
$$\operatorname{CPSD}_{\text{Full}} = 2 S_{xy}(f) = 2 S_{yx}(f)$$

For discrete signals $x_n$ and $y_n$, the relationship between the cross-spectral density and the cross-covariance is
$$S_{xy}(f) = \sum_{n=-\infty}^{\infty} R_{xy}(\tau_n) e^{-i 2\pi f \tau_n}\, \Delta\tau$$
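In practice the CPSD is usually estimated from data. The sketch below uses scipy.signal.csd (the shared 80 Hz component, noise levels, and sampling rate are arbitrary assumptions); the magnitude of the estimate peaks only where the two signals contain correlated content:

```python
import numpy as np
from scipy import signal

fs = 1000.0                                      # assumed sampling rate in Hz
t = np.arange(0, 5.0, 1 / fs)
rng = np.random.default_rng(0)

common = np.sin(2 * np.pi * 80 * t)              # component shared by both channels
x = common + 0.5 * rng.normal(size=t.size)
y = 0.7 * common + 0.5 * rng.normal(size=t.size)

f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)   # complex-valued estimate of S_xy(f)
print(f[np.argmax(np.abs(Pxy))])                 # approximately 80 Hz
```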

Estimation

Main article: Spectral density estimation

The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram.

The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy method can also be used.
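For example, a non-parametric estimate with Welch's method averages periodograms of overlapping, windowed segments, trading frequency resolution for reduced variance (the 2 kHz sampling rate and synthetic tone-in-noise signal below are assumptions for the sketch):

```python
import numpy as np
from scipy import signal

fs = 2000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 4.0, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) + np.random.randn(t.size)

f_per, P_per = signal.periodogram(x, fs=fs)   # single, noisy estimate
f_w, P_w = signal.welch(x, fs=fs, nperseg=512, noverlap=256, window="hann")  # averaged

# The Welch estimate is much smoother; both integrate to roughly the same total power.
print(np.sum(P_per) * (f_per[1] - f_per[0]), np.sum(P_w) * (f_w[1] - f_w[0]))
```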

Related concepts

Not to be confused with spectral density (physical science).
  • The spectral centroid of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts.
  • The spectral edge frequency (SEF), usually expressed as "SEFx", represents the frequency below which x percent of the total power of a given signal is located; typically, x is in the range 75 to 95. It is more particularly a popular measure used in EEG monitoring, in which case SEF has variously been used to estimate the depth of anesthesia and stages of sleep.[17][18]
  • A spectral envelope is the envelope curve of the spectral density. It describes one point in time (one window, to be precise). For example, in remote sensing using a spectrometer, the spectral envelope of a feature is the boundary of its spectral properties, as defined by the range of brightness levels in each of the spectral bands of interest.
  • The spectral density is a function of frequency, not a function of time. However, the spectral density of a small window of a longer signal may be calculated, and plotted versus the time associated with the window. Such a graph is called a spectrogram. This is the basis of a number of spectral analysis techniques such as the short-time Fourier transform and wavelets.
  • A "spectrum" generally means the power spectral density, as discussed above, which depicts the distribution of signal content over frequency. For transfer functions (e.g., Bode plot, chirp) the complete frequency response may be graphed in two parts: power versus frequency and phase versus frequency (the phase spectral density, phase spectrum, or spectral phase). Less commonly, the two parts may be the real and imaginary parts of the transfer function. This is not to be confused with the frequency response of a transfer function, which also includes a phase (or equivalently, a real and imaginary part) as a function of frequency. The time-domain impulse response $h(t)$ cannot generally be uniquely recovered from the power spectral density alone without the phase part. Although these are also Fourier transform pairs, there is no symmetry (as there is for the autocorrelation) forcing the Fourier transform to be real-valued. See Ultrashort pulse § Spectral phase, phase noise, group delay.
  • Sometimes one encounters an amplitude spectral density (ASD), which is the square root of the PSD; the ASD of a voltage signal has the unit V/√Hz.[19] This is useful when the shape of the spectrum is rather constant, since variations in the ASD will then be proportional to variations in the signal's voltage level itself. But it is mathematically preferred to use the PSD, since only in that case is the area under the curve meaningful in terms of actual power over all frequency or over a specified bandwidth (see the sketch below).
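A minimal sketch of the ASD relationship (the noise level and sampling rate are arbitrary assumptions):

```python
import numpy as np
from scipy import signal

fs = 1000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
v = 1e-3 * np.random.randn(t.size)            # a noisy voltage record, in volts

f, psd = signal.welch(v, fs=fs, nperseg=256)  # PSD in V^2/Hz
asd = np.sqrt(psd)                            # ASD in V/sqrt(Hz)
print(asd[:5])
```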

Applications

Further information: Spectrum

Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. This includes familiar entities such as visible light (perceived as color), musical notes (perceived as pitch), radio/TV (specified by their frequency, or sometimes wavelength) and even the regular rotation of the Earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a sine wave component. And additionally there may be peaks corresponding to harmonics of a fundamental peak, indicating a periodic signal which is not simply sinusoidal. Or a continuous spectrum may show narrow frequency intervals which are strongly enhanced, corresponding to resonances, or frequency intervals containing almost zero power as would be produced by a notch filter.

Electrical engineering

Spectrogram of an FM radio signal with frequency on the horizontal axis and time increasing upwards on the vertical axis

The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Electronic instruments called spectrum analyzers are used to observe and measure the power spectra of signals.

The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density.
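As a rough sketch of that idea (the 1 kHz tone, noise level, and 8 kHz sampling rate are arbitrary assumptions), averaging squared STFT magnitudes over time yields a smoothed, spectrum-analyzer-like estimate that is proportional to the PSD of a stationary input, up to a window-dependent normalization:

```python
import numpy as np
from scipy import signal

fs = 8000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t) + 0.2 * np.random.randn(t.size)

f, t_seg, Zxx = signal.stft(x, fs=fs, nperseg=256)
smoothed = np.mean(np.abs(Zxx) ** 2, axis=1)  # average power per frequency bin
print(f[np.argmax(smoothed)])                 # approximately 1000 Hz
```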

Cosmology


Primordial fluctuations, density variations in the early universe, are quantified by a power spectrum which gives the power of the variations as a function of spatial scale.


Notes

  1. ^ Some authors, e.g., (Risken & Frank 1996, p. 30) still use the non-normalized Fourier transform in a formal way to formulate a definition of the power spectral density
$$\langle \hat{x}(\omega) \hat{x}^{\ast}(\omega') \rangle = 2\pi f(\omega)\, \delta(\omega - \omega'),$$
where $\delta(\omega - \omega')$ is the Dirac delta function. Such formal statements may sometimes be useful to guide the intuition, but should always be used with utmost care.
  2. ^ The Wiener–Khinchin theorem makes sense of this formula for any wide-sense stationary process under weaker hypotheses: $R_{xx}$ does not need to be absolutely integrable, it only needs to exist. But the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving distributions (in the sense of Laurent Schwartz, not in the sense of a statistical cumulative distribution function) instead of functions. If $R_{xx}$ is continuous, Bochner's theorem can be used to prove that its Fourier transform exists as a positive measure, whose distribution function is F (but not necessarily as a function and not necessarily possessing a probability density).
  1. ^ a b c P. Stoica & R. Moses (2005). "Spectral Analysis of Signals" (PDF).
  2. ^ Maral 2004.
  3. ^ Norton & Karczub 2003.
  4. ^ Birolini 2007, p. 83.
  5. ^ Paschotta, Rüdiger (5 April 2005). "Power Spectral Density". rp-photonics.com. Archived from the original on 2024-04-15. Retrieved 2024-06-26.
  6. ^ Oppenheim & Verghese 2016, p. 60.
  7. ^ Stein 2000, pp. 108, 115.
  8. ^ Oppenheim & Verghese 2016, p. 14.
  9. ^ Oppenheim & Verghese 2016, pp. 422–423.
  10. ^ Miller & Childers 2012, pp. 429–431.
  11. ^ Miller & Childers 2012, p. 433.
  12. ^ Dennis Ward Ricker (2003). Echo Signal Processing. Springer. ISBN 978-1-4020-7395-3.
  13. ^ Brown & Hwang 1997.
  14. ^ Miller & Childers 2012, p. 431.
  15. ^ Davenport & Root 1987.
  16. ^ William D. Penny (2009). "Signal Processing Course, chapter 7".
  17. ^ Iranmanesh & Rodriguez-Villegas 2017.
  18. ^ Imtiaz & Rodriguez-Villegas 2014.
  19. ^ Michael Cerna & Audrey F. Harvey (2000). "The Fundamentals of FFT-Based Signal Analysis and Measurement" (PDF). Archived from the original on September 15, 2012.
