To digitally analyze and manipulate an analog signal, it must be digitized with an analog-to-digital converter (ADC).[7] Sampling is usually carried out in two stages, discretization and quantization. Discretization means that the signal is divided into equal intervals of time, and each interval is represented by a single measurement of amplitude. Quantization means each amplitude measurement is approximated by a value from a finite set. Rounding real numbers to integers is an example.
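The two stages above can be sketched in a few lines of NumPy. This is a minimal illustration, not an ADC model: the sample rate and bit depth are chosen arbitrarily for the example.

```python
import numpy as np

# Discretization: represent a continuous signal by samples on a
# uniform time grid. Quantization: snap each amplitude to one of a
# finite set of levels. fs and bits are illustrative values.
fs = 8          # samples per second (assumed for illustration)
bits = 3        # quantizer resolution: 2**3 = 8 levels
t = np.arange(0, 1, 1 / fs)          # discretization: uniform time grid
x = np.sin(2 * np.pi * t)            # signal model, amplitude in [-1, 1]

levels = 2 ** bits
step = 2.0 / (levels - 1)            # spacing between quantizer levels
xq = np.round(x / step) * step       # quantization: round to nearest level

quantization_error = np.max(np.abs(xq - x))   # bounded by step / 2
```

The rounding step is exactly the "rounding real numbers to integers" idea, rescaled to the quantizer's level spacing, and its error is bounded by half a level.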
The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component in the signal. In practice, the sampling frequency is often significantly higher than this.[8] It is common to use an anti-aliasing filter to limit the signal bandwidth to comply with the sampling theorem; however, this filter must be chosen carefully, because the reconstructed signal will be the filtered signal plus residual aliasing from imperfect stop-band rejection, rather than the original (unfiltered) signal.
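Aliasing, the failure mode the theorem guards against, can be demonstrated directly: at a 10 Hz sampling rate, a 7 Hz cosine (above the 5 Hz Nyquist frequency) produces exactly the same samples as a 3 Hz cosine, so the two are indistinguishable after sampling. The frequencies here are chosen purely for illustration.

```python
import numpy as np

fs = 10                       # sampling frequency in Hz
n = np.arange(32)             # sample indices
x_7hz = np.cos(2 * np.pi * 7 * n / fs)
x_3hz = np.cos(2 * np.pi * 3 * n / fs)   # alias: 7 Hz folds to |7 - 10| = 3 Hz

max_difference = np.max(np.abs(x_7hz - x_3hz))  # ~0 up to rounding error
```

Since cos(2π·7n/10) = cos(2πn − 2π·3n/10) = cos(2π·3n/10) for integer n, the sampled sequences agree exactly; an anti-aliasing filter removes the 7 Hz component before it can masquerade as 3 Hz.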
Theoretical DSP analyses and derivations are typically performed on discrete-time signal models with no amplitude inaccuracies (quantization error), created by the abstract process of sampling. Numerical methods require a quantized signal, such as those produced by an ADC. The processed result might be a frequency spectrum or a set of statistics. But often it is another quantized signal that is converted back to analog form by a digital-to-analog converter (DAC).
DSP engineers usually study digital signals in one of the following domains: time domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain, and wavelet domains. They choose the domain in which to process a signal by making an informed assumption (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal and the processing to be applied to it. A sequence of samples from a measuring device produces a temporal or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain representation.
Time domain refers to the analysis of signals with respect to time. Similarly, space domain refers to the analysis of signals with respect to position, e.g., pixel location for the case of image processing.
The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering generally consists of a linear transformation of a number of samples surrounding the current sample of the input or output signal. The surrounding samples may be identified with respect to time or space. The output of a linear digital filter for any given input may be calculated by convolving the input signal with an impulse response.
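The convolution relationship above can be sketched with a simple filter; the 3-tap moving-average impulse response here is an arbitrary illustrative choice.

```python
import numpy as np

# Time-domain digital filtering: the output of a linear filter is the
# convolution of the input with the filter's impulse response.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])  # input samples
h = np.array([1.0, 1.0, 1.0]) / 3.0           # impulse response (moving average)

y = np.convolve(x, h)          # filter output via convolution
```

Each output sample is a weighted sum of the surrounding input samples, exactly the "linear transformation of surrounding samples" the text describes.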
Signals are converted from the time or space domain to the frequency domain usually through use of the Fourier transform. The Fourier transform converts the time or space information to a magnitude and phase component of each frequency. With some applications, how the phase varies with frequency can be a significant consideration. Where phase is unimportant, often the Fourier transform is converted to the power spectrum, which is the magnitude of each frequency component squared.
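A short sketch of this conversion, using a single 5 Hz tone chosen for illustration: the discrete Fourier transform yields a complex value per frequency bin, from which magnitude, phase, and the power spectrum follow.

```python
import numpy as np

fs = 64                         # sample rate (assumed for illustration)
t = np.arange(fs) / fs          # one second of samples
x = np.sin(2 * np.pi * 5 * t)   # a 5 Hz tone

spectrum = np.fft.rfft(x)           # complex: magnitude and phase per bin
magnitude = np.abs(spectrum)
phase = np.angle(spectrum)
power = magnitude ** 2              # power spectrum: phase is discarded

peak_bin = int(np.argmax(power))    # with 1 s of data, bin k is k Hz
```

Squaring the magnitudes discards the phase information, which is why the power spectrum suffices only when phase is unimportant.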
The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. Frequency domain analysis is also called spectrum analysis or spectral analysis.
Filtering, particularly in non-real-time work, can also be achieved in the frequency domain by applying the filter and then converting back to the time domain. This can be an efficient implementation and can give essentially any filter response, including excellent approximations to brickwall filters.
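A minimal sketch of frequency-domain filtering: transform, zero the unwanted bins (an ideal "brickwall" low-pass), then transform back. The cutoff and signal content are assumptions made for the example.

```python
import numpy as np

fs = 128
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(x)
cutoff_bin = 10                      # keep content below 10 Hz
spectrum[cutoff_bin:] = 0            # brickwall: zero everything above cutoff
y = np.fft.irfft(spectrum, n=len(x)) # convert back to the time domain

residual_40hz = np.abs(np.fft.rfft(y))[40]   # the 40 Hz tone is removed
```

In practice, block-based FFT filtering like this is done with overlap-add or overlap-save to handle the circular nature of the DFT on long streams.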
There are some commonly used frequency domain transformations. For example, the cepstrum converts a signal to the frequency domain through Fourier transform, takes the logarithm, then applies another Fourier transform. This emphasizes the harmonic structure of the original spectrum.
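The transform–log–transform chain can be sketched directly; this is the common real-cepstrum variant (log of the magnitude spectrum), with a harmonic-rich test signal assumed for the example.

```python
import numpy as np

fs = 512
t = np.arange(fs) / fs
# Harmonic-rich signal: a 32 Hz fundamental plus three harmonics.
x = sum(np.sin(2 * np.pi * 32 * k * t) for k in (1, 2, 3, 4))

log_mag = np.log(np.abs(np.fft.fft(x)) + 1e-12)  # small offset avoids log(0)
cepstrum = np.abs(np.fft.ifft(log_mag))          # second (inverse) transform
```

The evenly spaced harmonics make the log spectrum itself periodic, so the second transform concentrates that harmonic structure into a peak at the "quefrency" of the fundamental period.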
Digital filters come in both infinite impulse response (IIR) and finite impulse response (FIR) types. Whereas FIR filters are always stable, IIR filters have feedback loops that may become unstable and oscillate. The Z-transform provides a tool for analyzing stability issues of digital IIR filters. It is analogous to the Laplace transform, which is used to design and analyze analog IIR filters.
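In the Z-transform view, a causal IIR filter is stable when all poles of its transfer function lie strictly inside the unit circle, which can be checked numerically. The denominator coefficients below are an illustrative example, not a designed filter.

```python
import numpy as np

# H(z) = 1 / (1 - 1.5 z^-1 + 0.56 z^-2)
a = [1.0, -1.5, 0.56]                 # denominator coefficients of H(z)
poles = np.roots(a)                   # poles of the transfer function
stable = bool(np.all(np.abs(poles) < 1.0))
```

Here the poles are at z = 0.8 and z = 0.7, both inside the unit circle, so this filter's feedback loop decays rather than oscillating without bound.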
A signal is represented as a linear combination of its previous samples. The coefficients of the combination are called autoregression coefficients. This method has higher frequency resolution and can process shorter signals than the Fourier transform.[9] Prony's method can be used to estimate the frequencies, amplitudes, initial phases, and decay rates of the components of a signal.[10][9] The components are assumed to be decaying complex exponentials.[10][9]
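The autoregressive idea can be sketched by fitting the coefficients with least squares; the model order and test signal here are assumptions for the example. A pure cosine satisfies an exact order-2 recursion, so the fit recovers it.

```python
import numpy as np

# Autoregressive modeling: predict each sample as a linear combination
# of its p previous samples, fitting the coefficients by least squares.
p = 2
n = np.arange(200)
x = np.cos(0.3 * n)          # satisfies x[n] = 2*cos(0.3)*x[n-1] - x[n-2]

# Regression: x[n] ~ a1*x[n-1] + a2*x[n-2]
X = np.column_stack([x[p - 1:-1], x[p - 2:-2]])
y = x[p:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The recovered coefficients encode the signal's frequency (here via 2·cos(0.3)), which is how short records can yield sharp frequency estimates without a long Fourier transform.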
A time-frequency representation of a signal can capture both the temporal evolution and the frequency structure of the analyzed signal. Temporal and frequency resolution are limited by the uncertainty principle, and the tradeoff is adjusted by the width of the analysis window. Linear techniques such as the short-time Fourier transform, wavelet transform, and filter bank,[11] non-linear techniques (e.g., the Wigner–Ville transform[10]), and autoregressive methods (e.g., the segmented Prony method)[10][12][13] are used to represent a signal on the time-frequency plane. The non-linear and segmented Prony methods can provide higher resolution, but may produce undesirable artifacts. Time-frequency analysis is usually used for the analysis of non-stationary signals. For example, methods of fundamental frequency estimation, such as RAPT and PEFAC,[14] are based on windowed spectral analysis.
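The simplest of these, the short-time Fourier transform, can be sketched as a sliding windowed FFT; the frame length sets the time–frequency tradeoff noted above. The two-tone test signal, frame length, and hop size are illustrative choices.

```python
import numpy as np

fs = 256
t = np.arange(2 * fs) / fs
# Non-stationary signal: 10 Hz in the first second, 50 Hz in the second.
x = np.where(t < 1, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 50 * t))

frame, hop = 64, 32                 # window width vs. time resolution tradeoff
window = np.hanning(frame)
frames = [x[i:i + frame] * window
          for i in range(0, len(x) - frame + 1, hop)]
stft = np.array([np.abs(np.fft.rfft(f)) for f in frames])  # time x frequency
```

Each row is a short-time spectrum, so the frequency jump at t = 1 s appears as the spectral peak moving to a higher bin in later frames.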
An example of the 2D discrete wavelet transform that is used in JPEG 2000. The original image is high-pass filtered, yielding the three large images, each describing local changes in brightness (details) in the original image. It is then low-pass filtered and downscaled, yielding an approximation image; this image is high-pass filtered to produce the three smaller detail images, and low-pass filtered to produce the final approximation image in the upper-left.
Empirical mode decomposition is based on decomposing a signal into intrinsic mode functions (IMFs). IMFs are quasi-harmonic oscillations that are extracted from the signal.[15]
For systems that do not have a real-time computing requirement and whose signal data (either input or output) exists in data files, processing may be done economically with a general-purpose computer. This is essentially no different from any other data processing, except DSP mathematical techniques (such as the DCT and FFT) are used, and the sampled data is usually assumed to be uniformly sampled in time or space. An example of such an application is processing digital photographs with software such as Photoshop.
When the application requirement is real-time, DSP is often implemented using specialized or dedicated processors or microprocessors, sometimes using multiple processors or multiple processing cores. These may process data using fixed-point or floating-point arithmetic. For more demanding applications, FPGAs may be used.[20] For the most demanding applications or high-volume products, ASICs might be designed specifically for the application.
Parallel implementations of DSP algorithms, utilizing multi-core CPU and many-core GPU architectures, are developed to improve the performance, particularly the latency, of these algorithms.[21]
Native processing is done by the computer's CPU rather than by DSP or outboard processing, which is done by additional third-party DSP chips located on extension cards or external hardware boxes or racks. Many digital audio workstations such as Logic Pro, Cubase, Digital Performer and Pro Tools LE use native processing. Others, such as Pro Tools HD, Universal Audio's UAD-1 and TC Electronic's Powercore use DSP processing.
^ B. Somanathan Nair (2002). Digital Electronics and Logic Design. PHI Learning Pvt. Ltd. p. 289. ISBN 9788120319561. "Digital signals are fixed-width pulses, which occupy only one of two levels of amplitude."
^ Billings, Stephen A. (Sep 2013). Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains. UK: Wiley. ISBN 978-1-119-94359-4.
^ Broesch, James D.; Stranneby, Dag; Walker, William (2008-10-20). Digital Signal Processing: Instant Access (1st ed.). Butterworth-Heinemann-Newnes. p. 3. ISBN 9780750689762.
^ Walden, R. H. (1999). "Analog-to-digital converter survey and analysis". IEEE Journal on Selected Areas in Communications. 17 (4): 539–550. doi:10.1109/49.761034.
^ So, Stephen; Paliwal, Kuldip K. (2005). "Improved noise-robustness in distributed speech recognition via perceptually-weighted vector quantisation of filterbank energies". Ninth European Conference on Speech Communication and Technology.