Delta Function
The simplest impulse response is nothing more than a delta function, as shown in Fig. 7-1a. That is, an impulse on the input produces an identical impulse on the output. This means that all signals are passed through the system without change. Convolving any signal with a delta function results in exactly the same signal. Mathematically, this is written:

x[n] * δ[n] = x[n]
This property makes the delta function the identity for convolution. This is analogous to zero being the identity for addition (a + 0 = a), and one being the identity for multiplication (a × 1 = a). At first glance, this type of system may seem trivial and uninteresting. Not so! Such systems are the ideal for data storage, communication and measurement. Much of DSP is concerned with passing information through systems without change or degradation.
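The identity property is easy to confirm numerically. The following sketch (assuming NumPy's `np.convolve`, which returns the full convolution of length len(x) + len(h) - 1) shows that convolving a signal with a delta function reproduces the signal exactly:

```python
import numpy as np

# A delta function: 1 at sample zero, 0 elsewhere.
delta = np.zeros(5)
delta[0] = 1.0

# An arbitrary test signal.
x = np.array([2.0, -1.0, 4.0, 0.5])

# Full convolution: the first len(x) samples reproduce x exactly,
# and the remaining samples are zero.
y = np.convolve(x, delta)
print(y[:len(x)])   # → [ 2.  -1.   4.   0.5]
```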
Figure 7-1b shows a slight modification to the delta function impulse response. If the delta function is made larger or smaller in amplitude, the resulting system is an amplifier or attenuator, respectively. In equation form, amplification results if k is greater than one, and attenuation results if k is less than one:

x[n] * kδ[n] = kx[n]
The impulse response in Fig. 7-1c is a delta function with a shift. This results in a system that introduces an identical shift between the input and output signals. This could be described as a signal delay, or a signal advance, depending on the direction of the shift. Letting the shift be represented by the parameter, s, this can be written as the equation:

x[n] * δ[n + s] = x[n + s]
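The shifting property can be demonstrated with the same approach. A minimal sketch, assuming NumPy: placing the unit value at sample s instead of sample zero delays the signal by s samples.

```python
import numpy as np

x = np.array([2.0, -1.0, 4.0, 0.5])

# A delta function shifted right by s samples acts as a delay of s.
s = 2
shifted_delta = np.zeros(5)
shifted_delta[s] = 1.0

y = np.convolve(x, shifted_delta)
print(y[:len(x) + s])   # → [0. 0. 2. -1. 4. 0.5]
```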
Science and engineering are filled with cases where one signal is a shifted version of another. For example, consider a radio signal transmitted from a remote space probe, and the corresponding signal received on the earth. The time it takes the radio wave to propagate over the distance causes a delay between the two signals. In biology, the electrical signals in adjacent nerve cells are shifted versions of each other, as determined by the time it takes an action potential to cross the synaptic junction that connects the two.
Figure 7-1d shows an impulse response composed of a delta function plus a shifted and scaled delta function. By superposition, the output of this system is the input signal plus a delayed version of the input signal, i.e., an echo. Echoes are important in many DSP applications. The addition of echoes is a key part in making audio recordings sound natural and pleasant. Radar and sonar analyze echoes to detect aircraft and submarines. Geophysicists use echoes to find oil. Echoes are also very important in telephone networks, because you want to avoid them.
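The echo system can be built directly from the two previous properties. A sketch, assuming NumPy: the impulse response below passes the original signal (the delta at sample zero) and adds a half-amplitude copy delayed by three samples (the scaled, shifted delta).

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# Impulse response: delta function plus a scaled, shifted delta.
h = np.zeros(6)
h[0] = 1.0    # pass the original signal unchanged
h[3] = 0.5    # add a half-amplitude copy delayed by 3 samples

y = np.convolve(x, h)

# By superposition, y equals x plus 0.5*x delayed by 3 samples: an echo.
expected = np.zeros(len(x) + len(h) - 1)
expected[:len(x)] += x
expected[3:3 + len(x)] += 0.5 * x
print(np.allclose(y, expected))   # → True
```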
Calculus-like Operations
Convolution can change discrete signals in ways that resemble integration and differentiation. Since the terms "derivative" and "integral" specifically refer to operations on continuous signals, other names are given to their discrete counterparts. The discrete operation that mimics the first derivative is called the first difference. Likewise, the discrete form of the integral is called the running sum. It is also common to hear these operations called the discrete derivative and the discrete integral, although mathematicians frown when they hear these informal terms used.
Figure 7-2 shows the impulse responses that implement the first difference and the running sum. Figure 7-3 shows an example using these operations. In 7-3a, the original signal is composed of several sections with varying slopes. Convolving this signal with the first difference impulse response produces the signal in Fig. 7-3b. Just as with the first derivative, the amplitude of each point in the first difference signal is equal to the slope at the corresponding location in the original signal. The running sum is the inverse operation of the first difference. That is, convolving the signal in (b), with the running sum's impulse response, produces the signal in (a).
These impulse responses are simple enough that a full convolution program is usually not needed to implement them. Rather, think of them in the alternative mode: each sample in the output signal is a sum of weighted samples from the input. For instance, the first difference can be calculated:

y[n] = x[n] - x[n-1]
That is, each sample in the output signal is equal to the difference between two adjacent samples in the input signal. For instance, y[40] = x[40] - x[39]. It should be mentioned that this is not the only way to define a discrete derivative. Another common method is to define the slope symmetrically around the point being examined, such as: y[n] = (x[n+1] - x[n-1])/2.
Using this same approach, each sample in the running sum can be calculated by summing all points in the original signal to the left of the sample's location. For instance, if y[n] is the running sum of x[n], then sample y[40] is found by adding samples x[0] through x[40]. Likewise, sample y[41] is found by adding samples x[0] through x[41]. Of course, it would be very inefficient to calculate the running sum in this manner. For example, if y[40] has already been calculated, y[41] can be calculated with only a single addition: y[41] = x[41] + y[40]. In equation form:

y[n] = x[n] + y[n-1]
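Both operations, and the fact that one undoes the other, can be sketched in a few lines (assuming NumPy; the sample before the start of the signal is taken as zero):

```python
import numpy as np

x = np.array([1.0, 3.0, 6.0, 6.0, 2.0])

# First difference: y[n] = x[n] - x[n-1], with x[-1] taken as zero.
d = np.empty_like(x)
d[0] = x[0]
d[1:] = x[1:] - x[:-1]

# Running sum via the recursion y[n] = x[n] + y[n-1]:
# one addition per output sample.
r = np.empty_like(d)
acc = 0.0
for n in range(len(d)):
    acc += d[n]
    r[n] = acc

# The running sum of the first difference recovers the original signal.
print(np.allclose(r, x))   # → True
```

The loop is written out to mirror the recursion equation; in practice `np.cumsum(d)` computes the same running sum in one call.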
Relations of this type are called recursion equations or difference equations. We will revisit them in Chapter 19. For now, the important idea to understand is that these relations are identical to convolution using the impulse responses of Fig. 7-2. Table 7-1 provides computer programs that implement these calculus-like operations.
Low-pass and High-pass Filters
The design of digital filters is covered in detail in later chapters. For now, be satisfied to understand the general shape of low-pass and high-pass filter kernels (another name for a filter's impulse response). Figure 7-4 shows several common low-pass filter kernels. In general, low-pass filter kernels are composed of a group of adjacent positive points. This results in each sample in the output signal being a weighted average of many adjacent points from the input signal. This averaging smoothes the signal, thereby removing high-frequency components. As shown by the sinc function in (c), some low-pass filter kernels include a few negative valued samples in the tails. Just as in analog electronics, digital low-pass filters are used for noise reduction, signal separation, wave shaping, etc.
The cutoff frequency of the filter is changed by making the filter kernel wider or narrower. If a low-pass filter has a gain of one at DC (zero frequency), then the sum of all of the points in the impulse response must be equal to one. As illustrated in (a) and (c), some filter kernels theoretically extend to infinity without dropping to a value of zero. In actual practice, the tails are truncated after a certain number of samples, allowing the kernel to be represented by a finite number of points. How else could it be stored in a computer?
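The unity-DC-gain condition is easy to see with the simplest low-pass kernel of all, a moving average (a sketch assuming NumPy; the 5-point length is an arbitrary choice for illustration):

```python
import numpy as np

# A simple low-pass filter kernel: a group of adjacent positive points
# that sum to one, giving unity gain at DC (a 5-point moving average).
h = np.ones(5) / 5.0
print(h.sum())   # → 1.0

# A constant (DC) input passes through with gain one wherever the
# kernel fully overlaps the signal ('valid' part of the convolution).
x = np.ones(20)
y = np.convolve(x, h, mode='valid')
print(np.allclose(y, 1.0))   # → True
```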
Figure 7-5 shows three common high-pass filter kernels, derived from the corresponding low-pass filter kernels in Fig. 7-4. This is a common strategy in filter design: first devise a low-pass filter and then transform it to what you need, high-pass, band-pass, band-reject, etc. To understand the low-pass to high-pass transform, remember that a delta function impulse response passes the entire signal, while a low-pass impulse response passes only the low-frequency components. By superposition, a filter kernel consisting of a delta function minus the low-pass filter kernel will pass the entire signal minus the low-frequency components. A high-pass filter is born! As shown in Fig. 7-5, the delta function is usually added at the center of symmetry, or sample zero if the filter kernel is not symmetrical. High-pass filters have zero gain at DC (zero frequency), achieved by making the sum of all the points in the filter kernel equal to zero.
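The low-pass to high-pass transform can be sketched directly (assuming NumPy; the Blackman window is used here only as a convenient stand-in for a symmetrical low-pass kernel, not as one of the specific kernels in Fig. 7-4):

```python
import numpy as np

# A symmetrical low-pass kernel, normalized to unity gain at DC.
M = 21
lowpass = np.blackman(M)
lowpass /= lowpass.sum()

# Subtract it from a delta function at the center of symmetry.
delta = np.zeros(M)
delta[M // 2] = 1.0
highpass = delta - lowpass

# The high-pass kernel sums to zero: zero gain at DC.
print(abs(highpass.sum()) < 1e-12)   # → True
```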
Causal and Noncausal Signals
Imagine a simple analog electronic circuit. If you apply a short pulse to the input, you will see a response on the output. This is the kind of cause and effect that our universe is based on. One thing we definitely know: any effect must happen after the cause. This is a basic characteristic of what we call time. Now compare this to a DSP system that changes an input signal into an output signal, both stored in arrays in a computer. If this mimics a real world system, it must follow the same principle of causality as the real world does. For example, the value at sample number eight in the input signal can only affect sample number eight or greater in the output signal. Systems that operate in this manner are said to be causal. Of course, digital processing doesn't necessarily have to function this way. Since both the input and output signals are arrays of numbers stored in a computer, any of the input signal values can affect any of the output signal values.
As shown by the examples in Fig. 7-6, the impulse response of a causal system must have a value of zero for all negative numbered samples. Think of this from the input side view of convolution. To be causal, an impulse in the input signal at sample number n must only affect those points in the output signal with a sample number of n or greater. In common usage, the term causal is applied to any signal where all the negative numbered samples have a value of zero, whether it is an impulse response or not.
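Causality can be checked numerically. In the sketch below (assuming NumPy; the kernel length, signal length, and perturbed sample are arbitrary choices), the impulse response exists only for samples zero and greater, so a change at input sample 8 cannot affect any output sample before 8:

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=8)        # causal kernel: samples defined only for n >= 0

x1 = rng.normal(size=20)
x2 = x1.copy()
x2[8] += 1.0                  # perturb the input at sample number 8

y1 = np.convolve(x1, h)
y2 = np.convolve(x2, h)

# The change at input sample 8 affects only output samples 8 and later.
print(np.allclose(y1[:8], y2[:8]))   # → True
```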
Zero Phase, Linear Phase, and Nonlinear Phase
As shown in Fig. 7-7, a signal is said to be zero phase if it has left-right symmetry around sample number zero. A signal is said to be linear phase if it has left-right symmetry, but around some point other than zero. This means that any linear phase signal can be changed into a zero phase signal simply by shifting left or right. Lastly, a signal is said to be nonlinear phase if it does not have left-right symmetry.
You are probably thinking that these names don't seem to follow from their definitions. What does phase have to do with symmetry? The answer lies in the frequency spectrum, and will be discussed in more detail in later chapters. Briefly, the frequency spectrum of any signal is composed of two parts, the magnitude and the phase. The frequency spectrum of a signal that is symmetrical around zero has a phase that is zero. Likewise, the frequency spectrum of a signal that is symmetrical around some nonzero point has a phase that is a straight line, i.e., a linear phase. Lastly, the frequency spectrum of a signal that is not symmetrical has a phase that is not a straight line, i.e., it has a nonlinear phase.
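The zero-phase case can be verified with the DFT. A sketch assuming NumPy: for a real signal that is symmetrical around sample zero (in the DFT, negative-index samples wrap around to the end of the array), the spectrum is purely real, so its phase is zero wherever the spectral values are positive (or π at any sign flips).

```python
import numpy as np

# A real signal with left-right symmetry around sample zero;
# the negative-index samples wrap to the end of the array.
N = 16
x = np.zeros(N)
x[0] = 4.0
for n in (1, 2, 3):
    x[n] = x[N - n] = 4.0 - n

# The DFT of this zero-phase signal is purely real.
X = np.fft.fft(x)
print(np.max(np.abs(X.imag)) < 1e-9)   # → True
```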
A special note about the potentially confusing terms: linear and nonlinear phase. What does this have to do with the concept of system linearity discussed in previous chapters? Absolutely nothing! System linearity is the broad concept that nearly all of DSP is based on (superposition, homogeneity, additivity, etc). Linear and nonlinear phase mean that the phase is, or is not, a straight line. In fact, a system must be linear even to say that the phase is zero, linear, or nonlinear.