In the 1960s it was predicted that artificial intelligence would revolutionize the way humans interact with computers and other machines. It was believed that by the end of the century we would have robots cleaning our houses, computers driving our cars, and voice interfaces controlling the storage and retrieval of information. This hasn't happened; these abstract tasks are far more complicated than expected, and very difficult to carry out with the step-by-step logic provided by digital computers.
However, the last forty years have shown that computers are extremely capable in two broad areas: (1) data manipulation, such as word processing and database management, and (2) mathematical calculation, used in science, engineering, and Digital Signal Processing. All microprocessors can perform both tasks; however, it is difficult (expensive) to make a device that is optimized for both. There are technical tradeoffs in the hardware design, such as the size of the instruction set and how interrupts are handled. Even more important, there are marketing issues involved: development and manufacturing cost, competitive position, product lifetime, and so on. As a broad generalization, these factors have made traditional microprocessors, such as the Pentium®, primarily directed at data manipulation. Similarly, DSPs are designed to perform the mathematical calculations needed in Digital Signal Processing.
Figure 28-1 lists the most important differences between these two categories. Data manipulation involves storing and sorting information. For instance, consider a word processing program. The basic task is to store the information (typed in by the operator), organize the information (cut and paste, spell checking, page layout, etc.), and then retrieve the information (such as saving the document on a floppy disk or printing it with a laser printer). These tasks are accomplished by moving data from one location to another, and testing for inequalities (A=B, A<B, etc.). As an example, imagine sorting a list of words into alphabetical order. Each word is represented by an 8 bit number, the ASCII value of the first letter in the word. Alphabetizing involves rearranging the order of the words until the ASCII values continually increase from the beginning to the end of the list. This can be accomplished by repeating two steps over-and-over until the alphabetization is complete. First, test two adjacent entries for being in alphabetical order (IF A>B THEN ...). Second, if the two entries are not in alphabetical order, switch them so that they are (A⇄B). When this two-step process is repeated many times on all adjacent pairs, the list will eventually become alphabetized.
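The compare-and-swap procedure just described is a bubble sort. A minimal Python sketch (the function name and sample words are illustrative, not from the text; Python compares whole strings, while the text's simplified scheme uses only the first letter's ASCII value, but the logic is the same):

```python
def alphabetize(words):
    """Sort a list of words by repeatedly applying the two steps from
    the text: test adjacent entries for order (IF A>B THEN ...), and
    swap them when they are reversed (A <-> B)."""
    words = list(words)              # work on a copy
    swapped = True
    while swapped:                   # repeat until a full pass makes no swaps
        swapped = False
        for i in range(len(words) - 1):
            if words[i] > words[i + 1]:                        # the test
                words[i], words[i + 1] = words[i + 1], words[i]  # the swap
                swapped = True
    return words

print(alphabetize(["delta", "alpha", "charlie", "bravo"]))
```

Each pass moves out-of-order entries one position closer to their final place, which is why repeating the two steps "many times on all adjacent pairs" eventually alphabetizes the whole list.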
As another example, consider how a document is printed from a word processor. The computer continually tests the input device (mouse or keyboard) for the binary code that indicates "print the document." When this code is detected, the program moves the data from the computer's memory to the printer. Here we have the same two basic operations: moving data and inequality testing. While mathematics is occasionally used in this type of application, it is infrequent and does not significantly affect the overall execution speed.
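The poll-and-move pattern can be sketched as follows; the code value and function names are hypothetical, chosen only to show the two operations (a test for a particular value, then data movement):

```python
PRINT_CODE = 0x50  # hypothetical binary code meaning "print the document"

def poll_and_print(read_input, document, send_to_printer):
    """Test the input device for the print code; if found, move the
    document from memory to the printer, one chunk at a time."""
    code = read_input()            # read one code from mouse/keyboard
    if code == PRINT_CODE:         # the equality/inequality test
        for chunk in document:     # data movement: memory -> printer
            send_to_printer(chunk)
        return True                # document was printed
    return False                   # no match; keep polling
```

Note that no arithmetic appears anywhere in the loop: the work is entirely testing and moving data.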
In comparison, the execution speed of most DSP algorithms is limited almost completely by the number of multiplications and additions required. For example, Fig. 28-2 shows the implementation of an FIR digital filter, the most common DSP technique. Using the standard notation, the input signal is referred to by x[ ], while the output signal is denoted by y[ ]. Our task is to calculate the sample at location n in the output signal, i.e., y[n]. An FIR filter performs this calculation by multiplying appropriate samples from the input signal by a group of coefficients, denoted by: a0, a1, a2, a3, …, and then adding the products. In equation form, y[n] is found by:

y[n] = a0 x[n] + a1 x[n-1] + a2 x[n-2] + a3 x[n-3] + …
This is simply saying that the input signal has been convolved with a filter kernel (i.e., an impulse response) consisting of: a0, a1, a2, a3, …. Depending on the application, there may only be a few coefficients in the filter kernel, or many thousands. While there is some data transfer and inequality evaluation in this algorithm, such as to keep track of the intermediate results and control the loops, the math operations dominate the execution time.
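The FIR calculation translates directly into a multiply-accumulate loop. A minimal Python sketch (function and variable names are illustrative; samples before the start of the signal are taken as zero):

```python
def fir_filter(x, a):
    """Compute y[n] = a0*x[n] + a1*x[n-1] + a2*x[n-2] + ...

    x: input signal (list of samples)
    a: filter kernel (coefficients a0, a1, a2, ...)
    """
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(a):   # multiply-accumulate loop
            if n - k >= 0:              # samples before x[0] are zero
                acc += coeff * x[n - k]
        y.append(acc)
    return y

# An impulse input reproduces the filter kernel (the impulse response):
print(fir_filter([1.0, 0.0, 0.0, 0.0], [0.5, 0.25, 0.125]))
```

The inner multiply-accumulate line is exactly the operation a DSP's hardware is optimized to perform; for a kernel of length M, each output sample costs M multiplications and M additions, which is why these two operations dominate the execution time.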

In addition to performing mathematical calculations very rapidly, DSPs must also have a predictable execution time. Suppose you launch your desktop computer on some task, say, converting a word-processing document from one form to another. It doesn't matter if the processing takes ten milliseconds or ten seconds; you simply wait for the action to be completed before you give the computer its next assignment.
In comparison, most DSPs are used in applications where the processing is continuous, not having a defined start or end. For instance, consider an engineer designing a DSP system for an audio signal, such as a hearing aid. If the digital signal is being received at 20,000 samples per second, the DSP must be able to maintain a sustained throughput of 20,000 samples per second. However, there are important reasons not to make it any faster than necessary. As the speed increases, so does the cost, the power consumption, the design difficulty, and so on. This makes an accurate knowledge of the execution time critical for selecting the proper device, as well as the algorithms that can be applied.
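To make the throughput requirement concrete: at 20,000 samples per second, each sample must be fully processed in 50 microseconds. The clock rate below is an assumed figure used only for illustration:

```python
sample_rate = 20_000            # samples per second (from the text)
clock_hz = 100_000_000          # assumed 100 MHz DSP clock, for illustration

time_per_sample_us = 1e6 / sample_rate        # microseconds per sample
cycles_per_sample = clock_hz // sample_rate   # clock cycles per sample

print(f"{time_per_sample_us:.0f} us and {cycles_per_sample} cycles per sample")
```

If the filter kernel plus overhead cannot fit in that per-sample budget, the device is too slow for the application; if it fits with a large margin, a cheaper, lower-power device may be the better choice.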