Binary multiplier

From Wikipedia, the free encyclopedia
Electronic circuit used to multiply binary numbers

A binary multiplier is an electronic circuit used in digital electronics, such as a computer, to multiply two binary numbers.

A variety of computer arithmetic techniques can be used to implement a digital multiplier. Most techniques involve computing the set of partial products, which are then summed together using binary adders. This process is similar to long multiplication, except that it uses a base-2 (binary) numeral system.

History

Between 1947 and 1949 Arthur Alec Robinson worked for English Electric, first as a student apprentice and then as a development engineer. Crucially, during this period he studied for a PhD degree at the University of Manchester, where he worked on the design of the hardware multiplier for the early Mark 1 computer. However, until the late 1970s, most minicomputers did not have a multiply instruction, so programmers used a "multiply routine"[1][2][3] which repeatedly shifts and accumulates partial results, often written using loop unwinding. Mainframe computers had multiply instructions, but they performed the same sorts of shifts and adds as a "multiply routine".

Early microprocessors also had no multiply instruction. Though the multiply instruction became common with the 16-bit generation,[4] at least two 8-bit processors had one: the Motorola 6809, introduced in 1978,[5] and the Intel MCS-51 family, developed in 1980. A multiply instruction later appeared in the Atmel AVR 8-bit microcontrollers, including the ATmega, ATtiny and ATxmega families.

As more transistors per chip became available due to larger-scale integration, it became possible to put enough adders on a single chip to sum all the partial products at once, rather than reuse a single adder to handle each partial product one at a time.

Because some common digital signal processing algorithms spend most of their time multiplying, digital signal processor designers sacrifice considerable chip area in order to make the multiply as fast as possible; a single-cycle multiply–accumulate unit often used up most of the chip area of early DSPs.

Unsigned integer multiplication

Binary long multiplication

The method taught in school for multiplying decimal numbers is based on calculating partial products, shifting them to the left and then adding them together. The most difficult part is to obtain the partial products, as that involves multiplying a long number by one digit (from 0 to 9):

    123
  × 456
  =====
    738   (this is 123 × 6)
   615    (this is 123 × 5, shifted one position to the left)
+ 492     (this is 123 × 4, shifted two positions to the left)
  =====
  56088

A binary computer performs exactly the same multiplication, but with binary numbers. In binary encoding each long number is multiplied by one digit (either 0 or 1), and that is much easier than in decimal, as the product by 0 or 1 is just 0 or the same number. Therefore, the multiplication of two binary numbers comes down to calculating partial products (which are 0 or the first number), shifting them left, and then adding them together (a binary addition, of course):

       1011   (this is binary for decimal 11)
     × 1110   (this is binary for decimal 14)
     ======
       0000   (this is 1011 × 0)
      1011    (this is 1011 × 1, shifted one position to the left)
     1011     (this is 1011 × 1, shifted two positions to the left)
  + 1011      (this is 1011 × 1, shifted three positions to the left)
  =========
   10011010   (this is binary for decimal 154)

This is much simpler than in the decimal system, as there is no table of multiplication to remember: just shifts and adds.

This method is mathematically correct and has the advantage that a small CPU may perform the multiplication by using the shift and add features of its arithmetic logic unit rather than a specialized circuit. The method is slow, however, as it involves many intermediate additions, and these additions are time-consuming. Faster multipliers may be engineered in order to do fewer additions; a modern processor might implement a dedicated parallel adder for the partial products, letting the multiplication of two 64-bit numbers be done with only 6 rounds of additions, rather than 63.
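
The following is a minimal C sketch of such a multiply routine (the function name is illustrative, not from any particular machine): one conditional addition per multiplier bit, exactly the shift-and-accumulate loop described above.

#include <stdint.h>
#include <stdio.h>

/* Shift-and-add "multiply routine": accumulate a shifted copy of the
   multiplicand for every 1 bit of the multiplier.                    */
uint64_t shift_add_mul(uint32_t a, uint32_t b)
{
    uint64_t product = 0;
    uint64_t shifted = a;          /* holds a << i at step i          */
    while (b != 0) {
        if (b & 1)                 /* partial product is a, not 0     */
            product += shifted;
        shifted <<= 1;             /* move to the next bit position   */
        b >>= 1;
    }
    return product;
}

int main(void)
{
    printf("%llu\n", (unsigned long long)shift_add_mul(11, 14)); /* 154 */
    return 0;
}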

A concrete view

Suppose we want to multiply two unsigned 8-bit integers together: a[7:0] and b[7:0]. We can produce eight partial products by performing eight 1-bit multiplications, one for each bit in multiplicand a:

p0[7:0] = a[0] × b[7:0] = {8{a[0]}} & b[7:0]
p1[7:0] = a[1] × b[7:0] = {8{a[1]}} & b[7:0]
p2[7:0] = a[2] × b[7:0] = {8{a[2]}} & b[7:0]
p3[7:0] = a[3] × b[7:0] = {8{a[3]}} & b[7:0]
p4[7:0] = a[4] × b[7:0] = {8{a[4]}} & b[7:0]
p5[7:0] = a[5] × b[7:0] = {8{a[5]}} & b[7:0]
p6[7:0] = a[6] × b[7:0] = {8{a[6]}} & b[7:0]
p7[7:0] = a[7] × b[7:0] = {8{a[7]}} & b[7:0]

where the following Verilog notation is used:

  • {8{a[0]}} means repeating a[0] (the 0th bit of a) 8 times.
  • a[7:0] stands for selecting a from its 7th bit to its 0th bit, inclusive, for a total of 8 bits.
  • the & symbol stands for bitwise AND.

In order to obtain our product, we then need to add up all eight of our partial products, as shown here:

                                                p0[7] p0[6] p0[5] p0[4] p0[3] p0[2] p0[1] p0[0]
                                       +  p1[7] p1[6] p1[5] p1[4] p1[3] p1[2] p1[1] p1[0]   0
                                 +  p2[7] p2[6] p2[5] p2[4] p2[3] p2[2] p2[1] p2[0]   0     0
                           +  p3[7] p3[6] p3[5] p3[4] p3[3] p3[2] p3[1] p3[0]   0     0     0
                     +  p4[7] p4[6] p4[5] p4[4] p4[3] p4[2] p4[1] p4[0]   0     0     0     0
               +  p5[7] p5[6] p5[5] p5[4] p5[3] p5[2] p5[1] p5[0]   0     0     0     0     0
         +  p6[7] p6[6] p6[5] p6[4] p6[3] p6[2] p6[1] p6[0]   0     0     0     0     0     0
   +  p7[7] p7[6] p7[5] p7[4] p7[3] p7[2] p7[1] p7[0]   0     0     0     0     0     0     0
-----------------------------------------------------------------------------------------------
P[15] P[14] P[13] P[12] P[11] P[10]  P[9]  P[8]  P[7]  P[6]  P[5]  P[4]  P[3]  P[2]  P[1]  P[0]

In other words, P[15:0] is produced by summing p0, p1 << 1, p2 << 2, and so forth, to produce our final unsigned 16-bit product:

P[15:0] = p0[7:0] + (p1[7:0] << 1) + (p2[7:0] << 2) + (p3[7:0] << 3) + (p4[7:0] << 4) + (p5[7:0] << 5) + (p6[7:0] << 6) + (p7[7:0] << 7)
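
As a sanity check, here is a C sketch of this scheme (the helper name mul8x8 is made up for illustration): it forms the eight AND-masked partial products and sums them with the shifts shown above.

#include <stdint.h>
#include <stdio.h>

/* 8 x 8 -> 16-bit unsigned multiply, built exactly as in the text:
   pi = {8{a[i]}} & b, then P = sum of (pi << i).                   */
static uint16_t mul8x8(uint8_t a, uint8_t b)
{
    uint16_t P = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t mask = ((a >> i) & 1) ? 0xFF : 0x00;  /* {8{a[i]}}  */
        uint16_t pi  = (uint16_t)(mask & b);          /* & b[7:0]   */
        P += pi << i;                                 /* pi << i    */
    }
    return P;
}

int main(void)
{
    printf("%u\n", (unsigned)mul8x8(11, 14));   /* prints 154 */
    return 0;
}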

Building on top of smaller blocks

Suppose we now want to multiply two unsigned 16-bit integers together, u[15:0] and v[15:0], using the 8-by-8-bit multiplier from before. Using the partial products method (justified by the distributive law) and working in 8-bit chunks, we have:

p0[15:0] = u[7:0]  × v[7:0]
p1[15:0] = u[15:8] × v[7:0]
p2[15:0] = u[7:0]  × v[15:8]
p3[15:0] = u[15:8] × v[15:8]

The final product would then be:

P[31:0] = p0 + (p1 << 8) + (p2 << 8) + (p3 << 16)

Note that if only P[15:0] (the lower 16 bits of the result) is required, no computation of p3 is needed.
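
A C sketch of the composition, reusing the illustrative mul8x8 helper from the previous section (repeated here, in compact form, so the example stands alone; all names are hypothetical):

#include <stdint.h>
#include <stdio.h>

/* The 8 x 8 -> 16-bit block from the previous section. */
static uint16_t mul8x8(uint8_t a, uint8_t b)
{
    uint16_t P = 0;
    for (int i = 0; i < 8; i++)
        if ((a >> i) & 1)          /* partial product is b << i */
            P += (uint16_t)b << i;
    return P;
}

/* 16 x 16 -> 32-bit multiply from four 8 x 8 partial products. */
static uint32_t mul16x16(uint16_t u, uint16_t v)
{
    uint32_t p0 = mul8x8(u & 0xFF, v & 0xFF);   /* u[7:0]  × v[7:0]  */
    uint32_t p1 = mul8x8(u >> 8,   v & 0xFF);   /* u[15:8] × v[7:0]  */
    uint32_t p2 = mul8x8(u & 0xFF, v >> 8);     /* u[7:0]  × v[15:8] */
    uint32_t p3 = mul8x8(u >> 8,   v >> 8);     /* u[15:8] × v[15:8] */
    return p0 + (p1 << 8) + (p2 << 8) + (p3 << 16);
}

int main(void)
{
    printf("%lu\n", (unsigned long)mul16x16(1234, 5678)); /* 7006652 */
    return 0;
}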

Hardware implementation

As described above, the process of multiplication can be split into 3 steps:[6][7]

  • generating the partial products
  • reducing the partial products
  • computing the final product

Shift-add

Older multiplier architectures employed a shifter and accumulator to sum each partial product, often one partial product per cycle, trading off speed for die area. To achieve the ability to perform one accumulation per cycle, a fast adder (something faster than ripple-carry) is required.[8]

Modern multipliers

Modern multiplier architectures use the (modified) Baugh–Wooley algorithm,[9][10][11][12] Wallace trees, or Dadda multipliers to add the partial products together in a single cycle.

In a fast multiplier, the partial-product reduction (i.e. computing partial sums) process usually contributes the most to the delay, power, and area of the multiplier.[6] For speed, the "reduce partial product" stages are typically implemented as a carry-save adder composed of compressors, and the "compute final product" step is implemented as a fast adder.

A compressor is any device that takes more bits as input than it generates as output. In the context of multiplier engineering, it refers to an adder with carry-in and carry-out. The basic compressor is the full adder, a "3:2 compressor": many fast multipliers use such a compressor implemented in static CMOS.
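
As an illustration, a full adder's two outputs are simply the parity and majority of its three inputs; applied bitwise to whole words, this gives one carry-save reduction step. The following is a behavioural sketch in C, not any particular hardware library:

#include <stdint.h>
#include <stdio.h>

/* One carry-save step: a full adder at every bit position acts as a
   3:2 compressor, reducing three addends to a sum word and a carry
   word with no carry propagation between columns.                   */
static void compress_3_2(uint32_t a, uint32_t b, uint32_t c,
                         uint32_t *sum, uint32_t *carry)
{
    *sum   = a ^ b ^ c;                           /* per-bit parity   */
    *carry = ((a & b) | (a & c) | (b & c)) << 1;  /* per-bit majority,
                                                     weighted one up  */
}

int main(void)
{
    uint32_t s, cy;
    compress_3_2(11, 14, 7, &s, &cy);
    printf("%u\n", s + cy);   /* one ordinary add finishes: 32 */
    return 0;
}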

To achieve better performance in the same area, or the same performance in a smaller area, multiplier designs may use higher-order compressors such as 7:3 compressors;[7][6] implement the compressors in faster logic (such as transmission gate logic, pass transistor logic, or domino logic);[8] connect the compressors in a different pattern; or some combination of these.

The performance of the Wallace tree implementation is sometimes improved by modified Booth encoding of one of the two multiplicands, which reduces the number of partial products that must be summed.
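
The sketch below (C, illustrative names) recodes a 16-bit multiplier into eight radix-4 Booth digits in {−2, −1, 0, +1, +2}, so only eight partial products, each an easily computed multiple of the multiplicand, remain to be summed:

#include <stdint.h>
#include <stdio.h>

/* Radix-4 (modified) Booth recoding: the 16-bit multiplier becomes
   eight digits in {-2,-1,0,+1,+2}, one per overlapping 3-bit window
   b[2i+1], b[2i], b[2i-1] (with an implicit b[-1] = 0).             */
static void booth_radix4(int16_t b, int digit[8])
{
    static const int recode[8] = { 0, 1, 1, 2, -2, -1, -1, 0 };
    uint32_t x = (uint32_t)(uint16_t)b << 1;   /* append b[-1] = 0 */
    for (int i = 0; i < 8; i++)
        digit[i] = recode[(x >> (2 * i)) & 7];
}

int main(void)
{
    int digit[8];
    booth_radix4(-1234, digit);

    /* Reconstruct the multiplier as sum of digit[i] * 4^i to check. */
    int32_t v = 0;
    for (int i = 0; i < 8; i++)
        v += digit[i] * (1 << (2 * i));
    printf("%d\n", v);   /* prints -1234 */
    return 0;
}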

Single-cycle multiplier

A "single cycle" multiplier (or "fast multiplier") is purecombinational logic.

Schematic of a 2-bit by 2-bit binary multiplier, using IEEE Std 91/91a-1991 US symbols, implemented with two XOR gates and six AND gates.

Extension to other data types

Signed integers

If b had been a signed integer instead of an unsigned integer, then the partial products would need to have been sign-extended up to the width of the product before summing. If a had been a signed integer, then partial product p7 would need to be subtracted from the final sum, rather than added to it.

The above array multiplier can be modified to support two's complement notation signed numbers by inverting several of the product terms and inserting a one to the left of the first and last partial product terms:

                                                          1  ~p0[7]  p0[6]  p0[5]  p0[4]  p0[3]  p0[2]  p0[1]  p0[0]
                                               +  ~p1[7]  p1[6]  p1[5]  p1[4]  p1[3]  p1[2]  p1[1]  p1[0]      0
                                        +  ~p2[7]  p2[6]  p2[5]  p2[4]  p2[3]  p2[2]  p2[1]  p2[0]      0      0
                                 +  ~p3[7]  p3[6]  p3[5]  p3[4]  p3[3]  p3[2]  p3[1]  p3[0]      0      0      0
                          +  ~p4[7]  p4[6]  p4[5]  p4[4]  p4[3]  p4[2]  p4[1]  p4[0]      0      0      0      0
                   +  ~p5[7]  p5[6]  p5[5]  p5[4]  p5[3]  p5[2]  p5[1]  p5[0]      0      0      0      0      0
            +  ~p6[7]  p6[6]  p6[5]  p6[4]  p6[3]  p6[2]  p6[1]  p6[0]      0      0      0      0      0      0
+  1    p7[7] ~p7[6] ~p7[5] ~p7[4] ~p7[3] ~p7[2] ~p7[1] ~p7[0]      0      0      0      0      0      0      0
---------------------------------------------------------------------------------------------------------------
 P[15]  P[14]  P[13]  P[12]  P[11]  P[10]   P[9]   P[8]   P[7]   P[6]   P[5]   P[4]   P[3]   P[2]   P[1]   P[0]

where ~p represents the complement (inverted value) of p.


There are many simplifications in the bit array above that are not shown and are not obvious.

  • The sequences of one complemented bit followed by noncomplemented bits implement a two's complement trick to avoid sign extension.
  • In the p7 row (a noncomplemented bit followed by complemented bits), the bits are inverted because this term is subtracted: negating it in two's complement inverts every bit and adds a 1 in the least significant position.

For both types of sequences, the last bit is flipped and an implicit −1 should be added directly below the MSB. When the +1 from the two's complement negation for p7 in bit position 0 (LSB) and all the −1's in bit columns 7 through 14 (where each of the MSBs are located) are added together, they can be simplified to the single 1 that "magically" is floating out to the left. For an explanation and proof of why flipping the MSB saves us the sign extension, see a computer arithmetic book.[13]
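
The following C sketch checks the simpler formulation from the start of this section, sign-extending each partial product and subtracting the one produced by the sign bit a[7]; it is a behavioural model, not the inverted-bit array itself:

#include <stdint.h>
#include <stdio.h>

/* Signed 8 x 8 -> 16-bit multiply: every partial product is b,
   sign-extended and shifted; the one selected by the sign bit a[7]
   carries weight -2^7 and is therefore subtracted.                 */
static int16_t smul8x8(int8_t a, int8_t b)
{
    int32_t P = 0;
    for (int i = 0; i < 8; i++) {
        if (((uint8_t)a >> i) & 1) {
            int32_t pi = (int32_t)b * (1 << i);  /* sign-extended b << i */
            P += (i == 7) ? -pi : pi;            /* subtract the p7 term */
        }
    }
    return (int16_t)P;
}

int main(void)
{
    printf("%d\n", smul8x8(-11, 14));   /* prints -154 */
    return 0;
}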

Floating-point numbers

A binary floating-point number contains a sign bit, significant bits (known as the significand) and exponent bits (for simplicity, we do not consider the base and combination fields). The sign bits of the operands are XOR'd to get the sign of the answer. Then, the two exponents are added to get the exponent of the result. Finally, multiplication of the operands' significands will return the significand of the result. However, if the result of the binary multiplication is higher than the total number of bits for a specific precision (e.g. 32, 64, 128), rounding is required and the exponent is changed appropriately.
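
A heavily simplified C sketch of single-precision multiplication along these lines (it assumes normal, finite inputs, ignores subnormals, exponent overflow and round-to-nearest, truncating instead):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static float mul_float(float fa, float fb)
{
    uint32_t a, b;
    memcpy(&a, &fa, 4);
    memcpy(&b, &fb, 4);

    uint32_t sign = (a ^ b) & 0x80000000u;           /* XOR of sign bits */
    int32_t  exp  = (int32_t)((a >> 23) & 0xFF)      /* add exponents,   */
                  + (int32_t)((b >> 23) & 0xFF) - 127; /* remove one bias */

    /* 24-bit significands with the implicit leading 1 restored. */
    uint64_t sa = (a & 0x7FFFFFu) | 0x800000u;
    uint64_t sb = (b & 0x7FFFFFu) | 0x800000u;
    uint64_t prod = sa * sb;                         /* 48-bit product   */

    /* Product of two values in [1,2) lies in [1,4): renormalize. */
    if (prod & (1ULL << 47)) { prod >>= 24; exp += 1; }
    else                     { prod >>= 23; }

    uint32_t out = sign | ((uint32_t)exp << 23) | ((uint32_t)prod & 0x7FFFFFu);
    float r;
    memcpy(&r, &out, 4);
    return r;
}

int main(void)
{
    printf("%g (exact: %g)\n", mul_float(3.5f, -2.25f), 3.5f * -2.25f);
    return 0;   /* both print -7.875 */
}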


References
  1. Rather, Elizabeth D.; Colburn, Donald R.; Moore, Charles H. (1996) [1993]. "The evolution of Forth". In Bergin, Thomas J.; Gibson, Richard G. (eds.). History of Programming Languages II. Association for Computing Machinery. pp. 625–670. doi:10.1145/234286.1057832. ISBN 0201895021.
  2. Davies, A.C.; Fung, Y.T. (1977). "Interfacing a hardware multiplier to a general-purpose microprocessor". Microprocessors. 1 (7): 425–432. doi:10.1016/0308-5953(77)90004-6.
  3. Rafiquzzaman, M. (2005). "§2.5.1 Binary Arithmetic: Multiplication of Unsigned Binary Numbers". Fundamentals of Digital Logic and Microcomputer Design. Wiley. p. 46. ISBN 978-0-47173349-2.
  4. Rafiquzzaman 2005, §7.3.3 Addition, Subtraction, Multiplication and Division of Signed and Unsigned Numbers, p. 251.
  5. Kant, Krishna (2007). "§2.11.2 16-Bit Microprocessors". Microprocessors and Microcontrollers: Architecture, Programming and System Design 8085, 8086, 8051, 8096. PHI Learning. p. 57. ISBN 9788120331914.
  6. Rouholamini, Mahnoush; Kavehie, Omid; Mirbaha, Amir-Pasha; Jasbi, Somaye Jafarali; Navi, Keivan. "A New Design for 7:2 Compressors" (PDF).
  7. Leong, Yuhao; Lo, HaiHiung; Drieberg, Michael; Sayuti, Abu Bakar; Sebastian, Patrick. "Performance Comparison Review of 8-3 Compressor on FPGA".
  8. Chang, Peng (2008). "A Reconfigurable Digital Multiplier and 4:2 Compressor Cells Design".
  9. Baugh, Charles Richmond; Wooley, Bruce A. (December 1973). "A Two's Complement Parallel Array Multiplication Algorithm". IEEE Transactions on Computers. C-22 (12): 1045–1047. doi:10.1109/T-C.1973.223648. S2CID 7473784.
  10. Hatamian, Mehdi; Cash, Glenn (1986). "A 70-MHz 8-bit × 8-bit parallel pipelined multiplier in 2.5-μm CMOS". IEEE Journal of Solid-State Circuits. 21 (4): 505–513. Bibcode:1986IJSSC..21..505H. doi:10.1109/jssc.1986.1052564.
  11. Gebali, Fayez (2003). "Baugh–Wooley Multiplier" (PDF). University of Victoria, CENG 465 Lab 2. Archived (PDF) from the original on 2018-04-14. Retrieved 2018-04-14.
  12. Reynders, Nele; Dehaene, Wim (2015). Ultra-Low-Voltage Design of Energy-Efficient Digital Circuits. Analog Circuits and Signal Processing. Springer. doi:10.1007/978-3-319-16136-5. ISBN 978-3-319-16135-8. ISSN 1872-082X. LCCN 2015935431.
  13. Parhami, Behrooz (2000). Computer Arithmetic: Algorithms and Hardware Designs. Oxford University Press. ISBN 0-19-512583-5.
  • Hennessy, John L.; Patterson, David A. (1990). "Section A.2, section A.9". Computer Architecture: A Quantitative Approach. Morgan Kaufmann. pp. A-3..A-6, A-39..A-49. ISBN 978-0-12383872-8.
