A microprocessor is a computer processor for which the data-processing logic and control are included on a single integrated circuit (IC), or a small number of ICs. The microprocessor contains the arithmetic, logic, and control circuitry required to perform the functions of a computer's central processing unit (CPU). The IC is capable of interpreting and executing program instructions and performing arithmetic operations.[1] The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results (also in binary form) as output. Microprocessors contain both combinational logic and sequential digital logic, and operate on numbers and symbols represented in the binary number system.
The integration of a whole CPU onto a single or a few integrated circuits using very-large-scale integration (VLSI) greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by highly automated metal–oxide–semiconductor (MOS) fabrication processes, resulting in a relatively low unit price. Single-chip processors increase reliability because there are fewer electrical connections that can fail. As microprocessor designs improve, the cost of manufacturing a chip (with smaller components built on a semiconductor chip the same size) generally stays the same, according to Rock's law.
Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits, typically of the TTL type. Microprocessors combined this into one or a few large-scale ICs. While there is disagreement over who deserves credit for the invention of the microprocessor, the first commercially available microprocessor was the Intel 4004, designed by Federico Faggin and introduced in 1971.[2]
The complexity of an integrated circuit is bounded by physical limitations on the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more complex and powerful chips feasible to manufacture.
A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs addition, subtraction, and operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation (zero value, negative number, overflow, or others). The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction. A single operation code might affect many individual data paths, registers, and other elements of the processor.
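The flag-setting behaviour described above can be sketched in software. The following C fragment is a minimal illustrative model (the names and the 8-bit width are invented for the example, not taken from any particular processor): it performs an addition and updates zero, negative, carry, and overflow flags the way a simple ALU and status register might.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical status register: one flag per ALU condition. */
    typedef struct {
        bool zero;      /* result was zero                    */
        bool negative;  /* most significant bit of result set */
        bool carry;     /* unsigned carry out of bit 7        */
        bool overflow;  /* signed overflow                    */
    } status_flags;

    /* 8-bit addition that updates the flags, as a simple ALU would. */
    static uint8_t alu_add(uint8_t a, uint8_t b, status_flags *f)
    {
        uint16_t wide = (uint16_t)a + (uint16_t)b;  /* keep the carry bit */
        uint8_t  r    = (uint8_t)wide;

        f->zero     = (r == 0);
        f->negative = (r & 0x80) != 0;
        f->carry    = (wide & 0x100) != 0;
        /* Signed overflow: operands share a sign that the result lacks. */
        f->overflow = ((~(a ^ b)) & (a ^ r) & 0x80) != 0;
        return r;
    }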
As integrated circuit technology advanced, it was feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger; allowing more transistors on a chip allowed word sizes to increase from 4- and 8-bit words up to today's 64-bit words. Additional features were added to the processor architecture; more on-chip registers sped up programs, and complex instructions could be used to make more compact programs. Floating-point arithmetic, for example, was often not available on 8-bit microprocessors, but had to be carried out in software. Integration of the floating-point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, sped up floating-point calculations.
Occasionally, physical limitations of integrated circuits made such practices as a bit slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each.
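The slice-to-slice carry handling mentioned above can be modelled briefly in software. This C sketch (illustrative only, not a description of any particular bit-slice chip set) adds two 32-bit words four bits at a time, with each 4-bit "slice" passing its carry to the next, as a cascade of 4-bit ALU chips would in hardware.

    #include <stdint.h>

    /* 32-bit addition performed by eight 4-bit slices with ripple carry. */
    static uint32_t add32_by_4bit_slices(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        unsigned carry  = 0;

        for (int slice = 0; slice < 8; slice++) {
            unsigned sa  = (a >> (4 * slice)) & 0xF;  /* this slice's operand bits */
            unsigned sb  = (b >> (4 * slice)) & 0xF;
            unsigned sum = sa + sb + carry;           /* one slice's 4-bit work    */
            carry = sum >> 4;                         /* carry into the next slice */
            result |= (uint32_t)(sum & 0xF) << (4 * slice);
        }
        return result;
    }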
The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory.
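How a cache decides whether it already holds a requested address can be sketched as follows. The C fragment below models a hypothetical direct-mapped cache (the 256-line, 64-byte-line geometry is invented for the example): an address is split into an index that selects a line and a tag that confirms the line really holds that address.

    #include <stdint.h>
    #include <stdbool.h>

    #define LINE_SIZE 64   /* bytes per cache line (assumed) */
    #define NUM_LINES 256  /* lines in the cache (assumed)   */

    typedef struct {
        bool     valid;            /* line holds real data        */
        uint32_t tag;              /* which memory block it holds */
        uint8_t  data[LINE_SIZE];  /* cached copy of that block   */
    } cache_line;

    static cache_line cache[NUM_LINES];

    /* True if the address can be served from the cache (a "hit"),
     * avoiding the slower trip to external memory. */
    static bool cache_hit(uint32_t addr)
    {
        uint32_t index = (addr / LINE_SIZE) % NUM_LINES;
        uint32_t tag   = addr / (LINE_SIZE * NUM_LINES);
        return cache[index].valid && cache[index].tag == tag;
    }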
The design of some processors has become complicated enough to be difficult to fully test, and this has caused problems at large cloud providers.[6]
Systems on chip (SoCs) often integrate one or more microprocessor and microcontroller cores with other components such as radio modems, and are used in smartphones and tablet computers.
Microprocessors can be selected for differing applications based on their word size, which is a measure of their complexity. Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption.[7] 4-, 8- or 12-bit processors are widely integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system on a chip or microcontroller applications that require extremely low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog-to-digital converters, or both. Some argue that emulating 32-bit arithmetic on an 8-bit chip can end up using more power, because the chip must execute multiple instructions in software for each wide operation.[8] Others argue that modern 8-bit chips are always more power-efficient than 32-bit chips when running equivalent software routines.[9]
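The multi-instruction overhead behind this debate is easy to see in code. In this illustrative C sketch, a 32-bit addition is carried out on a hypothetical 8-bit machine one byte at a time, so a single wide operation costs four narrow adds plus carry handling.

    #include <stdint.h>

    /* 32-bit addition decomposed into 8-bit operations, as an 8-bit CPU
     * would perform it with a sequence of add-with-carry instructions.
     * Operands are arrays of four bytes, least significant byte first. */
    static void add32_on_8bit(const uint8_t a[4], const uint8_t b[4], uint8_t out[4])
    {
        unsigned carry = 0;
        for (int i = 0; i < 4; i++) {
            unsigned sum = a[i] + b[i] + carry;  /* one 8-bit add per byte   */
            out[i] = (uint8_t)sum;
            carry  = sum >> 8;                   /* carry into the next byte */
        }
    }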
Thousands of items that were traditionally not computer-related include microprocessors. These include household appliances, vehicles (and their accessories), tools and test instruments, toys, light switches/dimmers and electrical circuit breakers, smoke alarms, battery packs, and hi-fi audio/visual components (from DVD players to phonograph turntables). Such products as cellular telephones, DVD video systems, and HDTV broadcast systems fundamentally require consumer devices with powerful, low-cost microprocessors. Increasingly stringent pollution control standards effectively require automobile manufacturers to use microprocessor engine management systems to allow optimal control of emissions over the widely varying operating conditions of an automobile. Non-programmable controls would require bulky or costly implementations to achieve the results possible with a microprocessor.
A microprocessor control program (embedded software) can be tailored to fit the needs of a product line, allowing upgrades in performance with minimal redesign of the product. Unique features can be implemented in a product line's various models at negligible production cost.
Microprocessor control of a system can provide control strategies that would be impractical to implement using electromechanical controls or purpose-built electronic controls. For example, an internal combustion engine's control system can adjust ignition timing based on engine speed, load, temperature, and any observed tendency for knocking—allowing the engine to operate on a range of fuel grades.
The advent of low-cost computers on integrated circuits has transformed modern society. General-purpose microprocessors in personal computers are used for computation, text editing, multimedia display, and communication over the Internet. Many more microprocessors are part of embedded systems, providing digital control over myriad objects from appliances to automobiles to cellular phones and industrial process control. Microprocessors perform binary operations based on Boolean logic, named after George Boole. The feasibility of building computing systems on Boolean logic was first demonstrated in a 1938 thesis by master's student Claude Shannon, who later became a professor and is considered the father of information theory. In 1951, microprogramming was invented by Maurice Wilkes at the University of Cambridge from the realisation that the central processor could be controlled by a specialised program in a dedicated ROM.[10] Wilkes is also credited with the idea of symbolic labels, macros, and subroutine libraries.[11]
Following the development of MOS integrated circuit chips in the early 1960s, MOS chips reached higher transistor density and lower manufacturing costs than bipolar integrated circuits by 1964. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on several MOS LSI chips.[12] Designers in the late 1960s were striving to integrate the central processing unit (CPU) functions of a computer onto a handful of MOS LSI chips, called microprocessor unit (MPU) chipsets.
While there is disagreement over who invented the microprocessor,[2][13] the first commercially available microprocessor was the Intel 4004, released as a single MOS LSI chip in 1971.[14] The single-chip microprocessor was made possible with the development of MOS silicon-gate technology (SGT).[15] The earliest MOS transistors had aluminium metal gates, which Italian physicist Federico Faggin replaced with silicon self-aligned gates to develop the first silicon-gate MOS chip at Fairchild Semiconductor in 1968.[15] Faggin later joined Intel and used his silicon-gate MOS technology to develop the 4004, along with Marcian Hoff, Stanley Mazor and Masatoshi Shima, in 1971.[16] The 4004 was designed for Busicom, which had earlier proposed a multi-chip design in 1969, before Faggin's team at Intel changed it into a new single-chip design. The 4-bit Intel 4004 was soon followed by the 8-bit Intel 8008 in 1972. The MP944 chipset used in the F-14 Central Air Data Computer in 1970 has also been cited as an early microprocessor, but was not known to the public until declassified in 1998.
Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on.
The first use of the term "microprocessor" is attributed to Viatron Computer Systems,[17] describing the custom integrated circuit used in their System 21 small computer system announced in 1968.
Since the early 1970s, the increase in capacity of microprocessors has followed Moore's law; this originally suggested that the number of components that can be fitted onto a chip doubles every year. In practice, the doubling has taken closer to two years,[18] and Moore later revised the period accordingly.[19]
The Four-Phase Systems AL1 was an 8-bit bit-slice chip containing eight registers and an ALU.[20] It was designed by Lee Boysel in 1969.[21][22][23] At the time, it formed part of a nine-chip, 24-bit CPU with three AL1s. It was later called a microprocessor when, in response to 1990s litigation by Texas Instruments, Boysel constructed a demonstration system in which a single AL1 with a 1969 date stamp formed part of a courtroom demonstration computer system, together with RAM, ROM, and an input-output device.[24] The AL1 was not sold individually, but was part of the System IV/70, announced in September 1970 and first delivered in February 1972.[25]
In 1968, Garrett AiResearch (which employed designers Ray Holt and Steve Geller) was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy's new F-14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained "a 20-bit, pipelined, parallel multi-microprocessor". The Navy refused to allow publication of the design until 1997; since its release in 1998, the documentation on the CADC and the MP944 chipset has been well known. Ray Holt's autobiographical account of this design and development is presented in his book The Accidental Engineer.[26][27]
Ray Holt graduated from California State Polytechnic University, Pomona in 1968, and began his computer design career with the CADC.[28] From its inception, the design was shrouded in secrecy until 1998, when, at Holt's request, the US Navy allowed the documents into the public domain. Holt has claimed that no one has compared this microprocessor with those that came later.[29] According to Parab et al. (2007),
The scientific papers and literature published around 1971 reveal that the MP944 digital processor used for the F-14 Tomcat aircraft of the US Navy qualifies as the first microprocessor. Although interesting, it was not a single-chip processor, as was not the Intel 4004 – they both were more like a set of parallel building blocks you could use to make a general-purpose form. It contains a CPU, RAM, ROM, and two other support chips like the Intel 4004. It was made from the same P-channel technology, operated at military specifications and had larger chips – an excellent computer engineering design by any standards. Its design indicates a major advance over Intel, and two year earlier. It actually worked and was flying in the F-14 when the Intel 4004 was announced. It indicates that today's industry theme of converging DSP-microcontroller architectures was started in 1971.[30]
In 1990, American engineer Gilbert Hyatt was awarded U.S. Patent No. 4,942,516,[32] based on a 16-bit serial computer he built at his Northridge, California, home in 1969 from boards of bipolar chips, after quitting his job at Teledyne in 1968.[2][33] Though the patent application had been submitted in December 1970, prior to Texas Instruments' filings for the TMX 1795 and TMS 0100, Hyatt's invention was never manufactured.[33][34][35] This nonetheless led to claims that Hyatt was the inventor of the microprocessor, and to the payment of substantial royalties through a Philips N.V. subsidiary,[36] until Texas Instruments prevailed in a complex legal battle in 1996, when the U.S. Patent Office overturned key parts of the patent while allowing Hyatt to keep it.[2][37] Hyatt said in a 1990 Los Angeles Times article that his invention would have been manufactured had his prospective investors backed him, and that the venture investors leaked details of his chip to the industry, though he did not provide evidence to support this claim.[33] In the same article, The Chip author T.R. Reid was quoted as saying that historians may ultimately place Hyatt as a co-inventor of the microprocessor, in the way that Intel's Noyce and TI's Kilby share credit for the invention of the chip in 1958: "Kilby got the idea first, but Noyce made it practical. The legal ruling finally favored Noyce, but they are considered co-inventors. The same could happen here."[33] Hyatt went on to fight a decades-long legal battle with the state of California over alleged unpaid taxes on his patent's windfall after 1990, culminating in a landmark Supreme Court case addressing states' sovereign immunity, Franchise Tax Board of California v. Hyatt (2019).
In 1970–1971, Texas Instruments developed a one-chip CPU replacement for the Datapoint 2200 terminal, the TMX 1795 (later TMC 1795). Like Intel's later 8008, it was rejected by customer Datapoint. According to Gary Boone, the TMX 1795 never reached production, though it reached a prototype state on February 24, 1971.[38] Since it was built to the same specification, its instruction set was very similar to the Intel 8008's.[39][40]
The TMS1802NC, announced September 17, 1971, was the first microcontroller and at launch implemented a four-function calculator. The TMS1802NC, despite its designation, was not part of the TMS 1000 series; it was later redesignated as part of the TMS 0100 series, which was used in the TI Datamath calculator. It was marketed as a calculator-on-a-chip and as "fully programmable", but this programming had to be done during manufacturing. Its chip integrated a CPU with an 11-bit instruction word, 3520 bits (320 instructions) of ROM, and 182 bits of RAM.[39][41][40][42]
The PICO1/GI250 chip, introduced in 1971, was designed by Pico Electronics (Glenrothes, Scotland) and manufactured by General Instrument of Hicksville, New York.
In 1971, Pico Electronics[43] and General Instrument (GI) introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe/Litton Royal Digital III calculator. This chip could also arguably lay claim to be one of the first microprocessors or microcontrollers, having ROM, RAM and a RISC instruction set on-chip. The layout for the four layers of the PMOS process was hand drawn at ×500 scale on mylar film, a significant task at the time given the complexity of the chip.
Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and Marconi-Elliott.[44] The key team members had originally been tasked by Elliott Automation to create an 8-bit computer in MOS and had helped establish a MOS Research Laboratory in Glenrothes, Scotland in 1967.
Calculators were becoming the largest single market for semiconductors, so Pico and GI went on to have significant success in this burgeoning market. GI continued to innovate in microprocessors and microcontrollers with products including the CP1600, IOB1680 and PIC1650.[45] In 1987, the GI Microelectronics business was spun out into the Microchip PIC microcontroller business.
Intel's first microprocessor, the 4004, with cover removed (left) and as actually used (right). An Intel advertisement in Electronic News magazine from 1971 emphasized the 4004's affordability, compactness, ease of programming, and flexibility.
The Intel 4004 is often regarded as the first true microprocessor built on a single chip, though this claim is disputed (see above).[46][47] It was priced at US$60 (equivalent to $470 in 2024).[48] The first known advertisement for the 4004 is dated November 15, 1971, and appeared in Electronic News.[49] The microprocessor was designed by a team consisting of Italian engineer Federico Faggin, American engineers Marcian Hoff and Stanley Mazor, and Japanese engineer Masatoshi Shima.[50]
The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop calculators. Busicom's original design called for a programmable chip set consisting of seven different chips. Three of the chips were to make a special-purpose CPU with its program stored in ROM and its data stored in shift-register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift-register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple I/O device, and a 4-bit central processing unit (CPU). Although not a chip designer, he felt the CPU could be integrated into a single chip, but as he lacked the technical know-how, the idea remained just a wish for the time being.
While the architecture and specifications of the MCS-4 came from the interaction of Hoff with Stanley Mazor, a software engineer reporting to him, and with Busicom engineer Masatoshi Shima during 1969, Mazor and Hoff moved on to other projects. In April 1970, Intel hired Italian engineer Federico Faggin as project leader, a move that ultimately made the single-chip CPU final design a reality (Shima meanwhile designed the Busicom calculator firmware and assisted Faggin during the first six months of the implementation). Faggin, who originally developed silicon-gate technology (SGT) in 1968 at Fairchild Semiconductor[51] and designed the world's first commercial integrated circuit using SGT, the Fairchild 3708, had the right background to lead the project into what would become the first commercial general-purpose microprocessor. Since SGT was his own invention, Faggin also used it to create a new methodology for random logic design that made it possible to implement a single-chip CPU with the proper speed, power dissipation, and cost. The manager of Intel's MOS Design Department at the time of the MCS-4 development was Leslie L. Vadász, but Vadász's attention was focused on the mainstream business of semiconductor memories, so he left the leadership and management of the MCS-4 project to Faggin, who was ultimately responsible for leading the 4004 project to its realization. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped to other customers in late 1971.[citation needed]
The Intel 4004 was followed in 1972 by the Intel 8008, Intel's first 8-bit microprocessor.[52] The 8008 was not, however, an extension of the 4004 design, but instead the culmination of a separate design project at Intel, arising from a contract with Computer Terminals Corporation (CTC) of San Antonio, Texas, for a chip for a terminal they were designing,[53] the Datapoint 2200; fundamental aspects of the design came not from Intel but from CTC. In 1968, CTC's Vic Poor and Harry Pyle developed the original design for the instruction set and operation of the processor. In 1969, CTC contracted two companies, Intel and Texas Instruments, to make a single-chip implementation, known as the CTC 1201.[54] In late 1970 or early 1971, TI dropped out, being unable to make a reliable part. In 1970, with Intel yet to deliver the part, CTC opted to use their own implementation in the Datapoint 2200, using traditional TTL logic instead (thus the first machine to run "8008 code" was not in fact a microprocessor at all, and was delivered a year earlier). Intel's version of the 1201 microprocessor arrived in late 1971, but was too late and too slow, and required a number of additional support chips; CTC had no interest in using it. CTC had originally contracted Intel for the chip, and would have owed them US$50,000 (equivalent to $388,209 in 2024) for the design work.[54] To avoid paying for a chip they did not want (and could not use), CTC released Intel from their contract and allowed them free use of the design.[54] Intel marketed it as the 8008 in April 1972, as the world's first 8-bit microprocessor. It was the basis for the famous "Mark-8" computer kit advertised in the magazine Radio-Electronics in 1974. This processor had an 8-bit data bus and a 14-bit address bus.[55]
The 8008 was the precursor to the successful Intel 8080 (1974), which offered improved performance over the 8008 and required fewer support chips; Federico Faggin conceived and designed it using high-voltage N-channel MOS. The Zilog Z80 (1976), also a Faggin design, used low-voltage N-channel MOS with depletion load; it and the derivative Intel 8-bit processors were all designed with the methodology Faggin created for the 4004. Motorola released the competing 6800 in August 1974, and the similar MOS Technology 6502 was released in 1975 (both designed largely by the same people). The 6502 family rivaled the Z80 in popularity during the 1980s.
A low overall cost, small packaging, simple computer bus requirements, and sometimes the integration of extra circuitry (e.g. the Z80's built-in memory refresh circuitry) allowed the home computer "revolution" to accelerate sharply in the early 1980s. This delivered such inexpensive machines as the Sinclair ZX81, which sold for US$99 (equivalent to $342.41 in 2024). A variation of the 6502, the MOS Technology 6510, was used in the Commodore 64, and yet another variant, the 8502, powered the Commodore 128.
The Western Design Center, Inc. (WDC) introduced the CMOS WDC 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers, as well as in medical implantable-grade pacemakers and defibrillators, and in automotive, industrial, and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by ARM (32-bit) and other microprocessor intellectual property (IP) providers in the 1990s.
Motorola introduced the MC6809 in 1978. It was an ambitious and well thought-through 8-bit design that was source compatible with the 6800, and implemented using purely hard-wired logic (subsequent 16-bit microprocessors typically used microcode to some extent, as CISC design requirements were becoming too complex for pure hard-wired logic).
A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (a.k.a. CDP1802, RCA COSMAC), introduced in 1976, which was used on board the Galileo probe to Jupiter (launched 1989, arrived 1995). The RCA COSMAC was the first to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because a variant was available fabricated using a special production process, silicon on sapphire (SOS), which provided much better protection against cosmic radiation and electrostatic discharge than that of any other processor of the era. Thus, the SOS version of the 1802 was said to be the first radiation-hardened microprocessor.
The RCA 1802 had a static design, meaning that the clock frequency could be made arbitrarily low, or even stopped. This let the Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers or sensors would awaken the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication. Current versions of the Western Design Center 65C02 and 65C816 also have static cores, and thus retain data even when the clock is completely halted.
The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognised the DEC PDP-8 minicomputer instruction set. As such, it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was being incorporated into some military designs until the early 1980s.
The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8.
Another early single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, and moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110.
Intel "upsized" their 8080 design into the 16-bitIntel 8086, the first member of thex86 family, which powers most modernPC type computers.Intel introduced the 8086 as a cost-effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The8088, a version of the 8086 that used an 8-bit external data bus, was the microprocessor in the firstIBM PC. Intel then released the80186 and80188, the80286 and, in 1985, the 32-bit80386, cementing their PC market dominance with the processor family's backwards compatibility. The 80186 and 80188 were essentially versions of the 8086 and 8088, enhanced with some onboard peripherals and a few new instructions. Although Intel's 80186 and 80188 were not used in IBM PC type designs,[dubious –discuss] second source versions from NEC, theV20 and V30 frequently were. The 8086 and successors had an innovative but limited method ofmemory segmentation, while the 80286 introduced a full-featured segmentedmemory management unit (MMU). The 80386 introduced a flat 32-bit memory model with paged memory management.
The Intel x86 processors up to and including the 80386 do not include floating-point units (FPUs). Intel introduced the 8087, 80187, 80287 and 80387 math coprocessors to add hardware floating-point and transcendental function capabilities to the 8086 through 80386 CPUs. The 8087 works with the 8086/8088 and 80186/80188,[58] the 80187 works with the 80186 but not the 80188,[59] the 80287 works with the 80286, and the 80387 works with the 80386. The combination of an x86 CPU and an x87 coprocessor forms a single multi-chip microprocessor; the two chips are programmed as a unit using a single integrated instruction set.[60] The 8087 and 80187 coprocessors are connected in parallel with the data and address buses of their parent processor and directly execute instructions intended for them. The 80287 and 80387 coprocessors are interfaced to the CPU through I/O ports in the CPU's address space; this is transparent to the program, which does not need to know about or access these I/O ports directly, as it accesses the coprocessor and its registers through normal instruction opcodes.
16-bit designs had only been on the market briefly when 32-bit implementations started to appear.
The most significant of the 32-bit designs is the Motorola MC68000, introduced in 1979. The 68k, as it was widely known, had 32-bit registers in its programming model but used 16-bit internal data paths, three 16-bit arithmetic logic units, and a 16-bit external data bus (to reduce pin count), and externally supported only 24-bit addresses (internally it worked with full 32-bit addresses). In PC-based IBM-compatible mainframes, the MC68000 internal microcode was modified to emulate the 32-bit System/370 IBM mainframe.[61] Motorola generally described it as a 16-bit processor. The combination of high performance, large (16 megabytes, or 2²⁴ bytes) memory space and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did other designs in the mid-1980s, including the Atari ST and Amiga.
The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and general production in 1982.[62][63] After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop super microcomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the world's first book-sized super microcomputer, featuring ROM-pack memory cartridges similar to today's gaming consoles. All these systems ran the UNIX System V operating system.
The first commercial, single-chip, fully 32-bit microprocessor available on the market was the HP FOCUS.
Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel's own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the results for the iAPX 432 were partly due to a rushed and therefore suboptimal Ada compiler.[citation needed]
Motorola's success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1984, added full 32-bit data and address buses. The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems, Cromemco) produced desktop-size systems. The MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. The 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68k family faded from use in the early 1990s.
Other large companies designed the 68020 and follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs.[64] The ColdFire processor cores are derivatives of the 68020.
During this time (early to mid-1980s), National Semiconductor introduced a very similar 16-bit pinout, 32-bit internal microprocessor called the NS 16032 (later renamed 32016), with the full 32-bit version named the NS 32032. Later, National Semiconductor produced the NS 32132, which allowed two CPUs to reside on the same memory bus with built-in arbitration. The NS32016/32 outperformed the MC68000/10, but the NS32332—which arrived at approximately the same time as the MC68020—did not have enough performance. The third-generation chip, the NS32532, was different: it had about double the performance of the MC68030, which was released around the same time. The appearance of RISC processors like the AM29000 and MC88000 (both now discontinued) influenced the architecture of the final core, the NS32764. Technically advanced—with a superscalar RISC core, a 64-bit bus, and internal overclocking—it could still execute Series 32000 instructions through real-time translation.
When National Semiconductor decided to leave the Unix market, the chip was redesigned into the Swordfish embedded processor with a set of on-chip peripherals. The chip turned out to be too expensive for the laser printer market and was cancelled. The design team went to Intel and there designed the Pentium processor, which is very similar to the NS32764 core internally. The big success of the Series 32000 was in the laser printer market, where the NS32CG16 with microcoded BitBlt instructions had very good price/performance and was adopted by large companies like Canon. By the mid-1980s, Sequent had introduced the first SMP server-class computer using the NS 32032. This was one of the design's few wins, and it disappeared in the late 1980s. The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others. Other designs included the Zilog Z80000, which arrived too late to market to stand a chance and disappeared quickly.
The ARM first appeared in 1985.[65] This is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space, due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores and integrate them into their own system on a chip products; only a few such vendors, such as Apple, are licensed to modify the ARM cores or create their own. Most cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as symmetric multiprocessor (SMP) applications processors with virtual memory.
From 1993 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least three orders of magnitude. Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.
While 64-bit microprocessor designs have been in use in several markets since the early 1990s (including the Nintendo 64 gaming console in 1996), the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market.
With AMD's introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (also called AMD64), in September 2003, followed by Intel's near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty, as well as new 64-bit software. With operating systems Windows XP x64, Windows Vista x64, Windows 7 x64, Linux, BSD, and macOS that run 64-bit natively, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from the IA-32, as it also doubles the number of general-purpose registers.
The move to 64 bits by PowerPC had been intended since the architecture's design in the early 1990s and was not a major cause of incompatibility. Existing integer registers are extended, as are all related data pathways, but, as was the case with IA-32, both floating-point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.[citation needed]
In 2011, ARM introduced its 64-bit ARMv8 architecture.
In the mid-1980s to early 1990s, a crop of new high-performance reduced instruction set computer (RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles.
The first commercial RISC microprocessor design was released in 1984 by MIPS Computer Systems: the 32-bit R2000 (the R1000 was not released). In 1986, HP released its first system with a PA-RISC CPU. In 1987, the non-Unix Acorn Archimedes, based on the 32-bit, then cache-less ARM2, became the first commercial success using the ARM architecture, then known as Acorn RISC Machine (ARM); the first silicon, the ARM1, appeared in 1985. The R3000 made the design truly practical, and the R4000 was the world's first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000, and DEC Alpha.
In the late 1990s, only two 64-bit RISC architectures were still produced in volume for non-embedded applications: SPARC and Power ISA. As ARM became increasingly powerful, it emerged in the early 2010s as the third RISC architecture in the general-purpose computing segment.
Symmetric multiprocessing (SMP)[66] is a configuration of two, four, or more CPUs (typically in pairs) that has been used in servers, certain workstations, and desktop personal computers since the 1990s. A multi-core processor is a single CPU that contains more than one microprocessor core.
A popular two-socket motherboard from Abit, released in 1999, was the first SMP-enabled PC motherboard aimed at system builders and enthusiasts; the Intel Pentium Pro had been the first commercial CPU offered with SMP support. The Abit BP6 supports two Intel Celeron CPUs, and when used with an SMP-enabled operating system (Windows NT/2000 or Linux), many applications obtain much higher performance than with a single CPU. The early Celerons were easily overclockable, and hobbyists ran these relatively inexpensive CPUs as high as 533 MHz, far beyond Intel's specification. After discovering the capability of these motherboards, Intel removed access to the multiplier in later CPUs.
In 2001, IBM released the POWER4 CPU, a processor developed over five years of research, beginning in 1996, by a team of 250 researchers. The effort was supported by remote collaboration and by assigning younger engineers to work with more experienced engineers. The team's work achieved success with the new microprocessor: the Power4 was a two-in-one CPU that more than doubled performance at half the price of the competition, and a major advance in computing. The business magazine eWeek wrote: "The newly designed 1GHz Power4 represents a tremendous leap over its predecessor". Brad Day, an industry analyst at Giga Information Group, said: "IBM is getting very aggressive, and this server is a game changer".
The Power4 won the Analysts' Choice Award for Best Workstation/Server Processor of 2001, and the POWER line went on to break notable records; a later generation powered the IBM Watson system that won a contest against the best players on the U.S. television show Jeopardy![67]
Intel's codenamed Yonah CPUs launched on January 6, 2006, and were manufactured with two dies packaged on a multi-chip module. In a hotly contested marketplace, AMD and others released new multi-core and multi-processor CPUs: AMD's SMP-enabled Athlon MP CPUs from the Athlon XP line in 2001, Sun's eight-core Niagara and Niagara 2, and AMD's Athlon X2 in June 2007. The companies were engaged in a continuing race for speed, as more demanding software mandated more processing power and faster CPU speeds.
By 2012, dual- and quad-core processors were widely used in PCs and laptops. Newer processors, similar to the higher-cost professional-level Intel Xeons, added further cores that execute instructions in parallel, so software performance typically increases, provided the software is designed to utilize the advanced hardware. Operating systems provide support for multi-core and SMP CPUs, and many software applications, including large-workload and resource-intensive applications such as 3-D games, are programmed to take advantage of multi-core and multi-CPU systems.
Apple, Intel, and AMD currently lead the market with multi-core desktop and workstation CPUs, although they frequently leapfrog each other for the lead in the performance tier. Intel retains higher frequencies and thus has the fastest single-core performance,[68] while AMD is often the leader in multi-threaded routines due to a more advanced ISA and the process node its CPUs are fabricated on.
In 1997, about 55% of all CPUs sold in the world were 8-bit microcontrollers, of which over 2 billion were sold.[69]
In 2002, less than 10% of all the CPUs sold in the world were 32-bit or more. Of all the 32-bit CPUs sold, about 2% were used in desktop or laptop personal computers. Most microprocessors are used in embedded control applications such as household appliances, automobiles, and computer peripherals. Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is just over US$6 (equivalent to $10.49 in 2024).[70]
In 2003, about $44 billion (equivalent to about $75 billion in 2024) worth of microprocessors were manufactured and sold.[71] Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those account for only about 2% of all CPUs sold.[70] The quality-adjusted price of laptop microprocessors improved by −25% to −35% per year in 2004–2010, and the rate of improvement slowed to −15% to −25% per year in 2010–2013.[72]
About 10 billion CPUs were manufactured in 2008. Most new CPUs produced each year are embedded.[73]
^ Warnes, Lionel (2003). "Microprocessors and microcontrollers". Electronic and Electrical Engineering. London: Macmillan Education UK. pp. 443–477. doi:10.1007/978-0-230-21633-4_23. ISBN 978-0-333-99040-7. "[A] microprocessor is not a stand-alone computer, since it lacks memory and input/output control. These are the missing parts that the microcontroller supplies, making it more nearly a complete computer on a chip."
^ Freeman, Wayne (2016). "11 Myths About 8-Bit Microcontrollers". Archived 12 August 2022 at the Wayback Machine. Quote: "Basically, by getting your work done faster, you can put the CPU in sleep mode for longer periods of time. Thus, 32-bit MCUs are more power-efficient than 8-bit MCUs, right? Wrong."
^ RW (3 March 1995). "Interview with Gordon E. Moore". LAIR History of Science and Technology Collections. Los Altos Hills, California: Stanford University. Archived from the original on 4 February 2012.
^ Faggin, Federico; Hoff, Marcian E. Jr.; Mazor, Stanley; Shima, Masatoshi (December 1996). "The History of the 4004". IEEE Micro. 16 (6): 10–20. doi:10.1109/40.546561.
^ The 80187 only has a 16-bit data bus because it used the 80387SX core.
^ "Essentially, the 80C187 can be treated as an additional resource or an extension to the CPU. The 80C186 CPU together with an 80C187 can be used as a single unified system." Intel 80C187 datasheet, p. 3, November 1992 (Order Number: 270640-004).
^ "Timeline: 1982–1984". Physical Sciences & Communications at Bell Labs. Bell Labs, Alcatel-Lucent. 17 January 2001. Archived from the original on 14 May 2011. Retrieved 23 December 2009.
^ a b Turley, Jim (18 December 2002). "The Two Percent Solution". Embedded Systems Design. TechInsights (United Business Media). Archived from the original on 3 April 2015. Retrieved 23 December 2009.
^ Barr, Michael (1 August 2009). "Real men program in C". Embedded Systems Design. TechInsights (United Business Media). p. 2. Archived from the original on 22 October 2012. Retrieved 23 December 2009.