By contrast, software is a set of written instructions that can be stored and run by hardware. Hardware derived its name from the fact it is hard or rigid with respect to changes, whereas software is soft because it is easy to change.
Hardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware.
Some of the earliest computing devices date back to the seventeenth century. For example, in 1642, French mathematician Blaise Pascal designed a gear-based device called the Pascaline that could add and subtract. Then, in 1676, Gottfried Leibniz invented the stepped reckoner, which could also divide and multiply. Due to the limitations of contemporary fabrication and design flaws, Leibniz' reckoner was not very functional, but similar devices (the Leibniz wheel) remained in use into the 1970s.[3] In the 19th century, Englishman Charles Babbage invented the difference engine, a mechanical device to calculate polynomials for astronomical purposes.[4] Babbage also designed a general-purpose computer that was never built. Much of the design was incorporated into the earliest computers: punch cards for input and output, memory, an arithmetic unit analogous to central processing units, and even a primitive programming language similar to assembly language.[5]
In 1936, Alan Turing developed the concept of the universal Turing machine to model any type of computer, demonstrating that no machine could solve the decision problem.[6] The universal Turing machine was a type of stored-program computer capable of mimicking the operations of any Turing machine (computer model) based on the software instructions passed to it. The storage of computer programs is key to the operation of modern computers and is the connection between computer hardware and software.[7] Even prior to this, in the mid-19th century mathematician George Boole invented Boolean algebra—a system of logic where each proposition is either true or false. Boolean algebra is now the basis of the circuits that model the transistors and other components of integrated circuits that make up modern computer hardware.[8] In 1945, Turing finished the design for a computer (the Automatic Computing Engine) that was never built.[9]
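As a minimal sketch of the idea (not drawn from the sources cited above), the following Python snippet composes an XOR function from the basic Boolean operations AND, OR, and NOT, the same operations that hardware logic gates implement, and prints its truth table; the construction is illustrative only.

```python
# Illustrative only: XOR composed from the basic Boolean operations AND, OR, NOT,
# mirroring how Boolean algebra models the behavior of logic circuits.

def xor(a: bool, b: bool) -> bool:
    return (a or b) and not (a and b)

# Print the truth table of the composed "circuit".
for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))
```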
Growth in processor performance (as measured by benchmarks),[13] 1978–2010
Computer architecture involves balancing various goals, such as cost, speed, availability, and energy efficiency. Designers must have a thorough understanding of hardware requirements and diverse aspects of computing, ranging from compilers to integrated circuit design.[14] Cost has also become a significant constraint for manufacturers seeking to sell their products for less than competitors offering very similar hardware, and profit margins have also been reduced.[15] Even when performance is not increasing, the cost of components has been dropping over time due to improved manufacturing techniques that have fewer components rejected at the quality assurance stage.[16]
The most common instruction set architecture (ISA)—the interface between a computer's hardware and software—is based on the one devised by von Neumann in 1945.[17] Despite the separation of the computing unit and the I/O system in many diagrams, typically the hardware is shared, with a bit in the computing unit indicating whether it is in computation or I/O mode.[18] Common types of ISAs include CISC (complex instruction set computer), RISC (reduced instruction set computer), vector operations, and hybrid modes.[19] CISC uses a larger, more expressive instruction set so that fewer instructions are needed to perform a given task.[20] Based on the recognition that only a few instructions are commonly used, RISC shrinks the instruction set for added simplicity, which also enables the inclusion of more registers.[21] After the invention of RISC in the 1980s, RISC-based architectures that used pipelining and caching to increase performance displaced CISC architectures, particularly in applications with restrictions on power usage or space (such as mobile phones). From 1986 to 2003, the annual rate of improvement in hardware performance exceeded 50 percent, enabling the development of new computing devices such as tablets and mobile phones.[22] Alongside the density of transistors, DRAM memory as well as flash and magnetic disk storage also became exponentially more compact and cheaper. The rate of improvement slackened off in the twenty-first century.[23]
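A rough back-of-the-envelope calculation suggests why pipelining raised performance so markedly. The five-stage depth and 1,000-instruction stream below are illustrative assumptions rather than figures from the cited sources, and real pipelines lose some of the ideal gain to hazards and stalls.

```python
# Illustrative assumptions: a 5-stage pipeline and a 1,000-instruction stream.
stages = 5
instructions = 1_000

unpipelined_cycles = instructions * stages      # each instruction occupies every stage in turn
pipelined_cycles = stages + (instructions - 1)  # fill the pipeline once, then finish one per cycle

print(unpipelined_cycles, pipelined_cycles)
print(f"ideal speedup: {unpipelined_cycles / pipelined_cycles:.1f}x")
```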
In the twenty-first century, increases in performance have been driven by increasing exploitation of parallelism.[24] Applications are often parallelizable in two ways: either the same function is running across multiple areas of data (data parallelism) or different tasks can be performed simultaneously with limited interaction (task parallelism).[25] These forms of parallelism are accommodated by various hardware strategies, including instruction-level parallelism (such as instruction pipelining), vector architectures and graphics processing units (GPUs) that implement data parallelism, and thread-level and request-level parallelism (both implementing task-level parallelism).[25]
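The following sketch illustrates the two programming patterns using Python's standard concurrent.futures module; the worker functions are hypothetical stand-ins, and the example shows the pattern rather than hardware-level parallelism itself.

```python
# Sketch of data parallelism vs. task parallelism; the worker functions are
# hypothetical stand-ins invented for illustration.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x                        # the same function applied to many data items

def count_words(text):
    return len(text.split())            # one independent task

def longest_word(text):
    return max(text.split(), key=len)   # another independent task

text = "hardware and software form a usable computing system"

with ThreadPoolExecutor() as pool:
    # Data parallelism: one function, many pieces of data.
    squares = list(pool.map(square, range(8)))
    # Task parallelism: different tasks running concurrently with limited interaction.
    futures = [pool.submit(count_words, text), pool.submit(longest_word, text)]
    results = [f.result() for f in futures]

print(squares)
print(results)
```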
Microarchitecture, also known as computer organization, refers to high-level hardware questions such as the design of the CPU, memory, and memory interconnect.[26] The memory hierarchy ensures that memory which is quicker to access (and more expensive) is located closer to the CPU, while slower, cheaper memory for large-volume storage is located further away.[27] Memory is typically segregated to separate programs from data and limit an attacker's ability to alter programs.[28] Most computers use virtual memory to simplify addressing for programs, using the operating system to map virtual memory to different areas of the finite physical memory.[29]
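As an illustrative sketch of virtual-to-physical address translation, the snippet below maps a virtual address to a physical one; the 4 KiB page size and the page-table contents are assumptions chosen for the example, not details from the cited sources.

```python
# Toy virtual-to-physical address translation; page size and page table are illustrative.
PAGE_SIZE = 4096                   # 4 KiB pages (assumption)
page_table = {0: 7, 1: 3, 2: 12}   # virtual page number -> physical frame number (assumption)

def translate(virtual_address: int) -> int:
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page]       # a missing entry would correspond to a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))      # virtual page 1 maps to physical frame 3 -> 0x3234
```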
Computer processors generate heat, and excessive heat impacts their performance and can harm the components. Many computer chips will automatically throttle their performance to avoid overheating. Computers also typically have mechanisms for dissipating excessive heat, such as air or liquid coolers for the CPU and GPU and heatsinks for other components, such as the RAM. Computer cases are also often ventilated to help dissipate heat from the computer.[30] Data centers typically use more sophisticated cooling solutions to keep the operating temperature of the entire center safe. Air-cooled systems are more common in smaller or older data centers, while liquid-cooled immersion (where each computer is surrounded by cooling fluid) and direct-to-chip (where the cooling fluid is directed to each computer chip) can be more expensive but are also more efficient.[31] Most computers are designed to be more powerful than their cooling system, but their sustained operations cannot exceed the capacity of the cooling system.[32] While performance can be temporarily increased when the computer is not hot (overclocking),[33] in order to protect the hardware from excessive heat, the system will automatically reduce performance or shut down the processor if necessary.[32] Processors also will shut off or enter a low power mode when inactive to reduce heat.[34] Power delivery as well as heat dissipation are the most challenging aspects of hardware design,[35] and have been the limiting factor to the development of smaller and faster chips since the early twenty-first century.[34] Increases in performance require a commensurate increase in energy use and cooling demand.[36]
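A highly simplified sketch of such a threshold-based thermal policy might look as follows; the temperature limits and clock steps are illustrative assumptions, and real processors implement this logic in firmware and hardware rather than in application software.

```python
# Illustrative thresholds and clock steps; real processors do this in firmware/hardware.
MAX_TEMP_C = 95         # begin throttling above this temperature (assumption)
SHUTDOWN_TEMP_C = 105   # shut down to protect the hardware (assumption)

def next_clock_ghz(temp_c: float, clock_ghz: float) -> float:
    if temp_c >= SHUTDOWN_TEMP_C:
        return 0.0                          # emergency shutdown
    if temp_c >= MAX_TEMP_C:
        return max(clock_ghz - 0.2, 0.8)    # step the clock down while hot
    return min(clock_ghz + 0.1, 4.0)        # cool enough: allow boosting up to a ceiling

print(f"{next_clock_ghz(98.0, 3.6):.1f}")   # throttling: 3.4
print(f"{next_clock_ghz(70.0, 3.6):.1f}")   # boosting: 3.7
```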
The personal computer is one of the most common types of computer due to its versatility and relatively low price.
Desktop personal computers have a monitor, a keyboard, a mouse, and a computer case. The computer case holds the motherboard, fixed or removable disk drives for data storage, the power supply, and may contain other peripheral devices such as modems or network interfaces. Some models of desktop computers integrated the monitor and keyboard into the same case as the processor and power supply. Separating the elements allows the user to arrange the components in a pleasing, comfortable array, at the cost of managing power and data cables between them.
Laptops are designed for portability but operate similarly to desktop PCs.[37] They may use lower-power or reduced-size components, with lower performance than a similarly priced desktop computer.[38] Laptops contain the keyboard, display, and processor in one case. The monitor in the folding upper cover of the case can be closed for transportation, to protect the screen and keyboard. Instead of a mouse, laptops may have a touchpad or pointing stick.
Tablets are portable computers that use a touch screen as the primary input device. Tablets generally weigh less and are smaller than laptops.[citation needed] Some tablets include fold-out keyboards or offer connections to separate external keyboards. Some models of laptop computers have a detachable keyboard, which allows the system to be configured as a touch-screen tablet. They are sometimes called 2-in-1 detachable laptops or tablet-laptop hybrids.[39]
Mobile phones are designed to have an extended battery life and light weight, while having less functionality than larger computers. They have diverse hardware architectures, often including antennas, microphones, cameras, GPS devices, and speakers. Power and data connections vary between phones.[40]
A mainframe computer is a much larger computer that typically fills a room and may cost many hundreds or thousands of times as much as a personal computer. Mainframes are designed to perform large numbers of calculations for governments and large enterprises.
In the 1960s and 1970s, more and more departments started to use cheaper, dedicated systems for specific purposes like process control and laboratory automation. A minicomputer, or colloquially mini, is a class of smaller computers that was developed in the mid-1960s[41][42] and sold for much less than mainframe[43] and mid-size computers from IBM and its direct competitors.
Supercomputers can cost hundreds of millions of dollars. They are designed to maximize performance in floating-point arithmetic and execute batch programs that may take weeks to complete. Due to the need for efficient communication between parallel programs, the speed of the internal network is a critical priority.[44]
Warehouse-scale computers are larger versions of cluster computers that came into fashion with software as a service provided via the internet. Their design is intended to minimize cost per operation and power usage; a warehouse and the computers inside it can cost over $100 million, and the computers must be replaced every few years. Although availability is crucial for SaaS products, the software is designed to compensate for availability failures—unlike supercomputers.[44]
Embedded systems have the most variation in their processing power and cost: from an 8-bit processor that could cost less than US$0.10, to higher-end processors capable of billions of operations per second and costing over US$100. Cost is a particular concern with these systems, with designers often choosing the cheapest option that satisfies the performance requirements.[46]
A computer case encloses most of the components of a desktop computer system. It provides mechanical support and protection for internal elements such as the motherboard, disk drives, and power supply, and controls and directs the flow of cooling air over internal components. The case is also part of the system to control electromagnetic interference radiated by the computer and protects internal parts from electrostatic discharge. Large tower cases provide space for multiple disk drives or other peripherals and usually stand on the floor, while desktop cases provide less expansion room. All-in-one style designs include a video display built into the same case. Portable and laptop computers require cases that provide impact protection for the unit. Hobbyists may decorate the cases with colored lights, paint, or other features, in an activity called case modding.
Most personal computer power supply units meet the ATX standard and convert from alternating current (AC) at between 120 and 277 volts provided from a power outlet to direct current (DC) at a much lower voltage: typically 12, 5, or 3.3 volts.[47]
Components directly attached to, or part of, the motherboard include:
At least one CPU (central processing unit), which performs the majority of computational tasks required for a computer to operate.[49] Often described informally as the brain of the computer,[50] the CPU fetches program instructions from random-access memory (RAM), decodes and executes them, then returns results for further processing by other components. This process is known as the instruction cycle. Modern CPUs are microprocessors fabricated on a metal–oxide–semiconductor (MOS) integrated circuit (IC) using advanced semiconductor device fabrication techniques, often employing photolithography. They are typically cooled using a heatsink and fan or a liquid-cooling system. Many contemporary CPUs integrate an on-die graphics processing unit (GPU), eliminating the need for a discrete GPU in basic systems. CPU performance is influenced by clock speed—measured in gigahertz (GHz)—with common consumer processors ranging from 1 GHz to 5 GHz.[citation needed] Additionally, there is a growing trend toward multi-core designs, where multiple processing cores are included on a single chip, enabling greater parallelism and improved multitasking performance.[50]
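As a toy illustration of the instruction cycle described above, the following sketch fetches, decodes, and executes a three-instruction program; the miniature "ISA" and the program are invented purely for illustration.

```python
# Toy fetch-decode-execute loop; the three-instruction "ISA" is invented for illustration.
memory = [("LOAD", 5), ("ADD", 3), ("HALT", 0)]   # program held in "RAM"
accumulator = 0
program_counter = 0

while True:
    opcode, operand = memory[program_counter]     # fetch the next instruction
    program_counter += 1
    if opcode == "LOAD":                          # decode and execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)   # 8
```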
The internal bus connects the CPU to main memory via multiple communication lines—typically 50 to 100—divided into address, data, and control buses, each handling specific types of signals.[51] Historically, parallel buses were dominant, but in the twenty-first century, high-speed serial buses (often using serializer/deserializer (SerDes) technology) have largely replaced them, enabling greater data throughput over fewer physical connections. Examples include PCI Express and USB.[52] In systems with multiple processors, an interconnect bus is used, traditionally coordinated by a northbridge chip, which links the CPU, memory, and high-speed peripherals such as PCI. The southbridge handles communication with slower I/O devices such as storage and USB ports.[53] However, in modern architectures like Intel QuickPath Interconnect or AMD Ryzen-based systems, these functions are increasingly integrated into the CPU itself, forming a system on a chip (SoC)-like design.
Random-access memory (RAM) stores code and data actively used by the CPU, organized in a memory hierarchy optimized for access speed and predicted reuse. At the top of this hierarchy are registers, located within the CPU core, offering the fastest access but extremely limited capacity.[54] Below registers are multiple levels of cache memory—L1, L2, and sometimes L3—typically implemented using static random-access memory (SRAM). Caches have greater capacity than registers but less than main memory, and while slower than registers, they are significantly faster than dynamic random-access memory (DRAM), which is used for main RAM.[55] Caching improves performance by keeping frequently used data close to the CPU and prefetching data likely to be needed soon, thereby reducing memory latency.[55][56] When data is not found in the cache (a cache miss), it is retrieved from main memory. RAM is volatile, meaning its contents are lost when the system loses power.[57] In modern systems, DRAM is often of the DDR SDRAM type, such as DDR4 or DDR5.
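The following sketch models a small direct-mapped cache in front of main memory and counts hits and misses; the cache size and access pattern are illustrative assumptions rather than details from the cited sources.

```python
# Illustrative assumptions: a 4-line direct-mapped cache and a short access pattern.
CACHE_LINES = 4
cache = {}            # line index -> block address currently held in that line
hits = misses = 0

for block in [0, 1, 0, 4, 1, 0, 5, 0]:
    line = block % CACHE_LINES
    if cache.get(line) == block:
        hits += 1                 # cache hit: served quickly from SRAM
    else:
        misses += 1               # cache miss: fetch from slower DRAM and fill the line
        cache[line] = block

print(f"hits={hits} misses={misses}")   # hits=3 misses=5
```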
Permanent storage, or non-volatile memory, typically has a higher capacity and lower cost than main memory but takes much longer to access. Historically, such storage was typically provided in the form of a hard drive, but solid-state drives (SSDs) are becoming cheaper and are much faster, leading to their increasing adoption. USB drives and network or cloud storage are also options.[58]
Read-only memory (ROM) contains firmware such as the BIOS (Basic Input/Output System), which initializes hardware during the boot process—known as booting or bootstrapping—when the computer is powered on.[citation needed] This firmware is stored in a non-volatile memory chip, traditionally ROM or flash memory, allowing updates in modern systems via firmware updates.[59]
The BIOS manages essential functions including boot sequence and power management through the ACPI standard. However, most modern motherboards have transitioned to the Unified Extensible Firmware Interface (UEFI), which offers enhanced capabilities, faster startup times, support for GUID Partition Table (GPT), and secure boot features.
The CMOS (complementary MOS) battery, which powers the CMOS memory for date and time in the BIOS chip. This battery is generally a watch battery.
An expansion card in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard or backplane to add functionality to a computer system via the expansion bus. Expansion cards can be used to obtain or expand on features not offered by the motherboard.[61] Using expansion cards for a video processor used to be common, but modern computers are more likely to instead have a GPU integrated into the motherboard.[62]
Most computers also have an external data bus to connect peripheral devices to the motherboard. Most commonly, Universal Serial Bus (USB) is used.[63] Unlike the internal bus, the external bus is connected using a bus controller that allows the peripheral system to operate at a different speed from the CPU.[63] Input and output devices are used to receive data from the external world or to send data out, respectively. Common examples include keyboards and mice (input) and displays and printers (output). Network interface controllers are used to access the Internet.[64] USB ports also supply power to connected devices: a standard USB port supplies power at 5 volts and up to 500 milliamps (2.5 watts), while powered USB ports with additional pins may allow the delivery of more power, up to 6 amps at 24 volts.[65]
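Applying the relation P = V × I to the figures given above reproduces the quoted 2.5 watts for a standard port and yields roughly 144 watts for a high-power port; the latter is a derived figure shown here only as arithmetic.

```python
# Applying P = V * I to the USB figures quoted above.
standard_watts = 5 * 0.5    # 5 volts at 500 milliamps
powered_watts = 24 * 6      # powered ports: up to 6 amps at 24 volts (derived figure)

print(standard_watts)       # 2.5
print(powered_watts)        # 144
```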
Because computer parts contain hazardous materials, there is a growing movement to recycle old and outdated devices.[67] Computer hardware contains hazardous substances such as lead, mercury, nickel, and cadmium. According to the EPA, these e-wastes negatively affect the environment if not disposed of properly. Hardware manufacturing also requires significant energy, while recycling components helps reduce air and water pollution as well as greenhouse gas emissions.[68] In many regions, improper disposal of computer equipment is illegal, and legislation requires recycling through government-approved facilities. Recycling can be facilitated by removing reusable parts such as RAM, DVD drives, graphics cards, hard drives, SSDs, and other similar components.
Many materials used in computer hardware can be recovered through recycling for use in future production. The reuse of tin, silicon, iron, aluminum, and various plastics commonly found in computers and other electronics helps reduce the costs of manufacturing new systems. Hardware components also frequently contain copper, gold, tantalum,[69][70] silver, platinum, palladium, and lead, along with other valuable materials suitable for reclamation.[71][72]
The central processing unit contains several toxic materials. It may include lead and chromium in metal plates. Resistors, semiconductors, infrared detectors, stabilizers, cables, and wires can contain cadmium, while computer circuit boards may also contain mercury and chromium.[73] Improper disposal of these materials and chemicals can pose serious hazards to the environment.
E-waste byproducts cause harm when they leach into groundwater, are burned, or are mishandled during recycling. Health problems associated with such toxins include impaired mental development, cancer, and damage to the lungs, liver, and kidneys.[74] Computer components contain many toxic substances, such as dioxins, polychlorinated biphenyls (PCBs), cadmium, chromium, radioactive isotopes, and mercury. Circuit boards contain considerable quantities of lead-tin solders that are more likely to leach into groundwater or create air pollution due to incineration.[75]
Recycling of computer hardware is considered environmentally friendly because it prevents hazardous waste, including heavy metals and carcinogens, from entering the atmosphere, landfills, or waterways. While electronics make up a small fraction of total waste generated, they are far more dangerous. There is stringent legislation designed to enforce and encourage the sustainable disposal of appliances, the most notable being the Waste Electrical and Electronic Equipment Directive of the European Union and the United States National Computer Recycling Act.[76]
E-cycling, the recycling of computer hardware, refers to the donation, reuse, shredding, and general collection of used electronics. Generically, the term refers to the process of collecting, brokering, disassembling, repairing, and recycling the components or metals contained in used or discarded electronic equipment, otherwise known as electronic waste (e-waste). E-cyclable items include, but are not limited to, televisions, computers, microwave ovens, vacuum cleaners, telephones and cellular phones, stereos, VCRs, and DVD players: just about anything that has a cord, a light, or takes some kind of battery.[77]
Some companies, such as Dell and Apple, will recycle computers of their own make or any other make. Otherwise, a computer can be donated to Computer Aid International, an organization that recycles and refurbishes old computers for hospitals, schools, universities, etc.[78]
^ Ahmed, Rizwan; Dharaskar, Rajiv V. (2009). Mobile Forensics: An Introduction from Indian Law Enforcement Perspective. Springer. p. 177. ISBN 978-3-642-00405-6.
^ Henderson, Rebecca M.; Newell, Richard G., eds. (2011). Accelerating Energy Innovation: Insights from Multiple Sectors. Chicago: University of Chicago Press. p. 180. ISBN 978-0226326832.
^ Huang, Han-Way (2014). The Atmel AVR Microcontroller: MEGA and XMEGA in Assembly and C. Australia; United Kingdom: Delmar Cengage Learning. p. 4. ISBN 978-1133607298.