A single-frame IBM z15 mainframe. Larger-capacity models can have up to four frames in total. This model has blue accents, as compared with the LinuxONE III model with orange highlights.
A pair of IBM mainframes. On the left is the IBM z Systems z13; on the right is the IBM LinuxONE Rockhopper.
An IBM System z9 mainframe
A mainframe computer, informally called a mainframe or big iron,[1] is a computer used primarily by large organizations for critical applications like bulk data processing for tasks such as censuses, industry and consumer statistics, enterprise resource planning, and large-scale transaction processing. A mainframe computer is large but not as large as a supercomputer and has more processing power than some other classes of computers, such as minicomputers, servers, workstations, and personal computers. Most large-scale computer-system architectures were established in the 1960s, but they continue to evolve. Mainframe computers are often used as servers.
The term mainframe was derived from the large cabinet, called a main frame,[2] that housed the central processing unit and main memory of early computers.[3][4][5] Later, the term mainframe was used to distinguish high-end commercial computers from less powerful machines.[6]
High hardware and computational utilization rates through virtualization to support massive throughput
Hot swapping of hardware, such as processors and memory
The high stability and reliability of mainframes enable these machines to run uninterrupted for very long periods of time, with mean time between failures (MTBF) measured in decades.
Mainframes have high availability, one of the primary reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z (previously called z Systems, System z, and zSeries),[vague] Unisys Dorado, and Unisys Libra as among the most secure, with vulnerabilities in the low single digits, as compared to thousands for Windows, UNIX, and Linux.[7] Software upgrades usually require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.
In the late 1950s, mainframes had only a rudimentary interactive interface (the console) and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back-office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used almost exclusively for applications (e.g. airline booking) rather than program development. However, in 1961 the first[8] academic, general-purpose timesharing system that supported software development,[9] CTSS, was released at MIT on an IBM 709, later 7090 and 7094.[10] Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although ultimately supplanted by keyboard/display devices.
By the early 1970s, many mainframes acquired interactive user terminals[NB 1] operating as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through keyboard/typewriter terminals and later character-mode text[NB 2] terminal CRT displays with integral keyboards, or finally from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported general-purpose graphic display terminals, and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces.[citation needed]
The infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling, and reduced physical space requirements compared to server farms.[11]
Inside an IBM System z9 mainframe that has an IBM ThinkPad integrated into the system as a Hardware Management Console (HMC). The HMC is used by the operator to control, e.g., the hardware and the PR/SM configuration. A secondary function is to serve as a low-performance operator console via a proprietary interface. The HMC is not supported as a terminal, and remote access to it is limited to HTTP. Two other ThinkPads serve as Support Elements and backup HMCs.
Modern mainframes can run multiple different instances of operating systems at the same time. This technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication.[12]
Mainframes can add or hot swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not usually available with most server solutions.[citation needed] Modern mainframes, notably the IBM Z servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system). Many mainframe customers run two machines: one in their primary data center and one in their backup data center—fully active, partially active, or on standby—in case there is a catastrophe affecting the first building. Test, development, training, and production workload for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice, many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD (in IBM's case),[citation needed] or with shared, geographically dispersed storage provided by EMC or Hitachi.
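The idea of carving one machine into weighted logical partitions can be illustrated with a minimal sketch. This is a toy model only: the partition names and weights are hypothetical, and real PR/SM weight handling is considerably more involved.

```python
def lpar_shares(total_capacity, weights):
    """Divide machine capacity among logical partitions in proportion
    to their weights -- a toy model of PR/SM-style weighted sharing."""
    total_weight = sum(weights.values())
    return {name: total_capacity * w / total_weight
            for name, w in weights.items()}

# Hypothetical partitions: production, test, and development workloads
# sharing a single machine, as described above.
shares = lpar_shares(100.0, {"PROD": 60, "TEST": 30, "DEV": 10})
```

Here `shares` maps each partition to its proportional slice of capacity (`PROD` receives 60.0 of 100.0 units), mirroring how a weighted scheme keeps production workloads from being starved by test and development partitions.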
Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the late 1950s,[NB 3] mainframe designs have included subsidiary hardware[NB 4] (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Gigabyte to terabyte-size record files are not unusual.[13] Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online,[14] and can access it reasonably quickly. Other server families also offload I/O processing and emphasize throughput computing.
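The channel concept — dedicated hardware draining I/O requests so the CPU never blocks on device transfers — can be sketched with an ordinary worker thread. The file names and the string marker are illustrative; a real channel subsystem executes its own channel programs against physical devices.

```python
import queue
import threading

def channel(requests, completed):
    """Toy 'channel': drains an I/O request queue independently of the
    CPU thread, standing in for a subsidiary I/O processor."""
    while True:
        req = requests.get()
        if req is None:          # sentinel: no more work
            break
        completed.append(f"done:{req}")  # stand-in for a device transfer

requests, completed = queue.Queue(), []
worker = threading.Thread(target=channel, args=(requests, completed))
worker.start()

for block in ("payroll.dat", "billing.dat"):
    requests.put(block)          # the CPU hands off I/O and keeps computing
requests.put(None)
worker.join()
```

The key property modeled here is asynchrony: the producing thread queues transfers and continues, while completion happens on the side, which is the essence of offloading I/O from the central processor.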
Mainframe return on investment (ROI), as with any other computing platform, depends on the platform's ability to scale, support mixed workloads, reduce labor costs, and deliver uninterrupted service for critical business applications, along with several other risk-adjusted cost factors.
Mainframes also have execution integrity characteristics for fault-tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare results, arbitrate between any differences (through instruction retry and failure isolation), then shift workloads "in flight" to functioning processors, including spares, without any impact to operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.[citation needed]
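The execute-twice-and-compare idea can be sketched in a few lines. This is a deliberately simplified model: real hardware performs the comparison per instruction in silicon and can identify which unit is at fault, whereas this toy simply retries once and then swaps in a spare for the second unit.

```python
def lockstep(op, args, units, spares):
    """Run op on two execution units in lock-step and compare results.
    On mismatch: one instruction retry; if the mismatch persists,
    retire the suspect unit in favour of a spare and re-run."""
    for _ in range(2):                   # initial execution + one retry
        a, b = units[0](op, args), units[1](op, args)
        if a == b:
            return a
    units[1] = spares.pop()              # failure isolation (simplified)
    return lockstep(op, args, units, spares)

def good_unit(op, args):
    return op(*args)

def faulty_unit(op, args):
    return op(*args) ^ 1                 # persistent single-bit fault

# A hypothetical add instruction survives one faulty unit transparently:
result = lockstep(lambda x, y: x + y, (2, 3),
                  [good_unit, faulty_unit], [good_unit])
```

The caller never sees the fault: the workload shifts to a functioning unit and the correct result (5) comes back, which is the property the paragraph above describes.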
IBM, with the IBM Z series, continues to be a major manufacturer in the mainframe market. In 2000, Hitachi co-developed the zSeries z900 with IBM to share expenses, and the latest Hitachi AP10000 models are made by IBM. Unisys manufactures ClearPath Libra mainframes, based on earlier Burroughs MCP products, and ClearPath Dorado mainframes, based on Sperry Univac OS 1100 product lines. Hewlett Packard Enterprise sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's GCOS, Stratus OpenVOS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe, and Fujitsu (formerly Amdahl) GS21 mainframes globally. NEC with ACOS and Hitachi with AP10000-VOS3[15] still maintain mainframe businesses in the Japanese market.
The amount of vendor investment in mainframe development varies with market share. Fujitsu and Hitachi both continue to use custom S/390-compatible processors, as well as other CPUs (including POWER and Xeon) for lower-end systems. Bull uses a mixture of Itanium and Xeon processors. NEC uses Xeon processors for its low-end ACOS-2 line, but develops the custom NOAH-6 processor for its high-end ACOS-4 series. IBM also develops custom processors in-house, such as the Telum. Unisys produces code-compatible mainframe systems that range from laptops to cabinet-sized mainframes, using homegrown CPUs as well as Xeon processors. Furthermore, there exists a market for software applications to manage the performance of mainframe implementations. In addition to IBM, significant market competitors include BMC[16] and Precisely;[17] former competitors include Compuware[18][19] and CA Technologies.[20] Starting in the 2010s, cloud computing has offered a less expensive, more scalable alternative.[citation needed]
Several manufacturers and their successors produced mainframe computers from the 1950s until the early 21st century, with gradually decreasing numbers and a gradual transition to simulation on Intel chips rather than proprietary hardware. The US group of manufacturers was first known as "IBM and the Seven Dwarfs":[21]: p.83 usually Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA, although some lists varied. Later, with the departure of General Electric and RCA, it was referred to as IBM and the BUNCH. IBM's dominance grew out of their 700/7000 series and, later, the development of the 360 series mainframes. The latter architecture has continued to evolve into their current zSeries mainframes which, along with the then Burroughs and Sperry (now Unisys) MCP-based and OS 1100 mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. While IBM's zSeries can still run 24-bit System/360 code, the 64-bit IBM Z CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the US were Siemens and Telefunken in Germany, ICL in the United Kingdom, Olivetti in Italy, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War;[citation needed] the BESM series and Strela are examples of independently designed Soviet computers. Elwro in Poland was another Eastern Bloc manufacturer, producing the ODRA, R-32 and R-34 mainframes.
Shrinking demand and tough competition started a shakeout in the market in the early 1970s—RCA sold out to UNIVAC and GE sold its business to Honeywell; between 1986 and 1990 Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986.
In 1984 estimated sales of desktop computers ($11.6 billion) exceeded mainframe computers ($11.4 billion) for the first time. IBM received the vast majority of mainframe revenue.[22] During the 1980s, minicomputer-based systems grew more sophisticated and were able to displace the lower end of the mainframes. These computers, sometimes called departmental computers, were typified by the Digital Equipment Corporation VAX series.
In 1991, AT&T Corporation briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop infamously predicted that the last mainframe would be unplugged in 1996; in 1993, he cited Cheryl Currid, a computer industry analyst, as saying that the last mainframe "will stop working on December 31, 1999",[23] a reference to the anticipated Year 2000 problem (Y2K).
That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly the People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.). In late 2000, IBM introduced 64-bit z/Architecture, acquired numerous software companies such as Cognos and introduced those software products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the recent overall downturn in the server hardware market or to model cycle effects. For example, in the 4th quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year. But MIPS (millions of instructions per second) shipments increased 4% per year over the past two years.[24] Alsop had himself photographed in 2000, symbolically eating his own words ("death to the mainframe").[25]
In 2012, NASA powered down its last mainframe, an IBM System z9.[26] However, IBM's successor to the z9, the z10, led a New York Times reporter to state four years earlier that "mainframe technology—hardware, software and services—remains a large and lucrative business for I.B.M., and mainframes are still the back-office engines behind the world's financial markets and much of global commerce".[27] As of 2010, while mainframe technology represented less than 3% of IBM's revenues, it "continue[d] to play an outsized role in Big Blue's results".[28]
IBM has continued to launch new generations of mainframes: the IBM z13 in 2015,[29] the z14 in 2017,[30][31] the z15 in 2019,[32] and the z16 in 2022, the latter featuring among other things an "integrated on-chip AI accelerator" and the new Telum microprocessor.[33]
A supercomputer is a computer at the leading edge of data processing capability, with respect to calculation speed. Supercomputers are used for scientific and engineering problems (high-performance computing) which crunch numbers and data,[34] while mainframes focus on transaction processing. The differences are:
Mainframes are built to be reliable for transaction processing (measured by TPC metrics; not used or helpful for most supercomputing applications) as it is commonly understood in the business world: the commercial exchange of goods, services, or money. A typical transaction, as defined by the Transaction Processing Performance Council,[35] updates a database system for inventory control (goods), airline reservations (services), or banking (money) by adding a record. A transaction may refer to a set of operations including disk read/writes, operating system calls, or some form of data transfer from one subsystem to another which is not measured by the processing speed of the CPU. Transaction processing is not exclusive to mainframes but is also used by microprocessor-based servers and online networks.
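A transaction in this sense — a set of database updates that either all happen or none happen — can be sketched with SQLite, using hypothetical inventory and order tables. The table names and quantities are purely illustrative.

```python
import sqlite3

# In-memory database with a toy inventory, standing in for the kinds
# of goods/services/money records the TPC definition describes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 10)")
conn.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")
conn.commit()

# One transaction: record an order AND decrement stock atomically.
# The connection context manager commits on success and rolls back
# if any statement inside it raises.
with conn:
    conn.execute("UPDATE inventory SET qty = qty - 3 WHERE item = 'widget'")
    conn.execute("INSERT INTO orders VALUES ('widget', 3)")

remaining = conn.execute(
    "SELECT qty FROM inventory WHERE item = 'widget'").fetchone()[0]
```

After the transaction, the stock and the order log agree (7 units remain, one order of 3 recorded); had either statement failed, neither change would have been visible — the integrity property transaction processing systems are built around.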
Supercomputer performance is measured in floating point operations per second (FLOPS)[36] or in traversed edges per second (TEPS),[37] metrics that are not very meaningful for mainframe applications, while mainframes are sometimes measured in millions of instructions per second (MIPS), although the definition depends on the instruction mix measured.[38] Examples of integer operations measured by MIPS include adding numbers together, checking values, or moving data around in memory (moving information to and from storage, so-called I/O, is what matters most for mainframes; movement within memory helps only indirectly). The floating point operations measured by FLOPS are mostly addition, subtraction, and multiplication of binary floating point numbers, with enough digits of precision to model continuous phenomena such as weather prediction and nuclear simulations. The only recently standardized decimal floating point, not used in supercomputers, is appropriate for monetary values such as those in mainframe applications. In terms of computational speed, supercomputers are more powerful.[39]
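The distinction between binary and decimal floating point can be seen directly in Python, whose decimal module implements the same general decimal arithmetic that IEEE 754-2008 standardized (modern IBM mainframes implement decimal floating point in hardware; this sketch only illustrates the arithmetic, not that hardware).

```python
from decimal import Decimal

# Binary floating point, the arithmetic FLOPS measures, cannot represent
# most decimal fractions exactly, so rounding error appears immediately:
binary_sum = 0.1 + 0.2        # slightly more than 0.3

# Decimal floating point represents such values exactly, which is why it
# suits monetary quantities:
decimal_sum = Decimal("0.10") + Decimal("0.20")
```

Here `binary_sum` does not equal 0.3, while `decimal_sum` equals exactly 0.30 — acceptable for modeling continuous physical phenomena, but not for a ledger that must balance to the cent.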
Mainframes and supercomputers cannot always be clearly distinguished; up until the early 1990s, many supercomputers were based on a mainframe architecture with supercomputing extensions. An example of such a system is the HITAC S-3800, which was instruction-set compatible with IBM System/370 mainframes and could run the Hitachi VOS3 operating system (a fork of IBM MVS).[40] The S-3800 can therefore be seen as simultaneously a supercomputer and an IBM-compatible mainframe.
In 2007,[41] an amalgamation of the different technologies and architectures for supercomputers and mainframes led to the so-called gameframe.
^ Beach, Thomas E. (August 29, 2016). "Types of Computers". Computer Concepts and Terminology. Los Alamos: University of New Mexico. Archived from the original on August 3, 2020. Retrieved October 2, 2020.
^ Singh, Jai P.; Morgan, Robert P. (October 1971). Educational Computer Utilization and Computer Communications (PDF) (Report). St. Louis, MO: Washington University. p. 13. National Aeronautics and Space Administration Grant No. Y/NGL-26-008-054. Retrieved March 8, 2022. Much of the early development in the time-sharing field took place on university campuses.8 Notable examples are the CTSS (Compatible Time-Sharing System) at MIT, which was the first general purpose time-sharing system...
^ Resource consumption for billing and performance purposes is measured in units of a million service units (MSUs), but the definition of MSU varies from processor to processor so that MSUs are useless for comparing processor performance.