Blade server

From Wikipedia, the free encyclopedia
Supermicro SBI-7228R-T2X blade server, containing two dual-CPU server nodes

A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Blade servers omit many components to save space and minimize power consumption, while still having all the functional components to be considered a computer.[1] Unlike a rack-mount server, a blade server fits inside a blade enclosure, which can hold multiple blade servers and provides services such as power, cooling, networking, various interconnects, and management. Together, blades and the blade enclosure form a blade system, which may itself be rack-mounted. Vendors differ on what they include in the blade itself and in the blade system as a whole.

In a standard server-rack configuration, one rack unit, or 1U (19 inches (480 mm) wide and 1.75 inches (44 mm) tall), defines the minimum possible size of any equipment. The principal benefit and justification of blade computing is that it lifts this restriction so as to reduce size requirements. The most common computer rack form factor is 42U high, which limits the number of discrete computer devices directly mountable in a rack to 42. Blades do not have this limitation; as of 2014, densities of up to 180 servers per blade system (or 1440 servers per rack) are achievable with blade systems.[2]
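
The arithmetic behind these densities can be sketched as follows; a minimal illustration using the figures cited above, where the enclosure-per-rack count is derived for the example rather than taken from any vendor:

```python
# Rack density arithmetic using the figures cited above.
RACK_UNITS = 42                     # common full-height rack (42U)

rack_mount_servers = RACK_UNITS     # one discrete 1U server per unit -> 42

servers_per_blade_system = 180      # cited 2014 blade-system density
servers_per_rack = 1440             # cited rack-level density

# Derived: how many such blade systems one rack must hold.
blade_systems_per_rack = servers_per_rack // servers_per_blade_system

print(rack_mount_servers)           # 42
print(blade_systems_per_rack)       # 8
```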

Blade enclosure

The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade systems typically use bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not run at capacity. By locating these services in one place and sharing them among the blade computers, the overall utilization becomes higher. Which services are provided varies by vendor.

HP BladeSystem c7000 enclosure (populated with 16 blades), with two 3U UPS units below

Power

Computers operate over a range of DC voltages, but utilities deliver power as AC at higher voltages than computers require. Converting this power requires one or more power supply units (PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers often have redundant power supplies, again adding to the bulk and heat output of the design.

The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may come as a power supply in the enclosure or as a dedicated separate PSU supplying DC to multiple enclosures.[3][4] This setup reduces the number of PSUs required to provide a resilient power supply.
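
As a hedged sketch of this consolidation, with every count below assumed for illustration rather than taken from any vendor:

```python
# Hypothetical PSU counts; all values are illustrative assumptions.
blades = 16                  # servers in one enclosure
psus_per_standalone = 2      # redundant PSU pair per discrete server

standalone_total = blades * psus_per_standalone   # 32 PSUs for 16 servers

# A shared enclosure can pool supplies instead, e.g. N+1 redundancy:
needed, spare = 5, 1
enclosure_total = needed + spare                  # 6 PSUs for the same 16

print(standalone_total, enclosure_total)          # 32 6
```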

The popularity of blade servers, and their own appetite for power, has led to an increase in the number of rack-mountable uninterruptible power supply (UPS) units, including units targeted specifically at blade servers (such as the BladeUPS).

Cooling

During operation, electrical and mechanical components produce heat, which a system must dissipate to ensure the proper functioning of its components. Most blade enclosures, like most computing systems, remove heat by using fans.

A frequently underestimated problem when designing high-performance computer systems is the conflict between the amount of heat a system generates and the ability of its fans to remove that heat. The blade's shared power and cooling mean that it does not generate as much heat as traditional servers. Newer blade enclosures feature variable-speed fans and control logic, or even liquid cooling systems,[5][6] that adjust to meet the system's cooling requirements.

At the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling when racks are populated to more than 50% of capacity, especially with early-generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers, because up to 128 blade servers can fit in the same rack that holds only 42 1U rack-mount servers.[7]
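
A back-of-the-envelope comparison of the resulting heat loads, assuming a nominal per-server draw (the wattage is an illustrative assumption, not a measured figure):

```python
# Rough heat-load comparison; per-server wattage is assumed.
watts_per_server = 300                     # nominal average draw

rack_1u_watts = 42 * watts_per_server      # fully populated 1U rack
rack_blade_watts = 128 * watts_per_server  # fully populated blade rack

# Nearly all electrical input ends up as heat the room must remove,
# so the blade rack here needs roughly three times the cooling capacity.
print(rack_1u_watts / 1000, "kW vs", rack_blade_watts / 1000, "kW")  # 12.6 kW vs 38.4 kW
```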

Networking

Blade servers generally include integrated or optional network interface controllers for Ethernet, host adapters for Fibre Channel storage systems, or converged network adapters that combine storage and data via one Fibre Channel over Ethernet interface. In many blades, at least one interface is embedded on the motherboard, and extra interfaces can be added using mezzanine cards.

A blade enclosure can provide individual external ports to which each network interface on a blade will connect. Alternatively, a blade enclosure can aggregate network interfaces into interconnect devices (such as switches) built into the blade enclosure or in networking blades.[8][9]
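
The difference between the two approaches can be sketched as a cable count; the per-blade NIC and uplink counts below are assumptions for illustration:

```python
# Illustrative cable-count comparison for one enclosure.
blades = 16                 # blades in the enclosure
nics_per_blade = 2          # assumed Ethernet interfaces per blade

# Pass-through: every blade NIC surfaces as its own external port.
passthrough_cables = blades * nics_per_blade      # 32 external cables

# Integrated switches: blade NICs terminate inside the chassis and
# reach the network over a handful of (often faster) uplinks.
switches, uplinks_per_switch = 2, 4
aggregated_cables = switches * uplinks_per_switch # 8 external cables

print(passthrough_cables, aggregated_cables)      # 32 8
```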

Storage

While computers typically use hard disks to store operating systems, applications and data, this storage is not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, E-SATA, SCSI, SAS DAS, FC and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed, iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated, either on the chassis or through other blades.

The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade; one example of such an implementation is the Intel Modular Server System.

Other blades

Since blade enclosures provide a standard method for delivering basic services to computer devices, other types of devices can also use blade enclosures. Blades providing switching, routing, storage, SAN and Fibre Channel access can slot into the enclosure to provide these services to all members of the enclosure.

Systems administrators can use storage blades where additional local storage is required.[10][11][12]

Uses

Cray XC40 supercomputer cabinet with 48 blades, each containing 4 nodes with 2 CPUs each

Blade servers function well for specific purposes such as web hosting, virtualization, and cluster computing. Individual blades are typically hot-swappable. As users deal with larger and more diverse workloads, they add more processing power, memory and I/O bandwidth to blade servers. Although blade-server technology in theory allows for open, cross-vendor systems, most users buy modules, enclosures, racks and management tools from the same vendor.

Eventual standardization of the technology might result in more choices for consumers;[13][14] as of 2009, increasing numbers of third-party software vendors had started to enter this growing field.[15][needs update]

Blade servers do not, however, provide the answer to every computing problem. One can view them as a form of productized server farm that borrows from mainframe packaging, cooling and power-supply technology. Very large computing tasks may still require server farms of blade servers, which, because of blade servers' high power density, can suffer even more acutely from the heating, ventilation, and air conditioning problems that affect large conventional server farms.

History

Developers first placed complete microcomputers on cards and packaged them in standard 19-inch racks in the 1970s, soon after the introduction of 8-bit microprocessors. This architecture was used in the industrial process control industry as an alternative to minicomputer-based control systems. Early models stored programs in EPROM and were limited to a single function with a small real-time executive.

The VMEbus architecture (c. 1981) defined a computer interface that included implementation of a board-level computer installed in a chassis backplane, with multiple slots for pluggable boards providing I/O, memory or additional computing.

In the 1990s, the PCI Industrial Computer Manufacturers Group (PICMG) developed a chassis/blade structure for the then-emerging Peripheral Component Interconnect (PCI) bus, called CompactPCI. CompactPCI was invented by Ziatech Corp of San Luis Obispo, California, and developed into an industry standard. Common to these chassis-based computers was that the entire chassis formed a single system: while a chassis might include multiple computing elements to provide the desired level of performance and redundancy, there was always one master board, or two redundant fail-over masters, coordinating the operation of the entire system. Moreover, this system architecture provided management capabilities not present in typical rack-mount computers, much more like those of ultra-high-reliability systems, managing power supplies and cooling fans as well as monitoring the health of other internal components.

The demands of managing hundreds and thousands of servers in the emerging Internet data centers, where the manpower simply did not exist to keep pace, called for a new server architecture. In 1998 and 1999 this new blade server architecture was developed at Ziatech, based on its CompactPCI platform, to house as many as 14 "blade servers" in a standard 19-inch, 9U-high rack-mounted chassis, allowing in this configuration as many as 84 servers in a standard 84-rack-unit 19-inch rack. What this new architecture brought to the table was a set of new interfaces to the hardware, specifically providing the capability to remotely monitor the health and performance of all major replaceable modules, which could be changed or replaced while the system was in operation; the ability to change, replace or add modules within a running system is known as hot-swap. Unlike any other server system, the Ketris blade servers routed Ethernet across the backplane (where the server blades plug in), eliminating more than 160 cables in a single 84-rack-unit 19-inch rack; for a large data center, tens of thousands of failure-prone Ethernet cables would be eliminated. Further, this architecture made it possible to remotely inventory the modules installed in each chassis without the blade servers operating, and to provision servers (power up, install operating systems and application software, e.g. a web server) remotely from a network operations center (NOC). The system architecture, when announced, was called Ketris, named after the ketri sword worn by nomads in such a way that it could be drawn very quickly as needed. Ketris was first envisioned by Dave Bottom, developed by an engineering team at Ziatech Corp in 1999, and demonstrated at the Networld+Interop show in May 2000. Patents were awarded for the Ketris blade server architecture.[citation needed] In October 2000 Ziatech was acquired by Intel Corp, and the Ketris blade server systems became a product of the Intel Network Products Group.[citation needed]

PICMG expanded the CompactPCI specification with the use of standard Ethernet connectivity between boards across the backplane. The PICMG 2.16 CompactPCI Packet Switching Backplane specification was adopted in September 2001.[16] This provided the first open architecture for a multi-server chassis.

The second generation of Ketris was developed at Intel as an architecture for the telecommunications industry, to support the build-out of IP-based telecom services and in particular the LTE (Long Term Evolution) cellular network build-out. PICMG followed with the larger and more feature-rich AdvancedTCA specification, targeting the telecom industry's need for a high-availability, dense computing platform with extended product life (10+ years). While AdvancedTCA systems and boards typically sell for higher prices than blade servers, their operating cost (the manpower to manage and maintain them) is dramatically lower, and for traditional servers operating costs often dwarf acquisition costs. AdvancedTCA is promoted for telecommunications customers; however, in real-world Internet data center deployments, where thermal and other maintenance and operating costs had become prohibitively expensive, this blade server architecture, with remote automated provisioning and health and performance monitoring and management, offered significantly lower operating costs.[clarification needed]

The first commercialized blade-server architecture[citation needed] was invented by Christopher Hipp and David Kirkeby, and their patent was assigned to Houston-based RLX Technologies.[17] RLX, which consisted primarily of former Compaq Computer Corporation employees, including Hipp and Kirkeby, shipped its first commercial blade server in 2001.[18] RLX was acquired by Hewlett-Packard in 2005.[19]

The name blade server appeared when a card included the processor, memory, I/O and non-volatile program storage (flash memory or small hard disk(s)). This allowed manufacturers to package a complete server, with its operating system and applications, on a single card/board/blade. These blades could then operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently. Beyond the most obvious benefit of this packaging (less space consumption), additional efficiency benefits have become clear in power, cooling, management and networking, due to the pooling or sharing of common infrastructure to support the entire chassis rather than providing each of these on a per-server basis.

In 2011, research firm IDC identified the major players in the blade market as HP, IBM, Cisco, and Dell.[20] Other companies selling blade servers include Supermicro and Hitachi.

Blade models

Cisco UCS blade servers in a chassis

The prominent brands in the blade server market are Supermicro, Cisco Systems, HPE, Dell and IBM, though the latter sold its x86 server business to Lenovo in 2014 after selling its consumer PC line to Lenovo in 2005.[21]

In 2009, Cisco announced blades in its Unified Computing System product line, consisting of a 6U-high chassis holding up to 8 blade servers, a heavily modified Nexus 5K switch rebranded as a fabric interconnect, and management software for the whole system.[22] HP's initial line consisted of two chassis models: the c3000, which holds up to 8 half-height ProLiant line blades (also available in tower form), and the c7000 (10U), which holds up to 16 half-height ProLiant blades. Dell's product, the M1000e, is a 10U modular enclosure that holds up to 16 half-height PowerEdge blade servers or 32 quarter-height blades.

References

  1. ^"Data Center Networking – Connectivity and Topology Design Guide"(PDF). Enterasys Networks, Inc. 2011. Archived fromthe original(PDF) on 2013-10-05. Retrieved2013-09-05.
  2. ^"HP updates Moonshot server platform with ARM and AMD Opteron hardware". Incisive Business Media Limited. 9 Dec 2013. Archived fromthe original on 16 April 2014. Retrieved2014-04-25.
  3. ^"HP BladeSystem p-Class Infrastructure". Archived fromthe original on 2006-05-18. Retrieved2006-06-09.
  4. ^Sun Blade Modular System
  5. ^Sun Power and Cooling
  6. ^"HP Thermal Logic technology"(PDF). Archived fromthe original(PDF) on 2007-01-23. Retrieved2007-04-18.
  7. ^"HP BL2x220c". Archived fromthe original on 2008-08-29. Retrieved2008-08-21.
  8. ^Sun Independent I/O
  9. ^HP Virtual Connect
  10. ^IBM BladeCenter HS21Archived October 13, 2007, at theWayback Machine
  11. ^"HP storage blade". Archived fromthe original on 2007-04-30. Retrieved2007-04-18.
  12. ^Verari Storage Blade
  13. ^http://www.techspot.com/news/26376-intel-endorses-industrystandard-blade-design.html TechSpot
  14. ^"Dell calls for blade server standards".news.cnet.com. Archived fromthe original on 2011-12-26.
  15. ^https://www.theregister.co.uk/2009/04/07/ssi_blade_specs/ The Register
  16. ^PICMG specificationsArchived 2007-01-09 at theWayback Machine
  17. ^US 6411506, Hipp, Christopher & Kirkeby, David, "High density web server chassis system and method", published 2002-06-25, assigned toRLX Technologies 
  18. ^"RLX helps data centres with switch to blades". ARN. October 8, 2001. Retrieved2011-07-30.
  19. ^"HP Will Acquire RLX To Bolster Blades". www.informationweek.com. October 3, 2005. Archived fromthe original on January 3, 2013. Retrieved2009-07-24.
  20. ^"Worldwide Server Market Revenues Increase 12.1% in First Quarter as Market Demand Continues to Improve, According to IDC" (Press release). IDC. 2011-05-24. Archived fromthe original on 2011-05-26. Retrieved2015-03-20.
  21. ^"Transitioning x86 to Lenovo".IBM.com. Archived fromthe original on April 5, 2014. Retrieved27 September 2014.
  22. ^"Cisco Unleashes the Power of Virtualization with Industry's First Unified Computing System".Press release. March 16, 2009. Archived fromthe original on March 21, 2009. RetrievedMarch 27, 2017.

External links

Wikimedia Commons has media related to Blade servers.