HPC integrates systems administration (including network and security knowledge) with parallel computing and distributed computing into a multidisciplinary field that combines digital electronics, computer architecture, system software, programming languages, algorithms and computational techniques.[1] HPC technologies are the tools and systems used to implement and create high performance computing systems.[2] Since around 2005, HPC systems have shifted from supercomputing to computing clusters and grids.[1] Because clusters and grids depend on networking, HPC systems typically use a collapsed network backbone: the collapsed backbone architecture is simpler to troubleshoot, and upgrades can be applied to a single router rather than to multiple ones. HPC also integrates with data analytics in AI engineering workflows, generating new data streams that expand the ability of simulations to answer "what if" questions.[3]
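The cluster model described above rests on a simple data-parallel pattern: a problem is partitioned into chunks, each worker processes its chunk, and partial results are combined. The sketch below illustrates that pattern on a single machine with Python's standard `multiprocessing` module; on a real cluster the distribution would typically be done across nodes with a framework such as MPI, and the function names here (`partial_sum`, `parallel_sum_of_squares`) are illustrative only.

```python
# A minimal single-machine sketch of the scatter/compute/gather pattern
# used on HPC clusters. Illustration only: real clusters distribute the
# chunks across nodes (e.g. via MPI) rather than across local processes.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker computes its share of the global reduction."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Scatter: split the input into roughly equal chunks, one per worker.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Compute in parallel, then gather: combine the partial results.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

The same structure scales from a laptop to a cluster: only the transport for the scatter and gather steps changes, which is one reason applications not designed with that decomposition in mind are hard to retrofit.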
High-performance computing (HPC) as a term arose after the term "supercomputing".[4] HPC is sometimes used as a synonym for supercomputing; but, in other contexts, "supercomputer" is used to refer to a more powerful subset of "high-performance computers", and the term "supercomputing" becomes a subset of "high-performance computing". The potential for confusion over the use of these terms is apparent.
Because most current applications were not designed for HPC technologies but retrofitted to them, they are not designed or tested to scale to more powerful processors or machines.[2] Since networking clusters and grids use multiple processors and computers, these scaling problems can cripple critical systems in future supercomputing systems. Thus, either the existing tools do not address the needs of the high performance computing community, or the HPC community is unaware of these tools.[2] A few examples of commercial HPC technologies include:
structural engineering for building design
the simulation of car crashes for structural design
molecular interaction for new drug design
the airflow over automobiles or airplanes
climate modeling and weather prediction
genetic research and DNA sequencing
robotics and autonomous vehicle development
electromagnetic simulations for wireless communication
In government and research institutions, scientists simulate galaxy formation and evolution, fusion energy, and global warming, as well as work to create more accurate short- and long-term weather forecasts.[5] The world's tenth most powerful supercomputer in 2008, IBM Roadrunner (located at the United States Department of Energy's Los Alamos National Laboratory)[6] simulated the performance, safety, and reliability of nuclear weapons and certified their functionality.[7]
TOP500 ranks the world's 500 fastest high-performance computers, as measured by the High Performance LINPACK (HPL) benchmark. Not all existing computers are ranked, either because they are ineligible (e.g., they cannot run the HPL benchmark) or because their owners have not submitted an HPL score (e.g., because they do not wish the size of their system to become public information, for defense reasons). In addition, the use of the single LINPACK benchmark is controversial, in that no single measure can test all aspects of a high-performance computer. To help overcome the limitations of the LINPACK test, the U.S. government commissioned one of its originators, Jack Dongarra of the University of Tennessee, to create a suite of benchmark tests that includes LINPACK and others, called the HPC Challenge benchmark suite. This evolving suite has been used in some HPC procurements, but, because it is not reducible to a single number, it has been unable to overcome the publicity advantage of the less useful TOP500 LINPACK test. The TOP500 list is updated twice a year, once in June at the ISC European Supercomputing Conference and again at a US Supercomputing Conference in November.
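At its core, the HPL benchmark times the solution of a large dense linear system Ax = b and converts the elapsed time into a floating-point rate. The toy sketch below, using only the standard library, shows the idea; it is illustration only, and not the real HPL, which runs a blocked, distributed LU factorization across many nodes via MPI. The function names (`solve`, `benchmark`) are assumptions of this sketch.

```python
# Toy LINPACK-style benchmark: solve a random dense system Ax = b by
# Gaussian elimination with partial pivoting, then report GFLOP/s using
# the nominal LINPACK operation count (2/3)n^3 + 2n^2.
import random
import time

def solve(A, b):
    """Solve Ax = b via Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for k in range(n):
        # Partial pivoting: bring the largest-magnitude entry to the pivot.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

def benchmark(n=200):
    """Time one solve of a random n-by-n system; return (x, GFLOP/s)."""
    A = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    x = solve(A, b)
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
    return x, flops / elapsed / 1e9
```

This also makes the criticism above concrete: HPL exercises dense floating-point arithmetic almost exclusively, so a machine's HPL score says little about memory bandwidth, interconnect latency, or irregular workloads, which is what the HPC Challenge suite's additional tests are meant to probe.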
Many ideas for the new wave of grid computing were originally borrowed from HPC.
Traditionally, HPC has involved an on-premises infrastructure, with organizations investing in supercomputers or computer clusters. Over the last decade, cloud computing has grown in popularity in the commercial sector by offering computing resources to organizations regardless of their capacity to invest in hardware.[8] Characteristics such as scalability and containerization have also raised interest in academia.[9] However, cloud security concerns such as data confidentiality are still weighed when deciding between cloud and on-premises HPC resources.[8]
Frontier: boasting 1.353 exaFLOPS, this HPE Cray EX235a system features 614,656 CPU cores and 8,451,520 accelerator cores, making a total of 9,066,176 cores. It operates with Slingshot-11 interconnects at Oak Ridge National Laboratory, USA.[12]
Eagle: powered by Intel Xeon Platinum 8480C 48C 2GHz processors and NVIDIA H100 GPUs, Eagle reaches 561.20 petaFLOPS of computing power, with 2,073,600 cores. It features NVIDIA Infiniband NDR for high-speed connectivity and is hosted by Microsoft Azure, USA.[14]
HPC6: the most powerful industrial supercomputer in the world, HPC6 was developed by Eni and launched in November 2024. With 606 petaFLOPS of computing power, it is used for energy research and operates in Italy. It is located in the Eni Green Data Center in Ferrera Erbognone (PV).[15]
Fugaku: developed by Fujitsu, this system achieves 442.01 petaFLOPS using A64FX 48C 2.2GHz processors and Tofu interconnect D technology. It is located at RIKEN Center for Computational Science, Japan.[16]
Alps: this HPE Cray EX254n system reaches 434.90 petaFLOPS, powered by NVIDIA Grace 72C 3.1GHz processors and NVIDIA GH200 Superchips, connected through Slingshot-11 interconnects. It is located at CSCS, Switzerland.[17]
LUMI: one of Europe's fastest supercomputers, LUMI achieves 379.70 petaFLOPS with AMD Optimized 3rd Generation EPYC 64C 2GHz processors and AMD Instinct MI250X accelerators. It is hosted by CSC, Finland, as part of the EuroHPC initiative.[18]
Leonardo: developed under the EuroHPC initiative, this BullSequana XH2000 system reaches 241.20 petaFLOPS with Xeon Platinum 8358 32C 2.6GHz processors and NVIDIA A100 SXM4 64GB accelerators. It is installed at CINECA, Italy.[19]
Tuolumne: Tuolumne achieves 208.10 petaFLOPS and is powered by AMD 4th Gen EPYC 24C 1.8GHz processors and AMD Instinct MI300A accelerators. It operates at Lawrence Livermore National Laboratory, USA.[20]
MareNostrum 5 ACC: this BullSequana XH3000 system runs at 175.30 petaFLOPS, featuring Xeon Platinum 8460Y+ 32C 2.3GHz processors and NVIDIA H100 64GB accelerators. It is hosted by the Barcelona Supercomputing Center (BSC), Spain, as part of EuroHPC.[21]
Brazell, Jim; Bettersworth, Michael (2005). High Performance Computing (Report). Texas State Technical College. Archived from the original on 2010-07-31. Retrieved 2011-05-16.
Collette, Michael; Corey, Bob; Johnson, John (December 2004). High Performance Tools & Technologies (PDF) (Report). Lawrence Livermore National Laboratory, U.S. Department of Energy. Archived from the original (PDF) on 2017-08-30.