| LINPACK benchmarks | |
|---|---|
| Original authors | Jack Dongarra, Jim Bunch, Cleve Moler, and Gilbert Stewart |
| Initial release | 1979 |
| Website | netlib |
The LINPACK benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense n × n system of linear equations Ax = b, which is a common task in engineering.
The latest version of these benchmarks is used to build the TOP500 list, ranking the world's most powerful supercomputers.[1]
The aim is to approximate how fast a computer will perform when solving real problems. It is a simplification, since no single computational task can reflect the overall performance of a computer system. Nevertheless, the LINPACK benchmark performance can provide a useful corrective to the peak performance quoted by the manufacturer. The peak performance is the maximal theoretical performance a computer can achieve, calculated as the machine's frequency, in cycles per second, times the number of operations per cycle it can perform. The actual performance will always be lower than the peak performance.[2]
The performance of a computer is a complex issue that depends on many interconnected variables. The performance measured by the LINPACK benchmark consists of the number of 64-bit floating-point operations, generally additions and multiplications, a computer can perform per second, also known as FLOPS. However, a computer's performance when running actual applications is likely to be far below the maximal performance it achieves running the appropriate LINPACK benchmark.[3]
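As a worked example of that peak-performance calculation, the short C sketch below multiplies frequency, operations per cycle, and core count. The machine parameters here are hypothetical, chosen only to make the arithmetic concrete:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical machine: 2.0 GHz clock, 16 FP64 operations per
       cycle per core (e.g. two 8-wide fused multiply-add units),
       and 64 cores. */
    double ghz = 2.0;            /* billions of cycles per second */
    double ops_per_cycle = 16.0; /* FP64 operations per cycle per core */
    double cores = 64.0;

    /* Peak = frequency x operations per cycle x number of cores. */
    double peak_gflops = ghz * ops_per_cycle * cores;
    printf("Theoretical peak: %.1f GFLOPS\n", peak_gflops); /* 2048.0 */
    return 0;
}
```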
The name of these benchmarks comes from the LINPACK package, a collection of linear algebra Fortran subroutines widely used in the 1980s and initially tightly linked to the LINPACK benchmark. The LINPACK package has since been replaced by other libraries.
The LINPACK benchmark report first appeared in 1979 as an appendix to the LINPACK user's manual.[4]
LINPACK was designed to help users estimate the time required by their systems to solve a problem using the LINPACK package, by extrapolating the performance results obtained by 23 different computers solving a matrix problem of size 100.
This matrix size was chosen due to memory and CPU limitations at that time: 10,000 floating-point entries from −1 to 1 are randomly generated to fill a general, dense matrix, and LU decomposition with partial pivoting is then used for the timing.
Over the years, additional versions with different problem sizes, like matrices of order 300 and 1000, and different constraints were released, allowing new optimization opportunities as hardware architectures started to implement matrix–vector and matrix–matrix operations.[5]
Parallel processing was also introduced in the LINPACK parallel benchmark in the late 1980s.[2]
In 1991, the LINPACK benchmark was modified to solve problems of arbitrary size,[6] enabling high-performance computers (HPC) to get near to their asymptotic performance.
Two years later this benchmark was used for measuring the performance of the first TOP500 list.
LINPACK 100 is very similar to the original benchmark published in 1979 along with the LINPACK users' manual.[7] The solution is obtained by Gaussian elimination with partial pivoting, with $\tfrac{2}{3}n^3 + 2n^2$ floating-point operations, where n = 100 is the order of the dense matrix A that defines the problem. Its small size and the lack of software flexibility don't allow most modern computers to reach their performance limits. However, it can still be useful to predict performance in numerically intensive user-written code using compiler optimization.[2]
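A minimal, self-contained C sketch of the computation LINPACK 100 times: Gaussian elimination with partial pivoting on a dense 100 × 100 system, with the rate derived from the nominal $\tfrac{2}{3}n^3 + 2n^2$ operation count. This is only an illustration of the method, not the official benchmark driver:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

#define N 100

int main(void) {
    static double a[N][N], b[N], x[N];
    /* Fill A and b with pseudo-random entries in [-1, 1]. */
    srand(1);
    for (int i = 0; i < N; i++) {
        b[i] = 2.0 * rand() / RAND_MAX - 1.0;
        for (int j = 0; j < N; j++)
            a[i][j] = 2.0 * rand() / RAND_MAX - 1.0;
    }

    clock_t start = clock();
    /* Forward elimination with partial pivoting. */
    for (int k = 0; k < N - 1; k++) {
        int piv = k; /* find the row with the largest pivot */
        for (int i = k + 1; i < N; i++)
            if (fabs(a[i][k]) > fabs(a[piv][k])) piv = i;
        for (int j = k; j < N; j++) { /* swap rows k and piv */
            double t = a[k][j]; a[k][j] = a[piv][j]; a[piv][j] = t;
        }
        double tb = b[k]; b[k] = b[piv]; b[piv] = tb;
        for (int i = k + 1; i < N; i++) {
            double m = a[i][k] / a[k][k];
            for (int j = k; j < N; j++) a[i][j] -= m * a[k][j];
            b[i] -= m * b[k];
        }
    }
    /* Back substitution. */
    for (int i = N - 1; i >= 0; i--) {
        x[i] = b[i];
        for (int j = i + 1; j < N; j++) x[i] -= a[i][j] * x[j];
        x[i] /= a[i][i];
    }
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    /* Nominal operation count used by the benchmark. */
    double ops = 2.0 / 3.0 * N * N * N + 2.0 * N * N;
    printf("n = %d: %.6f s, %.1f MFLOPS\n", N, secs, ops / secs / 1e6);
    return 0;
}
```

On modern hardware an n = 100 solve finishes in microseconds, which is precisely why this variant no longer stresses current machines.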
LINPACK 1000 can provide a performance nearer to the machine's limit because, in addition to offering a bigger problem size, a matrix of order 1000, changes in the algorithm are possible. The only constraints are that the relative accuracy can't be reduced and that the number of operations is always counted as $\tfrac{2}{3}n^3 + 2n^2$, with n = 1000.[2]
The previous benchmarks are not suitable for testing parallel computers,[8] and the so-called Linpack's Highly Parallel Computing benchmark, or HPLinpack benchmark, was introduced. In HPLinpack the size n of the problem can be made as large as needed to optimize the performance results of the machine. Once again, $\tfrac{2}{3}n^3 + 2n^2$ is taken as the operation count, independent of the algorithm used. Use of the Strassen algorithm is not allowed because it distorts the real execution rate.[9] The accuracy must be such that the following expression is satisfied:

$$\frac{\lVert Ax - b \rVert_\infty}{\varepsilon \, \lVert A \rVert_\infty \, \lVert x \rVert_\infty \, n} \le O(1),$$

where $\varepsilon$ is the machine precision and $n$ is the size of the problem.[10][11]
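To make the check concrete, the C helper below computes this scaled residual from A, x, and b. It is an illustrative sketch of the test above, not the official HPL verification code:

```c
#include <float.h>
#include <math.h>

/* Scaled residual used by the HPLinpack-style accuracy test:
   ||Ax - b||_inf / (eps * ||A||_inf * ||x||_inf * n).
   a is an n x n matrix in row-major order; for a correct solve
   the returned value should be O(1). */
double scaled_residual(const double *a, const double *x,
                       const double *b, int n) {
    double res_inf = 0.0, a_inf = 0.0, x_inf = 0.0;
    for (int i = 0; i < n; i++) {
        double ax = 0.0, row = 0.0;
        for (int j = 0; j < n; j++) {
            ax  += a[i * n + j] * x[j]; /* (Ax)_i */
            row += fabs(a[i * n + j]); /* row sum for ||A||_inf */
        }
        if (fabs(ax - b[i]) > res_inf) res_inf = fabs(ax - b[i]);
        if (row > a_inf) a_inf = row;
        if (fabs(x[i]) > x_inf) x_inf = fabs(x[i]);
    }
    return res_inf / (DBL_EPSILON * a_inf * x_inf * n);
}
```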
For each computer system, the following quantities are reported:[2]

- Rmax: the performance in GFLOPS for the largest problem run on a machine
- Nmax: the size of the largest problem run on a machine
- N1/2: the size where half the Rmax execution rate is achieved
- Rpeak: the theoretical peak performance for the machine
These results are used to compile the TOP500 list twice a year, with the world's most powerful computers.[1] TOP500 measures these in double-precision floating-point format (FP64). The ratio Rmax/Rpeak is called parallel efficiency or HPL efficiency.[12] It is typically lower the more nodes a system has, due to communication overhead. For example, a 1990s Cray Y-MP achieves about 90% HPL efficiency,[13] while Frontier achieves about 70% in 2023.[14]
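As a worked example, using the approximate June 2023 TOP500 figures for Frontier (the rounded values here are illustrative):

$$\text{HPL efficiency} = \frac{R_{\text{max}}}{R_{\text{peak}}} \approx \frac{1194\ \text{PFLOPS}}{1680\ \text{PFLOPS}} \approx 0.71 = 71\%.$$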
The previous section describes the ground rules for the benchmarks. The actual implementation of the program can diverge, with some examples being available in Fortran,[15] C[16] or Java.[17]
HPL is a portable implementation of HPLinpack written in C, originally as a guideline, but now widely used to provide data for the TOP500 list, though other technologies and packages can be used. HPL generates a linear system of equations of order n and solves it using LU decomposition with partial row pivoting. It requires installed implementations of MPI and either BLAS or VSIPL to run.[18]
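HPL itself is configured via its HPL.dat input file and launched under an MPI runtime. As a hedged, single-process analogue of its numerical kernel, the sketch below solves a dense system with LU decomposition and partial row pivoting through LAPACK's C interface (LAPACKE), assuming a linked BLAS/LAPACK installation; it illustrates the kernel HPL distributes across ranks, not HPL's actual code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <lapacke.h>

/* Solve Ax = b with LU decomposition and partial row pivoting,
   the same numerical kernel HPL spreads over an MPI process grid. */
int main(void) {
    const lapack_int n = 1000;
    double *a = malloc((size_t)n * n * sizeof *a);
    double *b = malloc((size_t)n * sizeof *b);
    lapack_int *ipiv = malloc((size_t)n * sizeof *ipiv);

    srand(1); /* random dense system, entries in [-1, 1] */
    for (lapack_int i = 0; i < n * n; i++) a[i] = 2.0 * rand() / RAND_MAX - 1.0;
    for (lapack_int i = 0; i < n; i++)     b[i] = 2.0 * rand() / RAND_MAX - 1.0;

    /* Factor A = P*L*U, then solve using the factors; b becomes x. */
    lapack_int info = LAPACKE_dgetrf(LAPACK_ROW_MAJOR, n, n, a, n, ipiv);
    if (info == 0)
        info = LAPACKE_dgetrs(LAPACK_ROW_MAJOR, 'N', n, 1, a, n, ipiv, b, 1);
    printf("info = %d, x[0] = %g\n", (int)info, b[0]);

    free(a); free(b); free(ipiv);
    return 0;
}
```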
Coarsely, the algorithm has the following characteristics:[19][20]

- cyclic data distribution in 2D blocks (a sketch of this mapping follows the list)
- LU factorization using look-ahead of depth 1
- recursive panel factorization
- six different panel broadcasting variants
- bandwidth-reducing swap-broadcast algorithm
- backward substitution with look-ahead of depth 1
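As a hedged illustration of the first item, the sketch below shows a common 2D block-cyclic ownership mapping of matrix blocks onto a P × Q process grid (ScaLAPACK-style conventions; HPL's internal details may differ):

```c
#include <stdio.h>

/* Owner of global block (bi, bj) under a 2D block-cyclic
   distribution over a P x Q process grid: block rows are dealt
   cyclically to process rows, block columns to process columns. */
void block_owner(int bi, int bj, int P, int Q, int *p, int *q) {
    *p = bi % P;
    *q = bj % Q;
}

int main(void) {
    int P = 2, Q = 3, p, q;
    /* Print the owning process of each block in a 4 x 6 block grid. */
    for (int bi = 0; bi < 4; bi++) {
        for (int bj = 0; bj < 6; bj++) {
            block_owner(bi, bj, P, Q, &p, &q);
            printf("(%d,%d) ", p, q);
        }
        printf("\n");
    }
    return 0;
}
```

Dealing blocks cyclically in both dimensions keeps every process busy as the factorization shrinks the active trailing submatrix, which is why this distribution balances load better than contiguous partitioning.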
The LINPACK benchmark is said to have succeeded because of the scalability[21] of HPLinpack, the fact that it generates a single number, making the results easily comparable, and the extensive historical database associated with it.[22] However, soon after its release, the LINPACK benchmark was criticized for providing performance levels "generally unobtainable by all but a very few programmers who tediously optimize their code for that machine and that machine alone",[23] because it only tests the resolution of dense linear systems, which are not representative of all the operations usually performed in scientific computing.[24] Jack Dongarra, the main driving force behind the LINPACK benchmarks, said that, while they only emphasize "peak" CPU speed and number of CPUs, not enough stress is given to local bandwidth and the network.[25]
Thom Dunning Jr., director of theNational Center for Supercomputing Applications, had this to say about the LINPACK benchmark: "The Linpack benchmark is one of those interesting phenomena – almost anyone who knows about it will deride its utility. They understand its limitations but it has mindshare because it's the one number we've all bought into over the years."[26]
According to Dongarra, "the organizers of the TOP500 are actively looking to expand the scope of the benchmark reporting" because "it is important to include more performance characteristic and signatures for a given system".[27] One of the possibilities being considered to extend the benchmark for the TOP500 is the HPC Challenge Benchmark suite.[28] With the advent of petascale computers, traversed edges per second have started to emerge as a complementary metric to FLOPS measured by LINPACK. Another such metric is the HPCG benchmark, proposed by Dongarra.[29]
According to Jack Dongarra, the running time required to obtain good performance results with HPLinpack is expected to increase. At a conference held in 2010, he said he expects running times of 2.5 days in "a few years".[30]
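One driver of this growth is that the operation count scales as $\tfrac{2}{3}n^3$ while memory, which bounds the largest feasible $n$, grows only as $n^2$. A rough estimate, with illustrative numbers rather than any specific machine's figures:

$$t \approx \frac{\tfrac{2}{3}n^3}{R_{\text{max}}}, \qquad \text{e.g. } n = 10^8,\ R_{\text{max}} = 10^{18}\ \text{FLOPS} \ \Rightarrow\ t \approx \frac{6.7 \times 10^{23}}{10^{18}}\ \text{s} \approx 6.7 \times 10^{5}\ \text{s} \approx 7.7\ \text{days}.$$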
LINPACK is a benchmark that people often cite because there's such a historical data base of information there, because it's fairly easy to run, it's fairly easy to understand, and it captures in some sense the best and worst of programming.