Petascale computing refers to computing systems capable of performing at least 1 quadrillion (10^15) floating-point operations per second (FLOPS). These systems are often called petaflops systems and represent a significant leap over earlier supercomputers in raw performance, enabling them to handle vast datasets and complex computations.
Floating point operations per second (FLOPS) are one measure of computer performance. FLOPS can be recorded at different levels of precision; however, the standard measure (used by the TOP500 supercomputer list) counts 64-bit (double-precision floating-point format) operations per second using the High Performance LINPACK (HPLinpack) benchmark.[1][2]
The metric typically refers to single computing systems, although it can also be used to measure and compare distributed computing systems. Alternative precision measures exist for the LINPACK benchmarks, but these are not part of the standard metric.[2] HPLinpack may not be a good general measure of supercomputer utility in real-world applications, but it is the common standard for performance measurement.[3][4]
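As a rough illustration of what the 10^15 threshold means, the sketch below estimates a system's theoretical peak double-precision FLOPS from node count, cores per node, clock rate, and floating-point operations per cycle. All values are hypothetical and chosen only to show the arithmetic; they are not taken from any listed system.

```python
PETAFLOPS = 1e15  # 10^15 floating-point operations per second

def peak_flops(nodes, cores_per_node, clock_hz, flops_per_cycle):
    """Theoretical peak (Rpeak-style) double-precision FLOPS estimate."""
    return nodes * cores_per_node * clock_hz * flops_per_cycle

# Hypothetical system: 3,000 nodes, 64 cores each, 2 GHz, 16 FLOPs per cycle per core.
rpeak = peak_flops(nodes=3_000, cores_per_node=64, clock_hz=2.0e9, flops_per_cycle=16)
print(f"{rpeak / PETAFLOPS:.2f} petaFLOPS")  # ≈ 6.14 petaFLOPS
```

Measured HPLinpack performance (Rmax) is typically lower than such a theoretical peak (Rpeak).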
The petaFLOPS barrier was first broken on 16 September 2007 by the distributed computing Folding@home project.[5] The first single petascale system, the Roadrunner, entered operation in 2008.[6] The Roadrunner, built by IBM, had a sustained performance of 1.026 petaFLOPS. The Jaguar became the second computer to break the petaFLOPS milestone later in 2008, and it reached a performance of 1.759 petaFLOPS after a 2009 update.[7]
In 2020, Fugaku became the fastest supercomputer in the world, reaching 415 petaFLOPS in June 2020. Fugaku later achieved an Rmax of 442 petaFLOPS in November of the same year.
By 2022, exascale computing had been reached with the development of Frontier, which surpassed Fugaku with an Rmax of 1.102 exaFLOPS in June 2022.[8]
Modern artificial intelligence (AI) systems require large amounts of computational power to train model parameters. OpenAI employed 25,000 Nvidia A100 GPUs to train GPT-4, using a total of 133 septillion floating-point operations.[9]
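A training-compute total like the one cited above can be approximated as GPU count × sustained per-GPU throughput × training time. The sketch below shows this arithmetic with hypothetical throughput and duration values; they are assumptions for illustration, not the figures behind the cited estimate.

```python
def total_training_flop(num_gpus, sustained_flops_per_gpu, seconds):
    """Rough estimate of total floating-point operations used in a training run."""
    return num_gpus * sustained_flops_per_gpu * seconds

# Hypothetical values: 25,000 GPUs at ~100 teraFLOPS sustained each for ~90 days.
flop = total_training_flop(num_gpus=25_000,
                           sustained_flops_per_gpu=100e12,
                           seconds=90 * 24 * 3600)
print(f"{flop:.2e} FLOP")  # ≈ 1.9e+25 FLOP under these assumptions
```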