Block floating point (BFP) is a method used to provide an arithmetic approaching floating point while using a fixed-point processor. BFP assigns a group of significands (the non-exponent part of the floating-point number) to a single exponent, rather than each significand having its own exponent. BFP can be advantageous for limiting space use in hardware to perform the same functions as floating-point algorithms, by reusing the exponent; some operations over multiple values between blocks can also be done with a reduced amount of computation.[1]
The common exponent is determined by the datum with the largest amplitude in the block. To find the value of the exponent, the number of leading zeros must be counted (count leading zeros); this gives the number of left shifts needed to normalize the data to the dynamic range of the processor used. Some processors can do this themselves, via exponent-detection and normalization instructions.[2][3]
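The following Python sketch illustrates this idea; it is not taken from the cited sources, and the names bfp_encode, bfp_decode, and mantissa_bits are illustrative. The shared exponent is taken from the largest-magnitude value in the block, and every significand is then quantized to the same fixed-point scale.

```python
import math

def bfp_encode(block, mantissa_bits=8):
    """Encode a block of floats as (shared exponent, integer significands).

    Illustrative sketch: the shared exponent is chosen from the
    largest-magnitude element, then every element is quantized to a
    signed integer significand scaled by that single exponent.
    """
    max_mag = max(abs(x) for x in block)
    if max_mag == 0.0:
        return 0, [0] * len(block)
    # max_mag = m * 2**shared_exp with 0.5 <= m < 1
    shared_exp = math.frexp(max_mag)[1]
    # Scale so significands fit in `mantissa_bits` signed bits.
    scale = 2 ** (mantissa_bits - 1) / 2 ** shared_exp
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    sigs = [max(lo, min(hi, round(x * scale))) for x in block]
    return shared_exp, sigs

def bfp_decode(shared_exp, sigs, mantissa_bits=8):
    """Reconstruct approximate floats from the shared exponent and significands."""
    scale = 2 ** shared_exp / 2 ** (mantissa_bits - 1)
    return [s * scale for s in sigs]

# Example: one small block sharing a single exponent.
exp, sigs = bfp_encode([0.75, -0.03125, 0.5, 0.125])
print(exp, sigs)              # shared exponent and integer significands
print(bfp_decode(exp, sigs))  # values recovered up to quantization error
```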
Microscaling (MX) formats are a type of block floating point (BFP) data format specifically designed for AI and machine learning workloads. Very small floating-point numbers (minifloats) are used in machine learning for performance, but like fixed-point numbers they suffer from reduced representable ranges. Using a shared exponent allows for increasing the range of representable values at very little space and performance overhead.[7][8] The MX format, endorsed and standardized by major industry players such as AMD, Arm, Intel, Meta, Microsoft, NVIDIA, and Qualcomm, represents a significant advancement in narrow precision data formats for AI.[9]
The MX format contains a block of k (usually set to 32) elements, each being d bits long. These elements share a scaling factor of w bits, so that the entire block is w + kd bits in size. Standard MX data types include:[10]
Name          | Element data type | d | k  | Scale data type | w | Bits per block
MXFP8 (E5M2)  | FP8 (E5M2)        | 8 | 32 | E8M0            | 8 | 264
MXFP8 (E4M3)  | FP8 (E4M3)        | 8 | 32 | E8M0            | 8 | 264
MXFP6 (E3M2)  | FP6 (E3M2)        | 6 | 32 | E8M0            | 8 | 200
MXFP6 (E2M3)  | FP6 (E2M3)        | 6 | 32 | E8M0            | 8 | 200
MXFP4         | FP4 (E2M1)        | 4 | 32 | E8M0            | 8 | 136
MXINT8        | INT8              | 8 | 32 | E8M0            | 8 | 264
Here E8M0 is effectively the exponent part of a single-precision floating-point number, able to represent powers of two from 2^−127 to 2^127. A single value is reserved for NaN. For descriptions of data types such as FP8-E5M2, see Minifloat § Machine learning.
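As a worked illustration (a sketch, not code from the OCP specification; the function names are hypothetical), the following Python decodes an MXFP4 block: the E8M0 scale is a biased power-of-two exponent, and each FP4 E2M1 element contributes a sign bit, a 2-bit exponent, and a 1-bit mantissa. A block of 32 such elements plus its scale occupies 8 + 32 × 4 = 136 bits, matching the table above.

```python
def decode_e8m0(scale_bits):
    """E8M0 scale: 8 exponent bits, no sign or mantissa bits.

    Value is 2**(e - 127); the all-ones pattern (255) is reserved for NaN.
    """
    if scale_bits == 0xFF:
        return float("nan")
    return 2.0 ** (scale_bits - 127)

def decode_fp4_e2m1(bits):
    """FP4 E2M1 element: 1 sign bit, 2 exponent bits, 1 mantissa bit."""
    sign = -1.0 if (bits >> 3) & 1 else 1.0
    exp = (bits >> 1) & 0b11
    man = bits & 0b1
    if exp == 0:                       # subnormal: no implicit leading 1
        magnitude = man * 0.5          # 0 or 0.5
    else:                              # normal: implicit leading 1, exponent bias 1
        magnitude = (1.0 + 0.5 * man) * 2.0 ** (exp - 1)
    return sign * magnitude

def decode_mxfp4_block(scale_bits, element_bits):
    """Decode an MXFP4 block: 32 FP4-E2M1 elements sharing one E8M0 scale."""
    scale = decode_e8m0(scale_bits)
    return [scale * decode_fp4_e2m1(e) for e in element_bits]

# Example: scale 2**1 applied to elements encoding 1.5, -0.5 and 6.0.
print(decode_mxfp4_block(128, [0b0011, 0b1001, 0b0111] + [0] * 29))
```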
MX formats have been demonstrated to be effective in a variety of AI tasks, including large language models (LLMs), image classification, speech recognition and recommendation systems.[11] For instance, MXFP6 closely matches FP32 for inference tasks after quantization-aware fine-tuning, and MXFP4 can be used for training generative language models with only a minor accuracy penalty.
The MX format has been standardized through the Open Compute Project (OCP) as the Microscaling Formats (MX) Specification v1.0.[9][10] An emulation library has also been published to provide details on the data science approach and selected results of MX in action.[12]
The MXFP4 format groups 32 4-bit minifloats with very low dynamic range together. In an effort to reduce quantization artifacts, Nvidia has introduced NVFP4, which instead groups only 16 FP4-E2M1 numbers in a block and changes the scaling factor to E4M3 for more precision. To regain dynamic range, the many blocks in a tensor are then subject to a shared FP32 (E8M23) scaling factor, giving a two-level setup. No M0 numbers are used for scaling; as a result, all scaling requires an actual multiplication instead of a bit shift or simple manipulation of the exponent part of a floating-point value.[13]
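The two-level scheme can be summarized in a short sketch, an illustration of the description above rather than NVIDIA's implementation; the name dequantize_nvfp4 is hypothetical, and the block and tensor scales are taken as already-decoded floats.

```python
def dequantize_nvfp4(tensor_scale_fp32, block_scales_e4m3, blocks_fp4):
    """Two-level NVFP4 dequantization sketch.

    tensor_scale_fp32: one FP32 scale shared by the whole tensor.
    block_scales_e4m3: one (already decoded) E4M3 scale per 16-element block.
    blocks_fp4:        per-block lists of (already decoded) FP4-E2M1 values.
    Because neither scale is a power-of-two (M0) format, each step is a real
    multiplication rather than an exponent adjustment.
    """
    out = []
    for block_scale, block in zip(block_scales_e4m3, blocks_fp4):
        out.extend(tensor_scale_fp32 * block_scale * x for x in block)
    return out

# Example: two 16-element blocks with different local scales.
values = dequantize_nvfp4(
    0.013,                     # FP32 tensor-level scale
    [1.5, 0.875],              # E4M3 block-level scales (decoded to floats)
    [[6.0] * 16, [0.5] * 16],  # FP4 E2M1 element values (decoded to floats)
)
print(values[:2], values[16:18])
```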
Hardware support for BFP exists in two layers: support for the underlying data type of the members (integer fixed-point or minifloats) and faster implementation of the scaling operation.
d-Matrix Jayhawk II handles BFP12, BFP16, and SBFP12.[14] These data types are fixed-point-based BFP: BFP12 refers to having a shared 8-bit exponent with a group of UINT4 elements, BFP16 refers to having a shared 8-bit exponent with a group of UINT8 elements, and SBFP12 refers to having a shared 8-bit exponent with a group of signed INT4 elements.[15]
Tenstorrent Grayskull e75 and e150 as well as Wormhole n150 and n300 support what the manufacturer calls BFP8, BFP4 and BFP2.[16] The members have no exponent of their own; in other words, they are scaled fixed-point.[17]
Parallel handling of minifloat numbers is more complex to emulate in software than handling of packed integers. As a result, hardware support for the underlying minifloat goes a long way toward offering BFP-with-minifloat support.
x86 processors implementing the AVX10.2 extension set support the OCP-FP8 formats E5M2 and E4M3 in packed format. The extension does not add accelerated block scaling, which can instead be done using existing individual operations.[21]
AMD Instinct GPUs support OCP-FP8 and packed MXFP8 (E5M2, E4M3) since CDNA 3. CDNA 4 adds support for MXFP4 and MXFP6. CDNA 4 also supports MXINT8, a format with fixed-point elements.
The tensor cores of Nvidia GPUs support FP8 since the Hopper microarchitecture. FP4 and FP6 support was added in Blackwell.[22] The 32-wide size of MX formats is well suited to the structure of tensor cores, which also provide accelerated hardware scaling.[23] Blackwell also provides accelerated scaling for NVFP4.[13]
Intel Gaudi 2 and later accelerators also support FP8.[24]
AMD Versal AI Edge Series Gen 2 supports MX6 and MX9 data types. These data types have two-tiered sharing of exponents, a compromise between standard MX and fixed-point-based BFP.[25] The hardware also supports fast INT8 operations for use in traditional BFP.[26]
^ Overton, Michael L. (2001). Numerical Computing with IEEE Floating Point Arithmetic - Including One Theorem, One Rule of Thumb and One Hundred and One Exercises (1st ed.). Society for Industrial and Applied Mathematics (SIAM). ISBN 0-89871-482-6.
^ Rouhani, Bita Darvish; Zhao, Ritchie; More, Ankit; Hall, Mathew; Khodamoradi, Alireza; Deng, Summer; Choudhary, Dhruv; Cornea, Marius; Dellinger, Eric (2023-10-19). "Microscaling Data Formats for Deep Learning". arXiv:2310.10537 [cs.LG].