Hybrid Memory Cube (HMC) is a high-performance computer random-access memory (RAM) interface for through-silicon via (TSV)-based stacked DRAM memory. HMC competes with the incompatible rival interface High Bandwidth Memory (HBM).
Hybrid Memory Cube was co-developed by Samsung Electronics and Micron Technology in 2011,[1] and announced by Micron in September 2011.[2] It promised a 15 times speed improvement over DDR3.[3] The Hybrid Memory Cube Consortium (HMCC) is backed by several major technology companies including Samsung, Micron Technology, Open-Silicon, ARM, HP (since withdrawn), Microsoft (since withdrawn), Altera (acquired by Intel in late 2015), and Xilinx.[4][5] Micron, while continuing to support the HMCC, discontinued the HMC product in 2018 after it failed to achieve market adoption.[6]
HMC combines through-silicon vias (TSV) and microbumps to connect multiple (currently 4 to 8) dies of memory cell arrays on top of each other.[7] The memory controller is integrated as a separate die.[2]
HMC uses standard DRAM cells but has more data banks than classic DRAM memory of the same size. The HMC interface is incompatible with current DDRn (DDR2 or DDR3) and competing High Bandwidth Memory implementations.[8]
HMC technology won the Best New Technology award from The Linley Group (publisher of Microprocessor Report magazine) in 2011.[9][10]
The first public specification, HMC 1.0, was published in April 2013.[11] According to the specification, HMC uses 16-lane or 8-lane (half-size) full-duplex differential serial links, with each lane running a 10, 12.5 or 15 Gbit/s SerDes.[12] Each HMC package is called a cube, and cubes can be chained in a network of up to 8 cubes using cube-to-cube links, with some cubes using their links as pass-through links.[13] A typical cube package with 4 links has 896 BGA pins and measures 31×31×3.8 millimeters.[14]
With 10 Gbit/s signalling, the raw bandwidth of a single 16-lane link totals 40 GB/s (20 GB/s transmit and 20 GB/s receive); cubes with 4 and 8 links are planned, though the HMC 1.0 specification limits link speed to 10 Gbit/s in the 8-link case. A 4-link cube can therefore reach 240 GB/s of memory bandwidth (120 GB/s each direction using 15 Gbit/s SerDes), while an 8-link cube can reach 320 GB/s (160 GB/s each direction using 10 Gbit/s SerDes).[15] Effective memory bandwidth utilization varies from 33% to 50% for the smallest packets of 32 bytes, and from 45% to 85% for 128-byte packets.[7]
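The raw figures above follow directly from the lane counts and SerDes rates quoted. A minimal sketch of that arithmetic (the helper name is illustrative, not part of any HMC software interface):

```python
def raw_link_bandwidth_gbytes(lanes: int, gbit_per_lane: float) -> float:
    """Raw full-duplex bandwidth of one HMC link in GB/s.

    Each lane carries gbit_per_lane Gbit/s in each direction, so one link moves
    lanes * gbit_per_lane Gbit/s per direction; dividing by 8 converts bits to
    bytes, and doubling accounts for transmit plus receive.
    """
    per_direction_gbytes = lanes * gbit_per_lane / 8
    return 2 * per_direction_gbytes

# Figures quoted for HMC 1.0:
print(raw_link_bandwidth_gbytes(16, 10))      # 40.0 GB/s for a single link
print(4 * raw_link_bandwidth_gbytes(16, 15))  # 240.0 GB/s for a 4-link cube
print(8 * raw_link_bandwidth_gbytes(16, 10))  # 320.0 GB/s for an 8-link cube
```

These are raw link rates; as noted above, effective utilization is lower because each packet carries protocol overhead, which weighs more heavily on small 32-byte transfers than on 128-byte ones.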
As reported at the Hot Chips 23 conference in 2011, the first generation of HMC demonstration cubes, with four 50 nm DRAM memory dies and one 90 nm logic die, had a total capacity of 512 MB, measured 27×27 mm, consumed 11 W, and was powered at 1.2 V.[7]
Engineering samples of second-generation HMC memory chips were shipped by Micron in September 2013.[3] Samples of the 2 GB HMC (a stack of four 4 Gbit memory dies) come in a 31×31 mm package with 4 HMC links. Other samples from 2013 have only two HMC links and a smaller 16×19.5 mm package.[16]
The second version of the HMC specification was published by the HMCC on 18 November 2014.[17] HMC2 offers a range of SerDes rates from 12.5 Gbit/s to 30 Gbit/s, yielding an aggregate link bandwidth of 480 GB/s (240 GB/s each direction), though promising only a total DRAM bandwidth of 320 GB/s.[18] A package may have either 2 or 4 links (down from 4 or 8 in HMC1), and a quarter-width option using 4 lanes is added.
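The 480 GB/s headline figure is consistent with a 4-link package running 16-lane links at the top 30 Gbit/s rate; that link/lane breakdown is inferred from the numbers quoted rather than stated above, so treat the following as a rough sketch of the arithmetic only:

```python
# Assumed configuration: 4 links x 16 lanes x 30 Gbit/s per lane per direction.
per_direction_gbytes = 4 * 16 * 30 / 8        # 240.0 GB/s in each direction
aggregate_gbytes = 2 * per_direction_gbytes   # 480.0 GB/s raw link bandwidth
print(per_direction_gbytes, aggregate_gbytes)
```

The gap between the 480 GB/s raw link bandwidth and the promised 320 GB/s of DRAM bandwidth reflects the fact that the serial links can outrun what the stacked DRAM itself can deliver.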
The first processor to use HMCs was the Fujitsu SPARC64 XIfx,[19] which is used in the Fujitsu PRIMEHPC FX100 supercomputer introduced in 2015.
JEDEC's Wide I/O and Wide I/O 2 are seen as the mobile computing counterparts to the desktop/server-oriented HMC in that both involve 3D die stacks.[20]
In August 2018, Micron announced a move away from HMC to pursue competing high-performance memory technologies such as GDDR6 and HBM.[21]