SK Hynix was the first memory vendor to start talking about HBM3 and the first company to complete development of memory under that spec. Today the company said that it has begun mass production of HBM3 memory; these DRAMs will be used by Nvidia for its H100 compute GPUs and DGX H100 systems, which will ship in the third quarter.
SK Hynix’s HBM3 known good stack dies (KGSDs) offer peak memory bandwidth of 819 GB/s, which means they support data transfer rates of up to 6.4 GT/s. As for capacity, each stack packs eight 2GB DRAM devices for a total of 16GB per package. SK Hynix also has 12-Hi 24GB KGSDs, but since Nvidia appears to be the company’s primary customer for HBM3, the company is kicking off production with 8-Hi stacks.
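As a quick back-of-the-envelope check of those figures, the sketch below relates the quoted per-stack bandwidth, per-pin data rate, and capacities, assuming the standard 1024-bit-wide HBM stack interface (the bus width is an assumption based on the HBM spec, not a figure from SK Hynix's announcement):

```python
# Rough sanity check of the HBM3 stack figures quoted above,
# assuming a standard 1024-bit-wide interface per stack.

BUS_WIDTH_BITS = 1024   # data pins per HBM3 stack (assumed, per the HBM spec)
PIN_RATE_GTPS = 6.4     # per-pin data rate in GT/s

# Peak bandwidth per stack: pins * per-pin rate / 8 bits per byte
peak_bandwidth_gbps = BUS_WIDTH_BITS * PIN_RATE_GTPS / 8
print(f"Peak bandwidth per stack: {peak_bandwidth_gbps:.1f} GB/s")  # ~819.2 GB/s

# Capacity per stack: stacked DRAM dies * density per die
GB_PER_DIE = 2          # 2GB (16Gb) DRAM devices
print(f"8-Hi stack capacity:  {8 * GB_PER_DIE} GB")    # 16 GB
print(f"12-Hi stack capacity: {12 * GB_PER_DIE} GB")   # 24 GB
```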
The start of HBM3 mass production is good news for SK Hynix’s bottom line; for a while, at least, the company will be the only supplier of this memory type and will be able to charge a hefty premium for these devices. What’s important for SK Hynix’s public image is that it is beginning mass production of HBM3 ahead of its arch-rival Samsung.
Eventually, SK Hynix and other memory makers will offer HBM3 packages with up to 16 32Gb DRAM devices and capacities of 64GB per KGSD, but that is a longer-term prospect.
Nvidia’s H100 compute GPU comes equipped with 96GB of HBM3 DRAM, though because of ECC support and some other factors, users can access 80GB of ECC-enabled HBM3 memory connected using a 5120-bit interface. To win the contract with Nvidia, SK Hynix worked closely with the company to ensure proper interoperability between the processor and the memory devices.
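The H100 numbers line up with the 16GB stacks described earlier; the short sketch below shows one configuration consistent with them. The six-stacks-with-one-disabled layout is an assumption that matches the quoted capacities and bus width, not something stated by Nvidia or SK Hynix here:

```python
# How the H100 figures in the article can be reconciled with 16GB HBM3 stacks.
# The 6-physical / 5-enabled stack split is an assumption consistent with the
# quoted 96GB / 80GB capacities and 5120-bit bus, not an official breakdown.

STACK_CAPACITY_GB = 16       # 8-Hi HBM3 KGSD
STACK_BUS_WIDTH_BITS = 1024  # interface width per HBM stack (assumed, per spec)

stacks_on_package = 6        # assumed physical stacks on the package
stacks_enabled = 5           # assumed active stacks

print(f"On-package DRAM:  {stacks_on_package * STACK_CAPACITY_GB} GB")    # 96 GB
print(f"Accessible DRAM:  {stacks_enabled * STACK_CAPACITY_GB} GB")       # 80 GB
print(f"Memory interface: {stacks_enabled * STACK_BUS_WIDTH_BITS}-bit")   # 5120-bit
```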
“We aim to become a solution provider that deeply understands and addresses our customers’ needs through continuous open collaboration,” said Kevin (Jongwon) Noh, president and chief marketing officer at SK Hynix.
But Nvidia will not be the only company to use HBM3 in the foreseeable future. SiFive taped out its first HBM3-supporting system-on-chip on TSMC’s N5 node about a year ago, so the company can offer related technology to its clients. Furthermore, Rambus and Synopsys have both offered silicon-proven HBM3 controllers and physical interfaces for quite some time and have landed numerous customers, so expect a variety of HBM3-supporting SoCs (primarily for AI and supercomputing applications) to arrive in the coming quarters.