
AMD unveils exascale data-center accelerator at CES


The Consumer Electronics Show (CES) is perhaps the last place you'd expect an enterprise product to debut, but AMD unveiled a new server accelerator among the slew of consumer CPUs and GPUs it launched at the Las Vegas show.

AMD took the wraps off its Instinct MI300 accelerator, and it's a doozy.

The accelerated processing unit (APU) is a combination of 13 chiplets, including CPU cores, GPU cores, and high-bandwidth memory (HBM). Tallied together, AMD's Instinct MI300 accelerator comes in at 146 billion transistors. For comparison, Intel's ambitious Ponte Vecchio processor will be around 100 billion transistors, and Nvidia's Hopper H100 GPU is a mere 80 billion transistors.

The Instinct MI300 has 24 Zen 4 CPU cores and six CDNA chiplets. CDNA is the data-center version of AMD's RDNA consumer graphics technology. AMD has not said how many GPU cores there are per chiplet. Rounding out the Instinct MI300 is 128GB of HBM3 memory stacked in a 3D design.

The 3D design allows for massive data throughput between the CPU, GPU, and memory dies. Data doesn't have to travel from the CPU or GPU out to DRAM; it only goes to the HBM stack, drastically reducing latency. It also allows the CPU and GPU to work on the same data in memory simultaneously, which speeds up processing.

AMD CEO Lisa Su announced the chip at the end of her 90-minute CES keynote, saying the MI300 is "the first chip that brings together a CPU, GPU, and memory into a single integrated design. What this allows us to do is share system resources for the memory and IO, and it results in a significant increase in performance and efficiency as well as [being] much easier to program."

Su said the MI300 delivers eight times the AI performance and five times the performance per watt of the Instinct MI250. She mentioned the much-hyped AI chatbot ChatGPT and noted that it takes months to train the models; the MI300 will cut training time from months to weeks, which could save millions of dollars in electricity, Su said.

Mind you, AMD's MI250 is a formidable piece of silicon, used in the first exascale supercomputer, Frontier, at Oak Ridge National Laboratory.

AMD's MI300 chip is similar to what Intel is doing with Falcon Shores, due in 2024, and what Nvidia is doing with its Grace Hopper Superchip, due later this year. Su said the chip is in the labs now and sampling to select customers, with a launch expected in the second half of the year.

New AI accelerator on tap from AMD

The Instinct isn't the only enterprise announcement AMD made at CES. Su also introduced the Alveo V70 AI inference accelerator. Alveo is part of the Xilinx FPGA line AMD acquired last year, and it is built with AMD's XDNA AI engine technology. It can deliver 400 million AI operations per second across a variety of AI models, including video analytics and customer recommendation engines, according to AMD.

Su said that in video analytics, the Alveo V70 delivers 70% more street coverage for smart-city applications, 72% more hospital bed coverage for patient monitoring, and 80% more checkout-lane coverage in a smart retail store than the competition, though she didn't say what the competition is.

All of this fits within a 75-watt power envelope and a small form factor. AMD is taking pre-orders for the V70 cards today, with availability expected this spring.

Copyright © 2023 IDG Communications, Inc.
