It’s that time of year again: Hot Chips is almost upon us. Taking place as a virtual event on August 21–23, the conference will once again present the very latest in microprocessor architectures and system innovations.
As EE Times’ AI reporter, I’ll of course be on the lookout for new and interesting AI chips. As in recent years, this year’s program has a clear focus on AI and accelerated computing, but there are also sessions on networking chips, integration technologies, and more. The chips presented will run the gamut from wafer-scale designs to multi-die high-performance computing GPUs to mobile phone processors.
The first session on day 1 will host the biggest chip companies in the world as they present the biggest GPU chips in the world. Nvidia is up first to present its flagship Hopper GPU, AMD will present the MI200, and Intel will present Ponte Vecchio. Presenting these one after another highlights the contrast in their form factors: Hopper is a monolithic die (plus HBM), the MI200 has two enormous compute chiplets, and Ponte Vecchio has dozens.
Alongside the big three is a surprise entry in the at-scale GPU class: Biren. The Chinese general-purpose graphics processing unit (GPGPU) maker, founded in 2019, recently lit up its first-gen 7-nm GPGPU, the BR100. All we know so far is that the company uses chiplets to build the GPGPU with “the biggest computing power in China,” according to its website. Biren’s chip has been hailed as a breakthrough for the domestic IC industry, as it “directly benchmarks against the latest flagships recently launched by international manufacturers.” Hopefully, the company’s Hot Chips presentation will reveal whether this really is the case.
The first machine learning processor session is on day 2. We’ll hear from Groq’s chief architect on the startup’s inference accelerator for the cloud. Cerebras will also present a deep dive on the hardware–software co-design for its second-gen wafer-scale engine.
There will also be two presentations from Tesla in this category, both on its forthcoming AI supercomputer, Dojo. Dojo has been presented as “the first exascale AI supercomputer” (1.1 EFLOPS for BF16/CFP8); it uses the company’s specially designed Tesla D1 ASIC in modules the company calls Training Tiles.
Data center AI chip company Untether will present its brand-new second-gen inference architecture, called Boqueria. We don’t know the details yet, but we do know the chip has at least 1,000 RISC-V cores (will it take Esperanto’s crown as the biggest commercial RISC-V design?) and that it relies on an at-memory compute architecture similar to the first generation’s.
AI folks may also want to look out for the tutorial session on Aug. 21 on compiling for heterogeneous systems with MLIR.
The other tutorial session is on the CPU/accelerator/memory interconnect standard Compute Express Link (CXL). The CXL consortium just announced the third version of its technology, which looks set to become the industry standard now that previously competing standards have thrown their weight behind CXL.
Elsewhere in the program, we’ll hear from Lightmatter about its Passage device, a wafer-scale programmable photonic communication substrate. Ranovus will present on its monolithic integration technology for photonic and electronic dies.
I’ll also be watching for Nvidia’s presentation on its Grace CPU, a presentation from Yale University on a processing fabric for brain–computer interfaces, and keynotes from Intel’s Pat Gelsinger and Tesla Motors’ Ganesh Venkataramanan.
The advance program for Hot Chips 34 can be found here.