
Intel announces 144-core Xeon processor


Intel has announced a new processor with 144 cores designed to handle light data-center tasks in a power-efficient way.

Called Sierra Forest, the Xeon processor is part of Intel's E-Core (Efficiency Core) lineup, which forgoes advanced features such as AVX-512 that require more powerful cores. AVX-512 is Intel Advanced Vector Extensions 512, "a set of new instructions that can accelerate performance for workloads and usages such as scientific simulations, financial analytics, artificial intelligence (AI)/deep learning, 3D modeling and analysis, image and audio/video processing, cryptography and data compression," according to Intel.
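
For a sense of what AVX-512 offers (and what the E-Core line trades away), here is a minimal sketch in C++ using the AVX-512F intrinsics from <immintrin.h> to add two float arrays 16 lanes at a time. The function name and the assumption that the length is a multiple of 16 are purely illustrative.

#include <immintrin.h>  // AVX-512 intrinsics; compile with -mavx512f

// Illustrative only: element-wise add of two float arrays,
// processing 16 floats per iteration. Assumes n is a multiple of 16.
void add_arrays_avx512(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 16) {
        __m512 va   = _mm512_loadu_ps(a + i);    // load 16 floats from a
        __m512 vb   = _mm512_loadu_ps(b + i);    // load 16 floats from b
        __m512 vsum = _mm512_add_ps(va, vb);     // one 512-bit-wide add
        _mm512_storeu_ps(out + i, vsum);         // store 16 results
    }
}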

Sierra Forest signals a shift for Intel, splitting its data-center product line into two branches: the E-Core and the P-Core (Performance Core), the latter being the traditional Xeon data-center design built around high-performance cores.

Sierra Forest's 144 cores play out Intel's belief that x86 CPU revenue will follow core trends more closely than socket trends in the coming years, said Sandra Rivera, executive vice president and general manager of the data center and AI group at Intel, speaking at a briefing for data-center and AI investors. She said Intel sees a market opportunity of more than $110 billion for its data-center and AI silicon business by 2027.

In a way, Sierra Forest is not unlike what Ampere is doing with its Altra processors and AMD is doing with its Bergamo line, with lots of small, efficient cores for simpler workloads. Like Ampere, Intel is targeting the cloud, where numerous virtual machines perform non-intensive tasks such as running containers.

Intel plans to launch Sierra Forest in the first half of 2024.

Intel also announced Sierra Forest's successor, Clearwater Forest. It didn't go into details beyond a 2025 release timeframe and the fact that the chip will be built on the 18A process. This will be the first Xeon chip on the 18A process, which is roughly 1.8 nanometers, meaning Intel is on track to deliver on the roadmap set down by CEO Pat Gelsinger in 2021.

Emerald Rapids and Granite Rapids Xeons on schedule

Intel's latest Xeon, Sapphire Rapids, launched in January, and Q4 2023 is already set as the release date for its successor, Emerald Rapids. It will offer faster performance, better power efficiency, and more cores than Sapphire Rapids, and will be socket-compatible with it. That means faster validation by OEM partners building servers, since they can use the existing socket.

After that comes Granite Rapids in 2024. During the briefing, Rivera demoed a dual-socket server running a pre-release version of Granite Rapids, with a remarkable 1.5 TB/s of DDR5 memory bandwidth. For perspective, Nvidia's Grace CPU superchip has 960 GB/s, and AMD's Genoa generation of Epyc processors has a theoretical peak of 920 GB/s.

The demo featured, for the first time, a new type of memory Intel developed with SK Hynix called DDR5-8800 Multiplexer Combined Rank (MCR) DRAM. This memory is bandwidth-optimized and is much faster than traditional DRAM. MCR starts at 8,000 megatransfers per second (MT/s), well above the 6,400 MT/s of DDR5 and 3,200 MT/s of DDR4.
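
As a back-of-the-envelope check on those figures, peak per-channel bandwidth is simply the transfer rate multiplied by the bus width. The short C++ snippet below assumes a standard 64-bit (8-byte) DDR5 channel and, purely for illustration, a dual-socket platform with 12 channels per socket; real sustained numbers will be lower than the theoretical peak.

#include <cstdio>

int main() {
    // DDR5-8800 MCR DRAM: 8,800 megatransfers per second.
    const double transfers_per_s   = 8800e6;
    const double bytes_per_transfer = 8.0;  // standard 64-bit DDR5 channel (assumption)

    const double per_channel_gbps = transfers_per_s * bytes_per_transfer / 1e9;
    std::printf("Per-channel peak: %.1f GB/s\n", per_channel_gbps);  // ~70.4 GB/s

    // Illustrative scaling: 12 channels per socket x 2 sockets gives a
    // theoretical peak in the same ballpark as the demo's 1.5 TB/s.
    std::printf("12 channels x 2 sockets: %.2f TB/s\n",
                per_channel_gbps * 12 * 2 / 1000.0);  // ~1.69 TB/s
    return 0;
}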

Intel also discussed non-x86 parts, such as FPGAs, GPUs, and purpose-built accelerators. Intel said it would launch 15 new FPGAs in 2023, the most ever in a single year. It didn't go into detail on how the FPGAs will be positioned in the market.

Is Intel competing with CUDA?

One of the key advantages Nvidia has had is its GPU programming language, CUDA, which lets developers program directly to the GPU rather than going through libraries. AMD and Intel have had no alternative so far, but it appears Intel is working on one.

At the briefing, Greg Lavender, Intel's chief technology officer and general manager of the software and advanced technology group, laid out his software vision for the company. "One of my priorities is to drive a holistic and end-to-end systems-level approach to AI software at Intel. We have the accelerated heterogeneous hardware ready today to meet customer needs. The key to unlocking that value in the hardware is driving scale through software," he said.

To achieve "the democratization of AI," Intel is developing an open AI software ecosystem, he said, upstreaming software optimizations to AI frameworks such as PyTorch and TensorFlow and to machine-learning frameworks to promote programmability, portability, and ecosystem adoption.

In May 2022, Intel released an open-source toolkit called SYCLomatic to help developers more easily migrate their code from CUDA to its Data Parallel C++ (DPC++) for Intel platforms. Lavender said the tool is typically able to migrate 90% of CUDA source code automatically to C++ source code, leaving very little for programmers to tune manually.
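
To give a flavor of what the Data Parallel C++ (SYCL) side of such a migration looks like, here is a minimal, hand-written sketch (not actual SYCLomatic output) that scales an array on whatever device the default selector picks; the buffer size and scale factor are made up.

#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> data(N, 1.0f);

    sycl::queue q;  // default device: a GPU if available, otherwise the CPU
    {
        sycl::buffer<float, 1> buf(data.data(), sycl::range<1>(N));
        // Comparable in spirit to launching a simple CUDA kernel:
        // each work-item scales one element of the array.
        q.submit([&](sycl::handler& h) {
            sycl::accessor acc(buf, h, sycl::read_write);
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                acc[i] *= 2.0f;
            });
        });
    }  // buffer goes out of scope, results are copied back into data

    std::printf("data[0] = %f\n", data[0]);  // expect 2.0
    return 0;
}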

Copyright © 2023 IDG Communications, Inc.
