Xeon Platinum 9200 series

The Xeon Platinum 9200 series is Intel’s upcoming line of CPUs, part of the Cascade Lake generation of Xeon Scalable data centre processors.

The chip maker unveiled the series at its Data-Centric Innovation event in San Francisco on April 3.

It is the second generation of Xeon Scalable processors. The series packs up to 56 cores and 12 memory channels per chip, and launches alongside the first Optane DC persistent memory DIMM modules. It also arrives with several new data centre SSDs, Ethernet controllers, Xeon D processors, and Agilex 10nm FPGAs.

The release date is sometime in late 2019.

Xeon Platinum 9200 series. Source: TechSpot.

Xeon Platinum 9200 scalable processor lineup

Intel created the new series to improve storage, data movement, and data processing from the edge to the data centre. Hence, the new 9200-series processors come with up to 56 cores and 112 threads.

These 9200-series chips pack a dual-die MCM (Multi-Chip Module) design, meaning two dies come in a single package.

Intel claims the Xeon Platinum 9200 series is its highest-performance CPU line for HPC, AI, and IaaS workloads.

The series also has the most memory channels, giving it access to the highest memory bandwidth of any data centre processor.

Model                 Cores / Threads   Base / Boost Freq. (GHz)   L3 Cache   TDP
Xeon Platinum 9282    56 / 112          2.6 / 3.8                  77 MB      400W
Xeon Platinum 9242    48 / 96           2.3 / 3.8                  71.5 MB    350W
Xeon Platinum 9222    32 / 64           2.3 / 3.7                  71.5 MB    250W
Xeon Platinum 9221    32 / 64           2.1 / 3.7                  71.5 MB    250W

 

Xeon Platinum 9200 series. Source: Technopedia.

The 9200 series comes in 32-, 48-, and 56-core models.

Intel states the 400W model requires water cooling, while the 250W and 350W models can use an air cooling system.

These chips are not compatible with previous-generation sockets. Instead, the Xeon Platinum 9200 series comes in a BGA (Ball Grid Array) package soldered to the host board.

Xeon Platinum 9200 chips expose up to 40 PCIe 3.0 lanes per chip, for a total of 80 lanes in a dual-socket server. Internally, each die packs 64 PCIe lanes, plus additional lanes for the UPI (Ultra-Path Interconnect) links that tie together the two dies within the processor.
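The lane counts above can be tallied in a quick sketch (the constants come from the figures quoted in this article; the variable names are illustrative):

```python
# Lane arithmetic for the 9200 series (a sketch; values from the article).
LANES_EXPOSED_PER_CHIP = 40   # PCIe 3.0 lanes the package exposes
LANES_PER_DIE = 64            # raw PCIe lanes on each die
DIES_PER_CHIP = 2             # dual-die MCM
SOCKETS = 2                   # dual-socket server

total_exposed = LANES_EXPOSED_PER_CHIP * SOCKETS
raw_lanes_per_chip = LANES_PER_DIE * DIES_PER_CHIP

print(total_exposed)        # 80 usable PCIe 3.0 lanes in a dual-socket server
print(raw_lanes_per_chip)   # 128 raw die lanes per chip, of which 40 are exposed
```

The gap between the 128 raw die lanes and the 40 exposed lanes reflects that not all die I/O is routed out of the package.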

Optane DC persistent memory DIMMs

This launch spans a broad spectrum of products that take advantage of Intel’s overwhelming presence in the world’s data centres. The chip manufacturer currently occupies about 95% of the world’s server sockets.

Next Intel Xeon scalable processor with Cascade Lake architecture. Source: Wccftech.

This technology is a springboard for new opportunities in other markets. For example, it could be a new assault on the memory market with the Optane DC persistent memory DIMMs.

The long-awaited DIMMs open the way for Intel to become a leader in memory and disrupt the market hierarchy. They also serve as a critical component that can help the company compete with, and even win against, the upcoming AMD 7nm EPYC Rome processors.

Current DDR4 DIMMs top out at 128 GB, with 64 GB being the most common capacity. Optane improves on this with a minimum module size of 128 GB and capacities of up to 512 GB per DIMM.

You could install up to 6 TB of Optane DIMM modules in a dual-socket system. Note that any system using Optane as memory needs at least one DDR4 DIMM as well. Customers can, for example, pair one 128 GB DDR4 module with one 512 GB Optane module; such a combination gives 640 GB of memory across two DIMM slots.
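A quick sanity check on the capacities quoted above (values come from the article; the module layout is an assumption for illustration):

```python
# Capacity arithmetic for Optane DC persistent memory (a sketch).
OPTANE_DIMM_MAX_GB = 512   # largest Optane DC persistent-memory module
DDR4_DIMM_GB = 128         # DDR4 module used in the mixed pairing

# 6 TB of Optane in a dual-socket system corresponds to twelve 512 GB modules:
optane_modules = (6 * 1024) // OPTANE_DIMM_MAX_GB
print(optane_modules)   # 12

# Pairing one 128 GB DDR4 DIMM with one 512 GB Optane DIMM:
combined_gb = DDR4_DIMM_GB + OPTANE_DIMM_MAX_GB
print(combined_gb)      # 640 GB across two DIMM slots
```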
