Intel has released new performance numbers for its next-generation Ice Lake-SP Xeon Platinum CPUs, comparing them against AMD’s 2nd Generation EPYC Rome CPUs. According to Intel, Ice Lake Xeon CPUs with their updated core architecture will deliver an 18% IPC jump over previous-gen Cascade Lake Xeon CPUs, allowing them to be competitive against AMD’s high-core-count CPU offerings.
Intel Teases 32 Core Ice Lake-SP Xeon CPU Beating A 64 Core AMD EPYC Rome CPU But There’s A Catch
Within the SC20 presentation, Intel reassured its partners and customers that Ice Lake-SP Xeon CPUs (the 3rd Gen Scalable Processor family) are on track for a volume ramp in Q1 2021, followed by a formal launch sometime around mid-2021. The Intel Ice Lake-SP generation of Xeon CPUs will utilize a 10nm process node along with a brand-new microarchitecture and a new platform that supports increased memory bandwidth.
We know that Ice Lake-SP Xeon CPUs will use the Sunny Cove core architecture, which offers an 18% IPC uplift over Skylake. Based on that, Intel says it is looking to deliver better performance per core, more memory channels for increased bandwidth, full PCIe Gen 4.0 support, and up to 6 TB of memory per socket (with Intel Optane PMem).
As for the performance benchmarks versus AMD’s 64-core EPYC 7742 CPU, Intel claims that its 32-core Ice Lake-SP Xeon CPU can deliver up to 30% faster performance in key life sciences and FSI workloads. The performance was measured in NAMD STMV, Monte Carlo, and LAMMPS. The Intel Xeon Ice Lake-SP CPU was configured with 32 cores and 64 threads per socket. The actual run used two Ice Lake-SP Xeon CPUs for a total of 64 cores and 128 threads versus two AMD EPYC 7742 Rome CPUs with a total of 128 cores and 256 threads.
The Intel platform was running at clocks of 2.2 GHz and had a total of 256 GB of DDR4-3200 memory, while the AMD EPYC platform was also configured at its stock speed of 2.25 GHz with 256 GB of DDR4-3200 memory. On paper that looks like a fair comparison, but those 20-30% performance uplifts over the 64-core AMD EPYC Rome CPU are mainly derived by comparing AVX-512 to non-AVX-512 numbers. All three workloads reported here make use of Intel’s AVX-512 instructions, which grants them a big gain. To be fair, Intel does perform well in AVX-512 workloads, beating a chip with twice the number of cores built on a more advanced process node and architecture.
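The AVX-512 angle is easier to see with some back-of-envelope arithmetic. The sketch below compares theoretical peak double-precision throughput of the two dual-socket configs from the benchmark, assuming two 512-bit FMA units per Sunny Cove core and two 256-bit FMA units per Zen 2 core (figures from public architecture documentation, not from Intel's slides):

```python
# Rough peak double-precision throughput for the two dual-socket test
# systems. Illustrates why a 32-core AVX-512 part can trade blows with
# a 64-core AVX2 part in vector-heavy HPC code.

def peak_dp_gflops(cores, ghz, fma_units, vector_bits):
    doubles_per_vector = vector_bits // 64            # 64-bit doubles per SIMD register
    flops_per_cycle = cores * fma_units * doubles_per_vector * 2  # FMA = 2 FLOPs
    return flops_per_cycle * ghz

# 2 x 32C Ice Lake-SP @ 2.2 GHz vs 2 x 64C EPYC 7742 @ 2.25 GHz
icelake = peak_dp_gflops(cores=64,  ghz=2.2,  fma_units=2, vector_bits=512)
rome    = peak_dp_gflops(cores=128, ghz=2.25, fma_units=2, vector_bits=256)

print(f"Ice Lake-SP (AVX-512): {icelake:.0f} GFLOPS")  # ~4506 GFLOPS
print(f"EPYC Rome   (AVX2):    {rome:.0f} GFLOPS")     # 4608 GFLOPS
```

Under these assumptions the two systems have nearly identical vector peaks, so in workloads with well-optimized AVX-512 codepaths the core-count deficit largely washes out; in code that doesn't vectorize to 512-bit, Rome's extra cores reassert themselves.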
But in terms of overall efficiency and standard performance, AMD’s EPYC lineup still dominates Intel across the board. The fact that Intel pitted a 32-core chip against a 64-core chip and had to lean on AVX-512 to match it shows how far behind the company has fallen in the server race, and things will get more heated in a few months when AMD unveils its 3rd Gen EPYC lineup, the Milan series, which is scheduled for Q1 2021 as we learned yesterday.
Intel Ice Lake-SP ‘Next-Gen CPU’ 28 Core Die & Whitley Platform Detailed
Looking at the block diagram of the Ice Lake-SP 28 core CPU, the chip offers a new interconnect in the form of an enhanced Mesh Fabric that runs through all of the 28 CPU cores. The Ice Lake-SP die features two 4-channel memory controllers whereas the Cascade Lake-SP die offered two tri-channel memory controllers.
The Intel Ice Lake-SP processors also feature four PCIe Gen 4 controllers, each offering 16 Gen 4 lanes for a total of 64 lanes on the 28-core die. The Cascade Lake-SP chips offered hexa-channel (6-channel) memory support, while Ice Lake-SP will offer octa-channel (8-channel) memory support on the Whitley platform at launch. The platform will support up to DDR4-3200 memory (16 DIMMs per socket) along with 2nd Gen persistent memory support.
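The jump from six to eight memory channels translates into a sizable per-socket bandwidth gain. A minimal sketch, assuming DDR4-2933 as Cascade Lake-SP's top supported speed (an assumption from Intel's public specs, not from this presentation) and a 64-bit (8-byte) transfer per channel:

```python
# Theoretical per-socket memory bandwidth implied by channel count and
# DIMM speed. MT/s = mega-transfers per second, 8 bytes per transfer.

def peak_mem_bw_gbs(channels, mts):
    return channels * mts * 8 / 1000  # GB/s

cascade = peak_mem_bw_gbs(channels=6, mts=2933)   # 6-channel DDR4-2933
icelake = peak_mem_bw_gbs(channels=8, mts=3200)   # 8-channel DDR4-3200

print(f"Cascade Lake-SP: {cascade:.1f} GB/s")     # 140.8 GB/s
print(f"Ice Lake-SP:     {icelake:.1f} GB/s")     # 204.8 GB/s
print(f"Uplift:          {icelake / cascade - 1:.0%}")  # 45%
```

That roughly 45% theoretical uplift per socket is what the "big jump" in memory bandwidth refers to; real-world sustained bandwidth will be lower on both platforms.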
Intel is also adding a range of latency and coherency optimizations to Ice Lake-SP chips, and memory bandwidth and latency both see a big improvement thanks to the 8-channel memory interface and the higher DIMM speeds.
Intel Ice Lake-SP ‘Next-Gen CPU’ New Interconnect Infrastructure
In addition to the standard Mesh interconnect, Intel has further expanded its interconnect design for Ice Lake-SP Xeon CPUs. The new control fabric and data fabric not only connect the cores and the chip’s various controllers but also manage data flow and power control for the chips themselves. These new interconnects will deliver even lower latency and faster clock updates than 3rd Gen Cooper Lake-SP chips. For example, a core frequency transition takes 12us and a mesh frequency transition takes 20us on Cascade Lake-SP chips; Ice Lake-SP takes less than 1us and 7us, respectively.
Less time spent in frequency transitions means higher efficiency than Cascade Lake. Ice Lake-SP will also improve AVX frequency behavior, since not all AVX-512 workloads draw the same high power. This isn’t specific to AVX-512 either: even 256-bit AVX instructions on Ice Lake-SP will deliver a better frequency profile than on Cascade Lake CPUs.
Some of the major upgrades that 10nm will deliver include:
The Intel Ice Lake-SP lineup will compete directly against AMD’s enhanced 7nm-based EPYC Milan lineup, which will feature the brand-new Zen 3 core architecture and its sizable 19 percent IPC uplift over Zen 2.