AMD vs. Intel: Which CPU is best suited to AI and Deep Learning?

Updated July 2024

When it comes to processors for Artificial Intelligence (AI) and Deep Learning (DL), your two main options are AMD and Intel.

It can be hard to know which to choose, so this blog will focus on the fundamentals of each processor to help you decide which will be more beneficial to your AI/DL project.  

Latest releases 

For your reference, the latest releases from each brand are the Threadripper™ PRO 7000 series from AMD and the 4th Generation Xeon® W processors (w3 to w9 tiers) from Intel, which were designed specifically to compete with the Threadrippers™.

Price 

Budget is a big concern, particularly for R&D departments.

Each series offers processors at several price points, so the good news is you’ll probably find a CPU within your budget, though the cheaper models trade away some performance.

Intel’s most expensive Xeon® W processor has a recommended price of £4,659.15, and its most affordable comes in at £2,965.80. AMD’s Threadripper™ PRO series ranges from £1,399.99 up to £7,121.15.

The higher the specifications, the more expensive the processor will be. As a result, AMD’s higher performance is reflected in its higher prices.

Processor                        Price       PCIe Lanes         Cores / Threads   Base Clock Speed
Intel® Xeon® w9-3495X            £4,659.15   112 (Gen 5)        56 / 112          1.9 GHz
AMD Threadripper™ PRO 7995WX     £7,121.15   148 (128 Gen 5)    96 / 192          2.5 GHz
Intel® Xeon® w9-3475             £2,965.80   64 (Gen 5)         36 / 72           2.2 GHz
AMD Threadripper™ PRO 7945WX     £1,399.99   148 (128 Gen 5)    12 / 24           4.7 GHz

Performance 

Deep Learning workstations need PCIe bandwidth, so pay attention to how many PCIe lanes a CPU provides. Those lanes are primarily assigned to your graphics cards (GPUs), with each card needing 16 lanes (known as x16) to run at full speed, and they also carry the high-speed storage traffic that is vital during AI training.

GPUs still do most of the heavy lifting in AI/DL workloads, and to run several of them simultaneously, a DL workstation generally needs a CPU with at least 40 PCIe lanes.
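To see where a figure like that comes from, here is a minimal sketch of a lane budget for a two-GPU build; the per-device lane counts (x16 per GPU, x4 per NVMe drive, x4 for a fast network card) are typical assumptions rather than numbers from any particular spec sheet.

```python
# Rough PCIe lane budget for a two-GPU deep learning workstation.
# The per-device lane counts are typical assumptions, not vendor figures.
components = {
    "GPUs (x16 each)": 2 * 16,
    "NVMe SSDs (x4 each)": 2 * 4,
    "High-speed NIC (x4)": 4,
}

for name, lanes in components.items():
    print(f"{name}: {lanes} lanes")

total = sum(components.values())
print(f"Total CPU lanes needed: {total}")  # 44 lanes, hence the ~40-lane minimum
```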

AMD’s Threadripper™ series has the advantage here, offering 148 PCIe lanes (128 of which are gen 5) on both its most and least expensive model, compared to Intel’s offering of 112 and 64 PCIe lanes.


Cores and Threads 

AI and DL workloads need enough cores to pre-process datasets and to handle heavily multithreaded stages of the pipeline.

AMD’s latest Threadripper™ range offers up to 96 cores and 192 threads. In contrast, Intel’s latest Xeon® W Processors offer up to 56 cores with 112 threads. 
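As a rough illustration of why core count matters for the input pipeline, the sketch below fans CPU-bound pre-processing out across every available core using Python’s multiprocessing module; the work function is a stand-in for real steps such as decoding, resizing or tokenising samples.

```python
import os
from multiprocessing import Pool

def preprocess(sample_id: int) -> int:
    # Placeholder for CPU-bound work such as decoding or augmenting one sample.
    return sum(i * i for i in range(10_000)) + sample_id

if __name__ == "__main__":
    # One worker per logical thread, e.g. 192 on a Threadripper PRO 7995WX.
    workers = os.cpu_count()
    with Pool(processes=workers) as pool:
        results = pool.map(preprocess, range(1_000))
    print(f"Processed {len(results)} samples across {workers} workers")
```

Deep learning frameworks expose the same idea through settings such as the number of data-loader workers; more cores means more workers can run without starving the GPUs.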

Clock speed 

Clock speed can be just as important as core count for AI/DL workloads. When calculations must be completed sequentially, for example, a CPU with fewer cores but a higher frequency will often perform better.
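A minimal sketch of that kind of serial dependency: each iteration below needs the result of the previous one, so extra cores cannot be used and the per-core clock speed sets the pace.

```python
# Serial dependency chain: step i needs the result of step i-1,
# so this loop cannot be split across cores.
def running_state(values):
    state = 0.0
    for v in values:  # each pass reads the 'state' written by the previous pass
        state = 0.9 * state + v
    return state

print(running_state(range(1_000_000)))
```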

Intel’s latest Xeon® series offers base frequencies of up to 2.2 GHz, whereas AMD’s Threadripper™ series offers up to 4.7 GHz.  

With sufficient cooling, Threadripper™ processors can boost to speeds of up to 5.3 GHz, while Intel’s Xeons reach up to 4.8 GHz.

 

We hope that this comparison was useful to you. If you need any help setting up a workstation for Deep Learning, or you need hardware for an upcoming project, get in touch to find out how we can help.

