AMD and Intel may have weakened CPU and iGPU performance for AI

People will hate AI even more.

3D render of an AI chip.
Credit: Igor Omilaev.

Intel and AMD may be chasing AI capabilities a bit too aggressively. Recent reports suggest that general performance is being hampered as NPUs claim much-needed silicon space. PC enthusiasts will not be happy about this.

Seeing the great success of ChatGPT and other AI tools, AMD and Intel both want to join the hype. Most of these tools run in cloud data centers with massive amounts of compute horsepower, largely driven by Nvidia’s $30,000-plus Hopper H100 and Blackwell B200 GPUs. Intel and AMD, however, want to bring these capabilities to your local system.

To prepare for local AI, chip manufacturers are scrambling to add NPUs (Neural Processing Units) to their chips. Unfortunately, according to AnandTech forum user uzzi38, AMD is trimming a portion of its CPU’s SLC (system-level cache) to make room. That stings, since both the CPU and the iGPU would have benefited from the extra cache. It’s especially irritating for those of us who don’t care for Windows Copilot. The chip in question is likely AMD’s upcoming Strix Point APU, featuring Zen 5 cores and an RDNA 3+ iGPU.

These NPUs are expected to triple AI performance, with up to 50 TOPS on AMD’s Strix Point, 35 TOPS on Intel’s Meteor Lake, and 70 TOPS on Panther Lake. But at what cost? We’ll have to wait for an answer to that question. Hopefully, the generational improvement will mask any cut-downs made to accommodate the NPUs.
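To put those TOPS figures in perspective, here is a back-of-the-envelope sketch (TOPS = trillions of operations per second). The per-token operation count is an illustrative assumption, not a measured figure, and peak TOPS are theoretical maximums that real workloads rarely reach:

```python
# Rough time for a fixed number of low-precision ops at the quoted
# peak TOPS ratings. Purely illustrative: real throughput is well
# below peak, and workloads vary enormously.

def seconds_at_tops(ops: float, tops: float) -> float:
    """Time in seconds to execute `ops` operations at `tops` TOPS peak."""
    return ops / (tops * 1e12)

# Assume ~1 trillion ops per token for a small on-device model
# (a hypothetical round number for comparison only).
for name, tops in [("Meteor Lake", 35), ("Strix Point", 50), ("Panther Lake", 70)]:
    ms = seconds_at_tops(1e12, tops) * 1000
    print(f"{name}: {ms:.1f} ms per 1e12 ops at peak")
```

The point of the sketch is simply that the jump from 35 to 70 TOPS halves the theoretical time per operation budget, which is where the "triple AI performance" marketing claims come from.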

Though, on paper, locally run AI is preferable for privacy reasons, especially for professional use, we wonder whether the potential loss of compute performance is worth it. Given how limited local use cases currently are, I would rather have more performance in the tasks I actually use my computer for, particularly since many AI tools are available online anyway.