In the late 1990s into the early 2000s, Intel was being hit with something the industry and market dubbed the “Megahertz Myth.” The idea was that Intel was pushing clock speed as the sole indicator of performance, but AMD and Apple (with their PowerPC Macs) were proving that they could achieve similar performance with a much lower clock speed.
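The intuition behind the myth can be put in one line: perceived speed is roughly instructions-per-clock times clock speed, not clock speed alone. Here's a minimal sketch of that arithmetic; the chips and the IPC/clock figures are made up purely for illustration, not real benchmarks.

```python
# Toy model: throughput ~ IPC x clock. The numbers below are
# invented for illustration and do not describe any real CPU.

def relative_performance(ipc: float, clock_ghz: float) -> float:
    """Billions of instructions retired per second."""
    return ipc * clock_ghz

chip_a = relative_performance(ipc=1.0, clock_ghz=2.0)  # high clock, low IPC
chip_b = relative_performance(ipc=2.0, clock_ghz=1.0)  # low clock, high IPC

print(chip_a == chip_b)  # the "slower"-clocked chip keeps pace
```

The marketing number (2 GHz vs. 1 GHz) says nothing by itself; only the product of the two terms does.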
I recall similar misconceptions about other specs from that time. For example, there was the idea that more cache on the CPU was better. Intel's CPUs of that era often carried more cache than their immediate competitors', but real-world performance painted a different picture.
Then there's the number of cores. On the AMD-versus-Intel front, both sides pushed out similar counts, but AMD edged ahead ever so slightly with the six-core Phenom II. It still couldn't convincingly outperform Intel's quad-core parts in most tasks, even with Hyper-Threading taken out of the picture. Then came Bulldozer, where AMD touted eight cores, and a similar story played out. Outside the x86 realm, ARM tells the same tale: Apple's SoCs run circles around Qualcomm's Snapdragon chips while using fewer "big" cores.
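One classic reason a core-count number oversells itself is Amdahl's law: the serial fraction of a workload caps the speedup no matter how many cores you throw at it. A quick sketch, with an assumed 80%-parallel workload chosen only for illustration:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# can run in parallel. The p value here is illustrative, not a
# measurement of any real application.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

print(round(amdahl_speedup(0.8, 4), 2))  # 2.5
print(round(amdahl_speedup(0.8, 8), 2))  # 3.33
```

Doubling the cores from four to eight buys roughly a third more speed, not double, which is one way a six- or eight-core part can lose to a well-fed quad-core.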
One that cropped up for me recently: the rumor mill suggests Intel's upcoming discrete GPUs will not only be manufactured by a third party, but on a 6nm process. The article I read gave this rumor a hyped-up headline along the lines of "Intel's HPG gaming graphics card could beat NVIDIA and AMD's best in one area". A smaller process node only helps cram more transistors into a given space and improves the power envelope, allowing higher clock speeds at the same power or less power for the same performance. On its own, though, the node number guarantees nothing. NVIDIA tried to jump on the smaller-node train back in 2002 when it wanted its upcoming GeForce FX on 130nm, but process delays held it back, and ATI (later acquired by AMD) won that generation by starting off on the more mature 150nm process. Ten years later, NVIDIA squeezed a lot of extra performance out of the same 28nm process going from Kepler to Maxwell. Meanwhile, a few years after that, AMD was excited to reveal the "world's first 7nm GPU" in its Vega series of cards, but those cards could not convincingly outperform NVIDIA's 12nm parts the way such a "number jump" might lead you to expect.
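The power-envelope claim comes from the classic CMOS rule of thumb that dynamic power scales roughly as P = C·V²·f. A shrink that lowers switched capacitance and supply voltage cuts power at the same clock, or frees headroom for a higher clock. The figures below are arbitrary illustrations of that relationship, not numbers for any real node:

```python
# Rule-of-thumb dynamic power: P = C * V^2 * f. All values here are
# made-up illustrations of how a node shrink shifts the tradeoff.

def dynamic_power(capacitance: float, voltage: float, freq_ghz: float) -> float:
    return capacitance * voltage**2 * freq_ghz

old = dynamic_power(1.0, 1.2, 2.0)   # older node
new = dynamic_power(0.8, 1.0, 2.0)   # shrink: lower C and V, same clock
print(round(new / old, 2))  # 0.56 -> roughly half the power budget
```

Note what the formula doesn't contain: anything about architecture. That headroom only turns into performance if the design spends it well, which is the whole point of the Kepler-to-Maxwell and 7nm Vega examples.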
Ultimately, all of the numbers you see about hardware are irrelevant except for one: how it actually performs on the tasks you want to do.