I had already discussed the behavior of the new Intel CPUs on different motherboards a few days ago, but in the end the whole thing wouldn’t leave me alone. The measured differences, especially on the OC boards, were too large to write off as measurement tolerance and were also reproducible, but then again not large enough to explain the sometimes very different efficiency results reported by the majority of the other reviewers. While only very few reviewers used a Radeon RX 6900XT for the launch review and the majority preferred a GeForce RTX 3090, the variation in results was considerable. But I don’t want to get ahead of myself, so let me first describe how the findings developed.
I had sent one of my motherboards to Xaver, who was to confirm with his Intel Core i9-12900K if and how the power consumption changes on different boards under otherwise identical conditions. There were measurable differences, and I asked him to send me his data sets with logged graphics performance and power consumption. The reason was simple: in the same game and settings, his GeForce RTX 3090 FE delivered almost exactly the same FPS as my setup, but with a significantly higher power consumption than I measured with the Radeon RX 6900XT, and also a visibly higher CPU load.
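This comparison boils down to an efficiency metric: FPS delivered per watt of CPU package power. A minimal sketch of that calculation, using purely hypothetical placeholder numbers rather than the actual measurements:

```python
# Minimal sketch: comparing gaming efficiency (FPS per watt of CPU package
# power) between two logged test runs. All sample values are hypothetical
# placeholders, not the measurements from the article.

def efficiency(avg_fps: float, avg_cpu_watts: float) -> float:
    """FPS delivered per watt of CPU package power."""
    return avg_fps / avg_cpu_watts

# Hypothetical averages from two logs with near-identical FPS:
run_rtx3090 = {"fps": 180.2, "cpu_w": 110.0}   # i9-12900K + RTX 3090 FE
run_rx6900 = {"fps": 179.8, "cpu_w": 85.0}     # i9-12900K + RX 6900XT

eff_nv = efficiency(run_rtx3090["fps"], run_rtx3090["cpu_w"])
eff_amd = efficiency(run_rx6900["fps"], run_rx6900["cpu_w"])

print(f"RTX 3090 FE: {eff_nv:.2f} FPS/W")
print(f"RX 6900XT:   {eff_amd:.2f} FPS/W")
```

With identical FPS, a higher CPU power draw directly shows up as a lower FPS-per-watt figure, which is exactly the pattern described above.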
Well, now of course you can blame this on the driver overhead that the NVIDIA driver likes to produce at very low resolutions. And that’s exactly why I sat down and redid all the gaming benchmarks in 720p and 1080p with the Core i9-12900KF and the GeForce RTX 3090 FE. Convenience won out, as all the test systems were still untouched, right where I had left off last week. However, some of the results with my RTX 3090 FE diverged so much in terms of CPU power consumption that I redid several runs over and over again. And yet, each time I got the same (different) result.
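The point of those repeated runs is to separate real differences from measurement noise: if the spread within one configuration is small but the gap between two configurations is large, the gap is not tolerance. A minimal sketch of that sanity check, again with hypothetical watt values and a hypothetical `is_real_difference` helper:

```python
# Minimal sketch: a reproducibility check across repeated benchmark runs.
# A gap between two configurations counts as real when it clearly exceeds
# the run-to-run noise within each configuration. Values are hypothetical.
from statistics import mean, stdev

def is_real_difference(runs_a, runs_b, k: float = 3.0) -> bool:
    """Treat the gap between configurations as real when it exceeds
    k times the larger within-configuration standard deviation."""
    gap = abs(mean(runs_a) - mean(runs_b))
    noise = max(stdev(runs_a), stdev(runs_b))
    return gap > k * noise

# Hypothetical CPU package power (watts) over five repeated runs each:
config_a = [108.9, 110.2, 109.5, 110.8, 109.1]
config_b = [84.7, 85.3, 84.9, 85.6, 85.0]

print(is_real_difference(config_a, config_b))  # → True
```

Here the roughly 25 W gap dwarfs a run-to-run standard deviation of well under 1 W, so the difference cannot be booked as measurement tolerance.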
That was part one of my week, because you will have noticed that I made myself a bit scarce. But if you do something, you should do it systematically and properly. And so the logical next step was to test the Ryzen 9 5950X again in the same setup with the GeForce RTX 3090 FE. My jaw dropped on the very first test run. What I could (and had to) measure now clearly contradicts the thesis that Intel and AMD CPUs handle the driver bottlenecks identically when it comes to power consumption and load.
I can already spoil that my “crossload” ended up producing exactly the (in my view distorted) picture that accuses Intel CPUs of being significantly more inefficient. And I can announce one more insight: AMD CPUs seem to prefer drinking together with their own AMD GPUs, without foreign guests. And so the “crossload” takes on a whole new meaning. Exactly which one will be revealed in the article that has me tied to my chair right now. So please be patient; I need to document this in a robust way and will run as many iterations as due diligence dictates. In the meantime, our colleagues in the community have some nice content for you.
But one thing is already a fact: NVIDIA’s drivers are a funny bag of tricks in some respects. So stay tuned, this will surely be a real surprise for some 🙂