FatKingBallman
Even funnier with all this real RDNA2 fuss.
Yeah, the test is interesting because they kept clocks the same on all 4 cards.
When compared to the full 80 CU Navi 21:
50% CUs = 60-68% FPS
75% CUs = 80-86% FPS
90% CUs = 93-94% FPS
So even when adding more CUs at the same clocks, there are clear diminishing returns. They went from 40 to 60 CUs at the same 2.0 GHz (and increased bandwidth), so that's a 50% increase in shaders, yet only saw about a 33% increase in average performance.
Therefore, if one were to add, say, 44% more CUs but also lower the clock speed by almost 20%, one could expect almost no improvement in performance.
1.00 x 1.50 x 0.88 ≈ 1.32 (going from 40 to 60 CUs at the same clocks; the observed ~33% perf gain implies roughly 0.88 scaling efficiency for the added CUs)
0.818 x 1.44 x 0.88 ≈ 1.04 (36 CUs @ 2.23 GHz vs 52 CUs @ 1.825 GHz would give just a ~4% perf gain, if all else were equal)
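If anyone wants to play with the numbers themselves, here's the same back-of-the-envelope math as a quick Python sketch (my own, not from the article). It assumes the added CUs are only worth ~88% of their nominal throughput, clocks scale performance linearly, and nothing else changes, which is obviously a simplification:

```python
def perf_ratio(cu_ratio, clock_ratio, cu_scaling=0.88):
    # Crude model: relative perf = CU ratio x clock ratio x CU-scaling factor.
    # The 0.88 factor is just what the 40 -> 60 CU test above implies.
    return cu_ratio * clock_ratio * cu_scaling

# 40 -> 60 CUs at the same 2.0 GHz (the Navi 21 scaling test above)
print(perf_ratio(60 / 40, 2.0 / 2.0))     # ~1.32, i.e. roughly a 33% gain

# 36 CUs @ 2.23 GHz vs 52 CUs @ 1.825 GHz (PS5-like vs XSX-like, all else equal)
print(perf_ratio(52 / 36, 1.825 / 2.23))  # ~1.04, i.e. roughly a 4% gain
```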
And remember, when they went from 40 to 60 CUs for Navi 21/22, they kept the same 10 CUs per shader array, like the PS5, instead of just adding more CUs into each shader array like they did for the XSX. They doubled the shader engines and everything else when going from Navi 22 to Navi 21, and neither console has the 4 SEs of Navi 21. So that minor ~4% perf gain could be even smaller, possibly zero, if adding more CUs per SA decreases efficiency. Not to mention that various other things, like split memory bandwidth or slower I/O, could bottleneck performance.
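For reference, this is roughly how the layouts compare in terms of CUs per shader array (numbers as widely reported; the consoles ship extra CUs for yield, so these are the active counts):

```python
# Shader engine (SE) / shader array (SA) layouts, using widely reported active CU counts.
configs = {
    # name       (SEs, SAs, active CUs)
    "Navi 22": (2, 4, 40),
    "Navi 21": (4, 8, 80),
    "PS5":     (2, 4, 36),
    "XSX":     (2, 4, 52),
}

for name, (ses, sas, cus) in configs.items():
    print(f"{name:8}: {ses} SEs, {sas} SAs, {cus} CUs -> {cus / sas:.0f} CUs per SA")
```

So desktop RDNA2 grows by adding more arrays and engines at ~10 CUs each, while the XSX is the outlier that packs ~13 active CUs into each array.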
[charts]
So, are we back to the discussion about this from around a month ago:
In fact, it is not! The clock would be higher if they chose over 25%, right? And power consumption lower??
But anyway, the same from AnandTech:
and
Here is the full official Hot Chips presentation for the XSX.
Timestamped at 16:50