Why does Cerny like running the GPU at a higher frequency?
Suppose Sony had gone with the same 56 CUs as MS, with 4 disabled (52 active), for the PS5, still targeting 10.3 TF. They would have to set the frequency at roughly 1544 MHz to hit that number. They would reach the 10.3 TF goal, but the performance would be noticeably different between 36 CUs at 2230 MHz and 52 CUs at 1544 MHz. Cerny gave essentially this example in his talk, though with 36 vs 48 CUs.
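For anyone who wants to sanity-check that 1544 MHz figure: TFLOPS for these GPUs is just CUs × 64 shader ALUs × 2 FLOPs per clock (FMA) × clock speed. A quick sketch (the helper function is my own, not anything official):

```python
# Rough TFLOPS check for both configs.
# Assumes the standard RDNA layout of 64 shader ALUs per CU and
# 2 FLOPs per ALU per clock (FMA).

def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(tflops(36, 2.230))   # ~10.28 TF -> actual PS5 config
print(tflops(52, 1.544))   # ~10.28 TF -> notional wide-and-slow config
```

Same ~10.3 TF either way, which is exactly the point: the TFLOPS number alone doesn't tell you how the rest of the GPU behaves.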
At Hot Chips, MS revealed their "GPU Evolution" slide for the Series X, comparing its GPU all the way back to the original Xbox One's. That slide focuses on four metrics to show the evolution:
- Computational power
- Memory bandwidth
- Rasterization rate
- Pixel fillrate
Here's how a notional PS5 GPU with the 52 CU @ 1544 MHz config would look on those metrics:
10.3 TFLOPS, 448 GB/sec, 6.18 Gtri/sec, 98.8 Gpix/sec
Here's what the actual PS5 GPU looks like with the current 36 CU @ 2230 MHz config:
10.3 TFLOPS, 448 GB/sec, 8.92 Gtri/sec, 142.7 Gpix/sec
You reach your TF goal, but look at the rasterization rate and pixel fillrate: they take a massive hit. With 36 CUs at 2230 MHz, those fixed-function units run about 44% faster than they would at 1544 MHz (2230 / 1544 ≈ 1.44). It's not just rasterization that goes up; pixel fillrate and command buffer processing scale with the clock too, and the L2 and other caches get roughly 44% more bandwidth. Not to mention it would cost more money to go with a larger GPU.

Please note I'm not downplaying MS here, just pointing out the strategy Sony took this time around. Whatever strategy MS followed works best for their machine, and I'm not downplaying that one bit.
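To make that ~44% point concrete, here's a rough sketch of where the rasterization and fillrate numbers above come from. The 4 primitives/clock and 64 ROPs (1 pixel per ROP per clock) figures are inferred from those numbers, not confirmed specs:

```python
# Fixed-function rates for both configs, assuming 4 primitives/clock
# for the rasterizers and 64 ROPs at 1 pixel/clock each.
# These constants are my inference from the figures above.

def rates(clock_ghz: float):
    raster_gtri = 4 * clock_ghz    # Gtri/sec
    fill_gpix = 64 * clock_ghz     # Gpix/sec
    return raster_gtri, fill_gpix

print(rates(2.230))     # (8.92, 142.72)  -> 36 CU PS5 config
print(rates(1.544))     # (6.176, 98.816) -> notional 52 CU config
print(2.230 / 1.544)    # ~1.44 -> the ~44% clock advantage
```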
Hope this post helps explain Cerny's variable frequency approach a bit more and why they went with it. There's obviously more to discuss, and we still don't have much detail on things like what the power consumption looks like.