Apologies if everyone is tired of the PS5 variable clocks discussion; I just want to quickly chime in with my understanding of it.
The devkit profiles are there to help developers optimise code. Fixing the CPU or GPU frequency helps them find areas of the code they can improve, because the same code shows a comparable execution time each time they run it.
Without profiles, optimisation would be very difficult, as the same code would have different execution times depending on the (variable) frequency the PS5 happened to choose at that moment.
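Just to illustrate the maths of why that matters, here's a toy sketch in Swift. It has nothing to do with the actual devkit tooling, and the cycle count is a made-up number: the point is only that the same amount of work takes a different wall time at a different clock.

```swift
// Toy numbers: the same amount of work, measured in clock cycles,
// takes a different wall time at different clock speeds.
let workCycles = 3_000_000.0   // hypothetical cycle count for some function

func wallTimeMs(cycles: Double, clockGHz: Double) -> Double {
    // time = cycles / frequency; 1GHz = 1e9 cycles per second
    return cycles / (clockGHz * 1e9) * 1000.0
}

print(wallTimeMs(cycles: workCycles, clockGHz: 3.5))  // ~0.86ms at a fixed 3.5GHz
print(wallTimeMs(cycles: workCycles, clockGHz: 3.2))  // ~0.94ms if the clock happened to dip
// Under a variable clock a "slower" reading might just mean a lower clock,
// not worse code, hence locking the frequency before comparing runs.
```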
The profiles aren't used for testing the game; that would be pointless, as it would cap the CPU or GPU in a way that won't happen on a retail PS5.
I'm not a games developer, but I develop iPhone apps. When using the profiling tools in Xcode I can see (for example) the duration of functions, and if needed I can spend time optimising the code to bring those times down. If iPhones ran at variable frequencies I'd need to be able to fix the frequency during optimisation, to test/ensure any optimised code is indeed running quicker.
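For what it's worth, this is roughly the kind of before/after check I mean on the iOS side. It's just a minimal timing sketch using DispatchTime, with expensiveWork() standing in for whatever function the profiler flagged, not anything PS5-specific:

```swift
import Foundation

// Minimal before/after timing sketch; expensiveWork() is just a stand-in
// for whatever function the profiler flagged as slow.
func expensiveWork() {
    var total = 0
    for i in 0..<1_000_000 { total &+= i }
    _ = total
}

let start = DispatchTime.now()
expensiveWork()
let end = DispatchTime.now()
let elapsedMs = Double(end.uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000.0
print("expensiveWork took \(elapsedMs) ms")
// If the clock can change between runs, a lower number here doesn't
// necessarily mean the optimisation worked, hence fixing the frequency first.
```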
Isn't this simply what the PS5 devkit profiles are for: fixing the frequencies to help the developer reduce (optimise) code execution time if needed?
I really can't see the variable clocks being a big deal during PS5 development. If a game with a 60fps target sometimes dips a bit, fix the CPU clock and optimise the code, then do the same with the GPU: fix the clock and see where time can be saved.
Cerny has said it's very unlikely that games will demand full 100% CPU and GPU usage at the same time, and that if a game does require it for short periods, the PS5 can handle it without downclocking anyway.
So basically, under a fixed-clock system the PS5 might have been locked at maybe 2GHz for the GPU, and probably a lower CPU clock too, with games optimised for that, effectively leaving performance on the table most of the time.
With the new variable system, games are optimised at locked profiles probably approaching, or maybe even at, the max limits; let's go with that and say 2.23GHz for the GPU and 3.5GHz for the CPU.
Under a fixed-clock system those speeds would be well beyond the system's cooling capability in worst-case code areas and therefore couldn't happen: without the ability to lower clocks temporarily on the fly, the demand at those moments might be too much, so the fixed clocks would have to be set lower to account for it.
But on the PS5 it can happen, because game code doesn't fully utilise the GPU and CPU at all times. Games optimised at these much higher clock speeds can run at those levels and perform much better than a fixed-clock version, because the system has the ability to drop clocks, if needed, during those rare worst-case moments.
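As a rough back-of-the-envelope illustration of the "performance left on the table" point, here's a Swift sketch. Every number in it is mine, not Sony's: the 2.0GHz fixed figure is just the guess from above, and the size and duration of the worst-case dips are invented for the example.

```swift
// Illustrative numbers only: the 2.0GHz "fixed" figure is a guess, not an
// official spec, and the dip size/share of frame time are made up.
let fixedGPUClock = 2.0        // GHz a fixed-clock design might have settled on
let variableGPUClock = 2.23    // GHz the variable design can hold most of the time
let worstCaseDrop = 0.03       // say a ~3% clock dip during rare worst-case moments
let worstCaseShare = 0.02      // and say those moments are ~2% of frame time

// Average effective clock under the variable scheme
let avgVariableClock = variableGPUClock * (1.0 - worstCaseDrop * worstCaseShare)
print(avgVariableClock)                          // ~2.229 GHz
print(avgVariableClock / fixedGPUClock - 1.0)    // ~0.11, i.e. ~11% more clock on average
```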
Now to SmartShift, and I'm just trying to clarify here: if the GPU is being highly utilised and running at max frequency while the CPU isn't at that moment (or vice versa), more of the available power budget can be transferred so the GPU can stay at max, even though it already was, because the workload has increased and now needs more power to hold that frequency. Without SmartShift, I presume it couldn't sustain that performance under the heavier workload and might have had to downclock for the duration, lacking the additional power to maintain it? Because prolonged spikes of high utilisation on both at once are apparently so rare, this is why both should be able to run at max most of the time, as power can be shifted between them? So in cases where one has spare power, which with game code rarely being at max utilisation could be a lot of the time, there's scope to push these components at max frequency under higher workloads than would have been possible before?
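To pin down how I picture that power shifting, here's a very simplified Swift sketch. This is my own mental model, not AMD's actual SmartShift algorithm, and all the wattages are made up; it only shows the idea of handing spare budget from one component to the other.

```swift
// My mental model of shifting power within a shared budget. This is not
// AMD's actual SmartShift algorithm, and all wattages are made up.
struct PowerBudget {
    let totalWatts: Double
    var cpuWatts: Double
    var gpuWatts: Double
}

// If the CPU isn't using its full allocation, hand the spare watts to the GPU
// so it can hold max frequency through a heavier workload (and vice versa).
func rebalance(_ budget: inout PowerBudget, cpuDemand: Double, gpuDemand: Double) {
    let spareFromCPU = max(0, budget.cpuWatts - cpuDemand)
    let gpuShortfall = max(0, gpuDemand - budget.gpuWatts)
    let shifted = min(spareFromCPU, gpuShortfall)
    budget.cpuWatts -= shifted
    budget.gpuWatts += shifted
}

var budget = PowerBudget(totalWatts: 200, cpuWatts: 60, gpuWatts: 140)
rebalance(&budget, cpuDemand: 45, gpuDemand: 150)   // GPU workload spikes while the CPU has headroom
print(budget.cpuWatts, budget.gpuWatts)             // 50.0 150.0: the GPU keeps its max clock
```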
If both the GPU and CPU are running at max frequency, that's possible, but it depends on workload: if the workload increased on both and pushed the power budget to or beyond its maximum, the component under the least load would downclock, or maybe even both would. Downclocks could last anywhere from a few ms within a frame upwards and be very minor (a 10% drop in power results in only a few % drop in frequency, and there's no reason to jump straight to a 10% power cut; it could be much less, depending on what's needed to stay within the power and cooling limit). But this is likely very rare, as the loads on the GPU and CPU are rarely anywhere near max, especially at the same time.
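That "few %" figure lines up with the common rule of thumb that power scales roughly with the cube of frequency (since voltage tends to scale alongside frequency). The cube relation is my assumption, not anything Sony has published, but it's easy to sanity-check:

```swift
import Foundation

// Rule of thumb: dynamic power scales roughly with voltage^2 * frequency, and
// voltage tends to scale with frequency, so power ~ frequency^3. That cube
// relation is an assumption, not an official figure.
func frequencyDropPercent(forPowerDropPercent powerDrop: Double) -> Double {
    let powerFactor = 1.0 - powerDrop / 100.0
    let freqFactor = pow(powerFactor, 1.0 / 3.0)
    return (1.0 - freqFactor) * 100.0
}

print(frequencyDropPercent(forPowerDropPercent: 10))   // ~3.4%, i.e. "a few %"
print(frequencyDropPercent(forPowerDropPercent: 5))    // ~1.7%
```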
So to me the variable clocks and SmartShift seem like a very clever solution, and together they should ensure very high levels of performance that couldn't be attained from the same part under a fixed-clock system.
Maybe someone can correct me, add anything, or fill in any gaps, as this is just me trying to fully understand all aspects of it.