Not really. Neither console will use all of its FLOPS; that's a theoretical value you won't hit, because games don't work that way. A game isn't only doing floating-point operations, and it doesn't take exactly 33 or 16 ms to resolve every frame; that's the time budget available per frame at 30 or 60 fps, but not every frame has the same cost. The same applies to other metrics: the theoretical maximum is exactly that, and the workload changes depending on what the game is doing.
PS5 can downclock with SmartShift when the full clock isn't needed, based on the workload; it can stay at full clock the whole time if needed, and its theoretical maximum is 10.3 TF just as the XSX's is 12.15 TF. The PS5 can of course operate at its maximum clock if required, but that doesn't mean the game in question will need exactly 10.3 TF of operations to run. Same for the XSX: it won't downclock, but that doesn't mean it will use 12.15 TF, because again a game is not a row of 12.15 trillion floating-point operations to be resolved every second. The very same applies to current consoles: they don't downclock, but they don't hit their maximum floating-point throughput when running games either, and not all parts of a game push most of the console's capabilities (you can tell by the fan activity when they do). The PS5 can downclock during those lighter moments and run at full speed otherwise; the fan may or may not run faster depending on temperature, but the clock is based on workload, not temperature. This has already been explained to death.
A lot of what you wrote is semantic in nature.
No one is talking about utilization. Utilization is how much of the available horsepower is in use at any given time.
When Sony gives its number, it isn't talking about both peak utilization and peak availability, although in their implementation the two necessarily go hand in hand. Their system will only use enough power at any given time to match the requirements of the software's requests. Great, we get it.
But their marketing, while acknowledging this variability, still anchors the buyer on the peak rate. So now we all have to change how we view available resources to match Sony's marketing by distinguishing utilization vs. peak and arguing THAT? Why? We know utilization fluctuates, but we never before framed "theoretical" output in terms of efficiency by balancing the power draw between CPU and GPU.
Now people want to say the XSX's 12.147 TF is theoretical, but it isn't. That figure isn't based on efficiency; it's based on clock speed and worker count.
The maximum output available from the GPU, at any instant and perpetually, is 12.147 TF. The clock doesn't change even if the power draw does. You can access 12.147 TF at all times, all the time, without having to factor the CPU's power draw into the equation. If needed, both can run at full speed forever with no need to change clocks. You will get 12.147 TF if and when you need it.
Only in the case of the PS5 do we have to re-lens the world to match their PR and rethink everyone else's known, historically accurate view of available resources. Why?
So what is the output of the PS5 at 2.23 GHz?
2304 × 2 × 2.23 = 10.276 TF
2304 × 2 × 2.20 = 10.138 TF
2304 × 2 × 2.18 = 10.045 TF
2304 × 2 × 2.17 = 9.999 TF < any speed below ~2.17 GHz takes the GPU into sub-10TF range.
So a roughly 60 MHz drop in speed changes the narrative. Put another way, a drop of under 3% in clock puts the GPU just below 10 TF. That's not a story anyone in that camp wants to tell, but we can calculate it very easily.
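The arithmetic above can be sanity-checked in a few lines. This is a sketch assuming the usual peak-FP32 formula (2 FLOPs per shader per clock, i.e. one fused multiply-add); `peak_tflops` is just an illustrative helper, not anything from either console's SDK.

```python
def peak_tflops(shaders: int, clock_ghz: float) -> float:
    """Theoretical peak single-precision TFLOPS: shaders * 2 FLOPs/cycle * GHz."""
    return shaders * 2 * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

# PS5 GPU: 36 CUs * 64 shaders per CU = 2304 shaders
for clock in (2.23, 2.20, 2.18, 2.17):
    print(f"{clock:.2f} GHz -> {peak_tflops(2304, clock):.3f} TF")

# Exact clock at which the PS5 GPU crosses below 10 TF:
threshold_ghz = 10_000 / (2304 * 2)
print(f"sub-10TF below ~{threshold_ghz:.4f} GHz")  # ~2.1701 GHz
```

Running it shows the crossover sits at about 2.1701 GHz, which is where the "~60 MHz / under 3%" figures come from.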
"Free VFXVETERAN!"
LMAO