I'm sitting on page 2518 and have lost hope of ever catching up with this thread... so again I'm posting blind. It's not worth sharing my thoughts on the tragic Xbox show, I can't add anything new to what's been said many times already no doubt. Just wanna highlight tho that Microsoft is going to great lengths to hide/convolute what was running on XSX and what was not. Something is amiss?!
I'm gonna try to expand on some info about PS5's 'SmartShift' implementation, which also helps me get a better grasp of it at the same time. Really wish Cerny/Sony had given their variable frequency technology a specific name cos it's much more than just AMD's SmartShift... I'll just refer to it as 'the tech' here to make things easier.
This article, which has been linked by 'dodrake' and others in the past, gives an excellent explanation of how the tech helps increase GPU occupancy; it also touches on CPU usage.
Sony did tell us how their design works. The thing you're missing is that the PS5 approach is not just letting clocks be variable, like uncapping a framerate. That would indeed have no effect on the lowest dips in frequency. But they've also changed the trigger for throttling from temperature to...
The article describes what's happening better than I can, but to sum up...
The graph itself is shown from the point of view of the power drawn from the PSU; it's an indirect (non-linear) correlation with frequency. As rightfully pointed out by 'raul3d' in the past, the article notes how the PSU's efficiency range is important: it is most efficient at roughly 80% of its rated output, so the tech will optimise for this range.
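To make the 80% sweet spot concrete, here's a toy illustration (the curve and numbers are made up by me, not Sony's or any real PSU's data) of why a power manager would steer the draw towards a particular fraction of rated output:

```python
# Toy model: PSU efficiency as a crude parabola peaking near 80% of
# rated output. Real efficiency curves differ, but the shape is similar:
# poor at light load, best partway up, falling off again near 100%.

def psu_efficiency(load_fraction):
    """Fraction of wall power delivered to the components (toy numbers)."""
    return 0.92 - 0.5 * (load_fraction - 0.8) ** 2

for load in (0.2, 0.5, 0.8, 1.0):
    print(f"{load:.0%} load -> {psu_efficiency(load):.1%} efficient")
```

The takeaway is just that holding the draw near the peak of that curve wastes the least power as heat, which is one more reason to keep total draw steady rather than spiky.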
The pink band in the graph represents the most visually demanding GPU effects. It won't be the yellow peaks, which are spikes caused by situations like unbounded tight loops that have no effect on the output.
The tech preemptively lowers the frequency for those peaks, leaving clear power headroom above. That allows it to use the power budget efficiently by raising the total average frequency, which increases GPU output (in effect, occupancy) as shown in pink.
This does not cover how the tech mitigates 'race to idle', which Cerny mentioned. Race to idle is generally where a process runs at peak speed, finishes its task, then sits idle between workloads. It's analogous to a car racing to its destination but getting stopped at every red light, burning a shitload of fuel in the process. Whereas if it slowed down and timed itself to pass all the green lights, it would still reach the destination at the same time while saving a lot of fuel.
Take a scenario where the GPU is targeting 30fps (33.3ms per frame) but finishes rendering each frame in 25ms, so it sits idle for 8.3ms. It's beneficial for the tech to slow the rendering down just enough that it completes right at the 33.3ms window, having no impact on what's being displayed but saving power which can be utilised somewhere else. The explanation is a bit brief but hopefully clear enough for you to get an idea of the advantages.
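Back-of-envelope numbers for that 30fps example (the cubic power model is my assumption again, real silicon will differ): slow the clock so the 25ms frame stretches to fill the 33.3ms budget, then compare energy per frame.

```python
# Race-to-idle arithmetic: stretch a 25ms frame to fill a 33.3ms budget
# by lowering the clock, and see how much energy per frame that saves.
# Assumes power ~ f^3 (toy model).

frame_budget_ms = 1000 / 30   # 33.3ms per frame at 30fps
busy_ms = 25.0                # render time at full clock

clock_scale = busy_ms / frame_budget_ms                   # fraction of full clock
power_scale = clock_scale ** 3                            # assumed f^3 scaling
energy_scale = power_scale * (frame_budget_ms / busy_ms)  # power x time, per frame

print(f"run the clock at {clock_scale:.0%}, use {energy_scale:.0%} of the energy")
# -> run the clock at 75%, use 56% of the energy
```

Same frames on screen, but in this toy model nearly half the energy per frame is freed up for the tech to spend elsewhere.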