"Mark Cerny has warned against AVX usage."
You've repeated this claim about six times now—even linked to it—but have completely failed to understand the context yourself.
With fixed clocks and varying power consumption, you have to make assumptions about how much power a game will need. That means also making assumptions about power-hungry instructions like AVX: in this case, assuming (and hoping) they're not used much, so you can set your fixed clock a little higher.
This is how things worked on PS4, and how something like XSX works.
But what if AVX instructions aren't used sparingly, or if a game design needs them? Then you'd need to reduce your fixed clock to have enough headroom on power.
That's the "warning against using them".
When targeting fixed power and allowing the frequency to vary, you don't have to assume anything about AVX usage, and there are no longer any unknowns on power usage.
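To put toy numbers on that (the constants and the crude power model below are made up for illustration, nothing here is a Sony figure), this is the budgeting difference in a nutshell:

```python
# Toy illustration only: invented constants and a crude power model, purely to
# show the budgeting difference between fixed clock and fixed power.

POWER_BUDGET_W = 200.0   # hypothetical chip power budget

def power(clock_ghz, activity):
    # Crude model: power rises with workload activity and super-linearly with
    # clock (higher clocks also need higher voltage).
    return activity * 40.0 * clock_ghz ** 2.5

def clock_for(activity):
    # Highest clock whose modelled power still fits the budget.
    return (POWER_BUDGET_W / (activity * 40.0)) ** (1 / 2.5)

# Fixed clock: must assume the worst plausible instruction mix (heavy AVX).
print(f"fixed clock, budgeted for worst case: {clock_for(1.0):.2f} GHz")

# Fixed power: no assumption needed; the clock floats up when the code is lighter.
print(f"variable clock at typical activity:   {clock_for(0.7):.2f} GHz")
```

Same power budget either way; the fixed clock just has to be set low enough that the assumed worst case still fits.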
Actual calculation and work done cost watts, regardless of clock frequency. Targeting fixed power consumption instead of fixed frequency means a fixed rate of work being done.
Identifying unproductive transient spikes in power usage (uncapped frame rates in simple map screens, for example) and reducing clocks to keep that work within the power budget means that efficient CU occupancy in real-world game code can have its frequency ramped up beyond what you could set as a fixed clock, where you'd have to assume unproductive transients and unpredictably high AVX usage.
If you had to cater to those unproductive transients in a fixed-frequency design, it wouldn't be possible to go this high on clock speed, and real game code would have to run significantly slower just to leave headroom for the spikes.
Targeting fixed power is targeting fixed performance as far as actual numbers crunched go. It's leaving nothing on the table.
A fixed clock does not mean fixed performance. Nor does it mean uncapped performance. Power draw varies with workload, which is why you could have a 5 GHz clock that is stable all day in Windows, gets hot but is stable in games, and crashes in a synthetic stress test like Linpack.
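The back-of-the-envelope behind that is the usual CMOS dynamic power approximation, P ≈ a·C·V²·f, where a is how much of the chip the workload actually switches each cycle. The constants below are invented; the point is just that the same clock can draw very different power depending on the work:

```python
# CMOS dynamic-power rule of thumb: P ~ activity * C * V^2 * f.
# Constants here are invented; only the relative comparison matters.
C_EFF = 1.0      # effective switched capacitance (arbitrary units)
VOLTAGE = 1.1    # volts (hypothetical)
CLOCK_GHZ = 5.0  # the "stable all day in Windows" clock from the example

def dynamic_power(activity):
    return activity * C_EFF * VOLTAGE ** 2 * CLOCK_GHZ

for name, activity in [("desktop/light use", 0.2),
                       ("typical game code", 0.5),
                       ("Linpack-style AVX stress", 1.0)]:
    print(f"{name:24s}: {dynamic_power(activity):.2f} (arbitrary units)")
```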
Modern GPU overclocking is done by providing enough cooling so that your performance is limited by maximum allowed TDP, with no thermal throttling.
It's targeting fixed power consumption, and power is the ideal thing to be limited by, rather than temperature.
PS5 does not throttle based on temperature. It cannot vary clocks based on temperature.
It doesn't boost until it hits a temperature threshold; it boosts to stay at maximum power consumption.
PS5 is a big lad in a big boy chassis, it is not in a laptop chassis with limited cooling. It is not in the PC and mobile domain of boosting based on ambient temperature and die temperature, and backing off as they are hit.
AMD SmartShift only varies the maximum TDP allowed to each component of the same shared APU, and is only one part of the variable frequency system.
In a laptop with limited cooling that boosts based on die temperature, SmartShift extends the duration of maximum boosts. That's what it's designed to do, based on AMD's own literature on the subject.
In a PS5 it augments the fixed power target system so that each component has an even higher TDP to play with individually.
They have balanced the GPU and CPU frequencies so that neither compromises the other and they have similar thermal density.
Incidentally, a 10% drop in frequency from 2230 MHz lands on 2007 MHz.
Cerny's quote about a "couple of percent" drop in clock rate yielding a 10% drop in power consumption (which the article you quote flips around) is there to show how little you need to drop clocks to reduce power consumption.
It's highlighting the relationship between the two variables. Neither version is saying how low GPU clocks will fall.
Cerny chose a 10% drop in power as his starting point for that relationship. The article you quote chose a 10% drop in clock as its starting point.
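To make that relationship concrete with a common rule of thumb (my assumption for illustration, not Cerny's actual numbers): once voltage scales down with frequency, dynamic power falls roughly with the cube of the clock, so small clock drops buy outsized power savings.

```python
# Rule-of-thumb only (my assumption, not a PS5 spec): with voltage scaling
# alongside frequency, dynamic power falls roughly with the cube of the clock.
for clock_drop in (0.02, 0.03, 0.10):
    power_ratio = (1.0 - clock_drop) ** 3
    print(f"{clock_drop:4.0%} lower clock -> ~{1.0 - power_ratio:.0%} lower power")
```

Both quotes are points on that same kind of curve; neither is a measurement of where the PS5 clock actually sits.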
If you'd been paying attention, you'd see that PS5 targets a fixed power draw.
Fixed power draw.
Not an alternating 100% or 90% power draw (using Cerny's power/clock relationship example figures).
The GPU doesn't run into some power hungry instructions and decide to drop the clocks enough to now start running at 90% power until they're finished.
It drops the clocks just enough to stay at 100% power because it targets fixed power usage by varying clocks. It doesn't vary clocks and power.
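As a sketch of that behaviour (not Sony's actual algorithm, which models power from on-chip activity; the model and constants here are invented): the clock only comes down as far as needed to hold modelled power at the budget, and otherwise sits at the cap.

```python
# Sketch only: invented constants and a toy power model, not Sony's algorithm.
POWER_BUDGET_W = 200.0    # hypothetical fixed power budget
MAX_CLOCK_MHZ = 2230.0    # the PS5 GPU frequency cap
K = 40.0                  # made-up scaling constant for the toy model

def modelled_power(clock_mhz, activity):
    # Power rises with workload activity and super-linearly with clock,
    # since higher clocks also need higher voltage.
    return activity * K * (clock_mhz / 1000.0) ** 2.5

def clock_for_budget(activity):
    # Highest clock whose modelled power fits the budget, capped at the maximum.
    clock_ghz = (POWER_BUDGET_W / (activity * K)) ** (1 / 2.5)
    return min(clock_ghz * 1000.0, MAX_CLOCK_MHZ)

for name, activity in [("typical game frame", 0.60),
                       ("power-hungry burst (AVX etc.)", 0.72),
                       ("worst-case synthetic stress", 1.00)]:
    clock = clock_for_budget(activity)
    print(f"{name:30s}: {clock:6.0f} MHz, ~{modelled_power(clock, activity):.0f} W")
```

Light work sits at the frequency cap under the budget; heavy work shaves a sliver of clock and the power stays pinned at the target.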
You've not only taken numbers used to highlight the non-linear relationship between power consumption and clock speed and assumed from them what a low clock speed might be, you've also picked an explanation of that relationship that arbitrarily chose a 10% reduction in clock speed as its starting point, then kindly done the math to tell us that equals 2 GHz, as if that somehow says this is what the PS5 GPU clock falls to.
Why not use Cerny's own arbitrarily chosen 10% reduction in power consumption?
What would that yield as the lowest the clocks go?
How about if I say a 15% reduction in clock speed reduces power consumption by 75%?
Have I now just proved that PS5 clocks go as low as 1.9 GHz?
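The arithmetic is the same whatever arbitrary percentage you feed it, which is exactly the problem:

```python
MAX_CLOCK_MHZ = 2230
print(MAX_CLOCK_MHZ * 0.90)   # 2007.0 -> the article's "10% clock drop" pick
print(MAX_CLOCK_MHZ * 0.85)   # 1895.5 -> my made-up "15% clock drop" above
print(MAX_CLOCK_MHZ * 0.98)   # 2185.4 -> Cerny's "couple of percent" framing
```

Three different "floors" from three different arbitrary starting points, and none of them is a measurement of anything.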
You arbitrarily chose 2 GHz because it's the more dramatic-sounding of the two relationships quoted, because you are an ill-informed troll.
Cerny has repeatedly said both clocks will run at maximum frequency most of the time.
In the same Eurogamer interview he explains further to the interviewer (who is also looking at the situation from a thermally throttling PC point of view) that there is no "base clock", and that even when the GPU spent an entire 33 ms frame budget doing work it sat at maximum clocks, without relying on a race-to-idle condition to artificially keep the clock speed high.
When pushed further by the interviewer to pin down what the "base clock" is, Cerny talks about what a synthetic benchmark that flips every transistor each tick would do: it would likely cause PS5 to reduce clocks more significantly, and it would cause a fixed-frequency system like PS4 to overheat and crash.
If you're using your APU at 100% TDP at all times during a game with some kind of instruction snooping system, then you're boosting useful game code while taming useless transients without having to cater to them.
Typically, efficient game code only has around 30-40% CU occupancy, which is why peak figures are meaningless. Especially if one system can recognise the difference between efficient game code and unproductive transients that would otherwise spike power on fixed clocks.
And before you misunderstand again, "efficient" in this case means above average CU occupancy. It means code written to really stretch the hardware in a useful way.
Synthetic stress tests are full of loops with no calculated result just for the sake of consuming watts/generating heat.
Fixed power, variable frequency is like normalising an audio waveform: flatten out the spikes so that you can amplify the rest of the audio without those spikes clipping.
Spikes aren't efficient, useful game code; they're typically oversights like uncapped low-triangle scenes, not extremely busy scenes with lots going on.
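Pushing the analogy with a toy example (nothing to do with how the console actually implements it): cap the rare spikes, and you can raise the gain on everything else without clipping.

```python
# Toy version of the audio analogy: cap the rare spikes, then the whole signal
# can be amplified further without clipping. Swap "gain" for "clock speed" and
# "spikes" for unproductive power transients and you have the general idea.
signal = [0.3, 0.4, 0.35, 0.95, 0.4, 0.3]   # one rogue spike at 0.95

def max_gain(samples, ceiling=1.0):
    return ceiling / max(samples)

print(f"gain limited by the spike:  {max_gain(signal):.2f}x")

limited = [min(s, 0.5) for s in signal]      # tame the spike first
print(f"gain after limiting spikes: {max_gain(limited):.2f}x")
```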
GPU work done is watts consumed, not clock speed.
wtfl;dr: you misunderstood Cerny's context for bringing up power-hungry instructions to the extent that it makes the opposite point to the one you think it does. You're taking an arbitrarily chosen number used to highlight a relationship and inferring from it what PS5 clocks are, while forgetting it's fixed power and variable clock, not variable power and variable clock.
wtfl;dr: trolling and derailing doesn't go down well here.