Neither are any of the other upscalers - short of doing a plain bicubic upsample, and even that has a cost associated with it at 4K simply because of the memory writes.
Basically - unless you're comparing 1080p native (letting the TV do the upscale) against PSSR upscaling 1080p->4K, the cost is never simply 'whatever PSSR costs' - it's only the relative difference between, say, FSR3.x and PSSR on the target hardware.
Also, we know nothing about how much of that can be async-ed with the rest of the frame (since we don't know anything about the execution units that accelerate AI inference in PS5 Pro or how they schedule). So the amortized cost could be significantly lower than the fixed-time block it would consume if executed on its own. But of course - we're making a lot of assumptions about the base cost in the first place.
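To make the "relative difference" and amortization point concrete, here's a minimal back-of-the-envelope sketch in Python. Every millisecond figure and the overlap fraction in it are made-up placeholders for illustration, not measurements of PS5 Pro, PSSR, or FSR:

```python
# Back-of-the-envelope frame-budget math.
# All numbers below are hypothetical placeholders, NOT measured costs.

FRAME_BUDGET_MS = 1000.0 / 60.0   # 60 fps target -> ~16.67 ms per frame

fsr_cost_ms = 2.0    # assumed cost of the upscaler being replaced
pssr_cost_ms = 2.5   # assumed standalone cost of PSSR

# The game wasn't paying 0 ms before - what matters is the delta.
delta_ms = pssr_cost_ms - fsr_cost_ms
print(f"Added cost vs existing upscaler: {delta_ms:.2f} ms "
      f"({delta_ms / FRAME_BUDGET_MS:.1%} of a 60 fps frame)")

# If some fraction of the inference overlaps with other GPU/CPU work
# (pure speculation - we don't know how the Pro's AI units schedule),
# the amortized cost shrinks further.
async_overlap = 0.5  # assumed: half of PSSR's time hides behind other work
amortized_ms = pssr_cost_ms * (1.0 - async_overlap)
print(f"Amortized PSSR cost at {async_overlap:.0%} overlap: {amortized_ms:.2f} ms")
```

The specific numbers don't matter - the point is that a headline "PSSR costs X ms" figure overstates what a game actually gives up relative to the upscaler it replaces.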
The whole thing is kind of moot - when VR hardware launched in 2016, Oculus would eat as much as 4ms on the recommended spec (a GTX 970) just to do its reprojection/warping/tracking and other housekeeping, and no one ever talked about it consuming '30% of the frame at 90fps' - it was just a fixed cost VR titles worked with. Likewise, DLSS had a much bigger frame-time cost on the 20xx series than it does today, and again - it was barely a footnote; everyone just talked about visual results and ignored the trade-offs.
Btw, them using Perf modes instead of Fidelity modes as the base is a big deal.
I expect most of this has to do with Fidelity modes potentially having a different CPU footprint, so when you are given 2 days to make a Pro patch and ship it (that's what PS4 Pro updates were like in the launch period), you are going to go with the safest possible option - not one that requires you to retest the entire game multiple times when there's no time for it.
I.e. I would caution against making assumptions about GPU-specific elements based on these early updates, especially with next to no benchmarking. It's not like PS4 Pro paper specs were a good predictor of what we actually ended up getting either.