Nah, in IT nothing is ever "enough"; "enough" only exists in the sense that a wider bus or more cache would not net any benefit anymore. If you get me: I don't think the compute part of the Xbox APU would get more out of the GPU now than it would with a 512-bit bus and/or a massive L3 cache (512-bit is unrealistic, just a nice round number). Caches are only beneficial when some higher-powered part of the silicon needs to talk to a constrained part of the silicon or to an outside component.
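Just to put rough numbers on the bus-width point, here's the back-of-envelope math (the 14 Gbps GDDR6 per-pin rate is my assumption for illustration; the 512-bit bus is the same hypothetical number as above):

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # Peak bandwidth in GB/s = (bus width in bits / 8 bits per byte)
    # * per-pin data rate in Gbps
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(256, 14))  # PS5-style 256-bit bus -> 448.0 GB/s
print(bandwidth_gbs(320, 14))  # XSX-style 320-bit bus -> 560.0 GB/s
print(bandwidth_gbs(512, 14))  # hypothetical 512-bit bus -> 896.0 GB/s
```

Point being, a 512-bit bus doubles the PS5-class peak on paper, but that only matters if the compute side can actually eat that much data.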
The XSX is by no means perfect, we know that. However, the need for cache is stronger on the PS5 (only speculation; maybe it's balanced "enough-ly") due to its narrower memory bus. The XSX also sucks on the API front, where the PS5 has a massive advantage. My initial message was a hot take about what the XSX needs. I don't think either console needs some secret sauce, especially relative to the price tag, which in both cases is nothing short of a miracle.
The latency point is very much warranted, and we already see that sharing the memory bus between CPU and GPU is a constraint, for example with alpha effects and mainly with AF: on PC you've had this neat 16x AF basically forced all the time for 20 years, while on consoles it's still not used today. However, managing the cache across CPU and GPU instructions is challenging, and it doesn't work in every game. Like I said, an example of this is the SAM feature (Resizable BAR) on Windows, which shows a meaningful uplift on PC; I believe the PS4 already had something like it, because some ACE operations on the PS4 were really quick while they suck/sucked on Xbox. Some physics/collision detection, for example, can often be computed (is that a word, or is my second language lagging?) within a very small memory footprint, so it can be done in cache and drawn directly to screen. On the PS4, not a problem; on Xbox, you have to go through a hell of random APIs and still end up taking twice as long as the PS4 despite a faster CPU.
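To illustrate the "small memory footprint" argument: whether a collision pass stays cache-resident is basically just working-set size vs. cache size. This is a toy sketch; the 16-byte sphere and the 4 MB cache are made-up illustrative numbers, not actual console specs.

```python
# Toy model: does a simple collision working set fit in cache?
BYTES_PER_SPHERE = 16          # e.g. x, y, z, radius as four 32-bit floats
CACHE_BYTES = 4 * 1024**2      # hypothetical 4 MB of cache for this data

def fits_in_cache(num_objects):
    # If the whole working set fits, the pass never touches main memory
    # after the first load; if not, it spills and eats bus bandwidth.
    return num_objects * BYTES_PER_SPHERE <= CACHE_BYTES

print(fits_in_cache(100_000))    # 1.6 MB working set -> True
print(fits_in_cache(1_000_000))  # 16 MB working set -> False
```

That's the whole difference: when the data fits, latency to main memory mostly stops mattering; when it spills, you're back to fighting the CPU/GPU over the shared bus.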
So this discussion is quite a complex one.