Both the Zen 2 CPU and the RDNA GPU have large multi-MB caches to minimize context-switch overheads at the memory controllers.
The XSX is already delivering RTX 2080-class results in the two-week raw Gears 5 port benchmark at PC Ultra settings.
The RX 5600 XT's hit from its reduced 192-bit bus was relatively minor. If you scale from the Sapphire RX 5600 XT (7.9 TFLOPS average), the PS5's 10.28 TFLOPS GPU still lands very close to the RTX 2070 Super and above the RX 5700 XT (9.66 TFLOPS average), i.e. around the 130% level.
Scaling TP's results to the XSX's GPU power lands it in the RTX 2080 to RTX 2080 Super range.
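If it helps, here's the scaling arithmetic as a quick Python sketch. It's the same naive linear-with-TFLOPS extrapolation the comparison above rests on, ignoring clock, bandwidth, and architectural differences:

```python
# Crude TFLOPS-based extrapolation: assumes performance scales
# linearly with compute, ignoring clocks and memory bandwidth.
RX_5600_XT_TFLOPS = 7.9    # Sapphire card, average boost
RX_5700_XT_TFLOPS = 9.66   # average boost
PS5_TFLOPS = 10.28
XSX_TFLOPS = 12.15

print(PS5_TFLOPS / RX_5600_XT_TFLOPS)   # ~1.30 -> the "130% level" vs the 5600 XT
print(PS5_TFLOPS / RX_5700_XT_TFLOPS)   # ~1.06 -> just above the 5700 XT
print(XSX_TFLOPS / RX_5600_XT_TFLOPS)   # ~1.54
```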
I know you are only parroting the same type of rubbish that DF promotes with misleading PC-versus-console-APU extrapolations, but surely you can see that a game with gameplay logic designed around an old console (Xbox One) with a tiny amount of eSRAM and slow DDR3 isn't thrashing any memory bandwidth with contention on a PC, XsX, or One X, no matter the resolution or the GPU eye-candy effects that don't alter gameplay, yes?
It is no test for the XsX, and it certainly provides little or no insight into how the XsX's asymmetric memory setup - slower 192-bit CPU access and faster 320-bit GPU access (to just 10GB) - will hold up against the PS5's simpler setup - uniform 256-bit access for both GPU and CPU across a unified 16GB, at still 80% of the XsX's faster GPU-access bandwidth.
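For reference, a quick sanity check on those percentages, using the publicly quoted bandwidth figures:

```python
XSX_FAST = 560.0   # GB/s, 320-bit GPU-optimal access to the 10GB pool
XSX_SLOW = 336.0   # GB/s, access touching the remaining 6GB
PS5_BW   = 448.0   # GB/s, uniform 256-bit access to all 16GB

print(PS5_BW / XSX_FAST)   # 0.8  -> PS5 has 80% of the XsX fast-pool bandwidth
print(XSX_SLOW / PS5_BW)   # 0.75 -> XsX's 6GB pool is 25% slower than the PS5
```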
Let's take a worst-case scenario for getting data from a peripheral into the XsX GPU's 10GB pool, compared with the PS5. The PS5 has access to all 16GB at the same 256-bit width, so the data is bubbled into the GDDR6 at the earliest convenience and can subsequently be accessed by the GPU at full bandwidth as necessary.
When the XsX bubbles the data into its GDDR6, the overall bandwidth drops to 336GB/s while the data goes into the 6GB pool - taking 1.33x the time it took the PS5 - and then, for the internal copy from the 6GB to the 10GB, the bus (AFAIK) drops to 336GB/s again for bubbling a read back to the CPU cache, and again for writing the data back out to the 10GB, resulting in 2.66x more time at the 336GB/s bandwidth, before the next read by the GPU at 560GB/s takes 0.8x the time of the PS5 read.
Assuming the mixed bit-width alignment of XsX accesses doesn't add wastage in bandwidth padding, this crude comparison gives a setup cost for the first read by the GPU on the PS5 of 1x write data + 1x read data = a relative time of 2.
The XsX by comparison does 1.33x write data, 1.33x read data, 1.33x write data, and finally 0.8x read data = a relative time of 4.8. Normalising gives the PS5 a time of 1 and the XsX a time of 2.4 (a 140% advantage to the PS5).
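Put as a back-of-the-envelope script, with the same crude assumptions (a single shared bus, fully serialised transfers, no overlap, caching, or padding):

```python
# Relative transfer times, normalised so one full pass over the data
# at PS5 bandwidth (448GB/s) = 1 time unit.
PS5_BW, XSX_FAST, XSX_SLOW = 448.0, 560.0, 336.0

ps5_setup = 1 + 1     # write in, then GPU read, both at 448GB/s
# write to 6GB, copy-read, copy-write (all at 336GB/s), then GPU read at 560GB/s
xsx_setup = 3 * (PS5_BW / XSX_SLOW) + (PS5_BW / XSX_FAST)

print(ps5_setup, xsx_setup)    # 2 vs 4.8
print(xsx_setup / ps5_setup)   # 2.4 -> ~140% more time on the XsX
```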
It would therefore take both GPUs reading/writing that data 7 more times between memory and GPU for the XsX to draw level in bandwidth cost for that workload, with only the 8th round onward turning its 20% per-access advantage into a net gain.
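And the break-even point under the same model, done in exact fractions so the answer comes out clean (each extra round is one read+write pair, with the XsX assumed to work entirely out of its fast 10GB):

```python
from fractions import Fraction as F

PS5_BW, XSX_FAST, XSX_SLOW = 448, 560, 336   # GB/s

# Setup costs from the sketch above, in PS5-pass time units.
ps5_setup = F(2)                                            # write + read at 448GB/s
xsx_setup = 3 * F(PS5_BW, XSX_SLOW) + F(PS5_BW, XSX_FAST)   # 4 + 4/5 = 24/5

per_round_ps5 = F(2)                    # extra read+write pair at 448GB/s
per_round_xsx = 2 * F(PS5_BW, XSX_FAST) # extra read+write pair at 560GB/s = 8/5

# Solve: ps5_setup + n*per_round_ps5 == xsx_setup + n*per_round_xsx
rounds = (xsx_setup - ps5_setup) / (per_round_ps5 - per_round_xsx)
print(rounds)   # 7 -> level after 7 extra rounds; the XsX only nets a gain from the 8th
```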
Seeing numbers like a 140% higher setup cost, against a 20% GPU gain per access to the 10GB for the XSX and a 25% loss for CPU access to the 6GB - and that's ignoring the statistically increased likelihood of bandwidth wasted on workload padding with 192-bit and 320-bit access, or the PS5's I/O complex with its 6 priority levels of SSD data streaming - means that without a less crude working scenario in which the XsX's un-unified data access wins out, I'm going to struggle to believe the PS5 isn't going to have a big advantage in memory bandwidth with real workloads.