...
Yea, yea. Doesn't change my point; I already edited it in.
What point? You didn't address the more realistic, simplified bandwidth compromises on the XsX (from contention between its two memory pools) that I explained; you just doubled down on info that misrepresents the XsX memory balancing act when it tries to use a unified 12GB with the GPU.
Let's put some depth into the simple theoretical exercise of a unified 12GB for the XsX GPU.
Off the bat we'll look at CPU bandwidth.
40GB/s is in the ballpark from the Road to PS5 talk, so we'll use that figure for both systems.
On PS5 that's a simple deduction from the unified pool:
448GB/s - 40GB/s = 408GB/s for a 12GB VRAM pool for the GPU.
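As a quick sanity check, here's that deduction as a trivial Python sketch (the figures are the theoretical/ballpark numbers above):

```
PS5_TOTAL_BW = 448.0  # GB/s, PS5's unified GDDR6 theoretical peak
CPU_BW = 40.0         # GB/s, ballpark CPU draw from Road to PS5

ps5_gpu_bw = PS5_TOTAL_BW - CPU_BW
print(ps5_gpu_bw)  # 408.0 GB/s left for a 12GB VRAM pool
```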
On XsX we have 10GB at 560GB/s and 6GB at 336GB/s, but the CPU's 40GB/s comes out of the 336GB/s pool, and both pools share the memory controller's time. 40/336 = ~12%, which instantly means the 560GB/s pool is only usable ~88% of the time (~494GB/s), before we even start looking at the 2GB of VRAM on the CPU-memory side.
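The same time-slice deduction in Python (idealised numbers as above; variable names are just for illustration):

```
XSX_FAST_BW = 560.0  # GB/s, the 10GB GPU-optimal pool
XSX_SLOW_BW = 336.0  # GB/s, the 6GB standard pool
CPU_BW = 40.0        # GB/s, CPU traffic served from the slow pool

cpu_time_share = CPU_BW / XSX_SLOW_BW  # ~0.119, rounded to 12% here
gpu_time_share = 1.0 - cpu_time_share  # ~0.88 of controller time left for the GPU
print(gpu_time_share * XSX_FAST_BW)    # ~493.3 GB/s (~494 with rounding)
```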
Say a typical game sticking to 10GB of VRAM averages (494/10) 49.4GB/s per GB, for argument's sake, omitting all inefficiencies for this example. But as we move up to 12GB that distribution no longer holds, so first we need to work out how much longer a GPU task bottlenecked by the 2GB of CPU-side memory takes compared to the same task on 2GB of the 10GB GPU-side pool. That is simply 560/336, a ratio of ~1.67 (5/3).
To then work out how much of that remaining 88% of processing time is needed for the 2GB at 336GB/s versus the 10GB at 560GB/s, while giving each GB equal processing capacity, we scale the 2GB by 5/3 and treat it as (10/3) ~3.33GB vs 10GB, effectively normalizing the bandwidths of both pools.
That way you get A) = 3.33/13.33 x 0.88 and B) = 10/13.33 x 0.88:
A) (10/3)/(40/3) x 0.88 = (10/40) x 0.88 = (1/4) x 0.88 = 0.22
B) 10/(40/3) x 0.88 = (30/40) x 0.88 = (3/4) x 0.88 = 0.66
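Replaying the normalization and the two shares in Python (using the rounded 0.88 from above):

```
RATIO = 560.0 / 336.0  # 5/3: how much longer slow-pool work takes per GB
GPU_TIME = 0.88        # the GPU's rounded share of controller time

slow_gb_norm = 2.0 * RATIO            # 2GB scaled up to 10/3 ~= 3.33 "equivalent GB"
fast_gb = 10.0
total_norm = slow_gb_norm + fast_gb   # 40/3 ~= 13.33

share_A = slow_gb_norm / total_norm * GPU_TIME  # time share for the slow 2GB
share_B = fast_gb / total_norm * GPU_TIME       # time share for the fast 10GB
print(round(share_A, 2), round(share_B, 2))     # 0.22 0.66
```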
so then we get in total for the CPU-side memory (0.12 + A) x 336 = (0.12 + 0.22) x 336 = 114.24GB/s
+
the GPU side B x 560 = 0.66 x 560 = 369.6GB/s
= 483.84GB/s
which in these idealised circumstances is already a lot closer to the PS5's unified 448GB/s. And in real terms the 22% (0.22) slice of the 336GB/s isn't worth 114.24GB/s to the GPU - that figure includes the CPU's 12%. The GPU's raw share is 0.22 x 336 = 73.92GB/s, and because slow-pool time is wasteful by a factor of 5/3, its real bandwidth value to the GPU is 73.92 x 3/5 = ~44.35GB/s. Adding that back to the 66% (0.66) share, 369.6GB/s, produces a total effective GPU bandwidth of ~414GB/s versus the PS5's 408GB/s.
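And the totals with the 3/5 discount on the slow-pool traffic, in Python (still the idealised model above, not real hardware):

```
XSX_FAST_BW, XSX_SLOW_BW = 560.0, 336.0
cpu_share, share_A, share_B = 0.12, 0.22, 0.66  # rounded shares from above

slow_pool_total = (cpu_share + share_A) * XSX_SLOW_BW  # 114.24 GB/s, CPU + GPU combined
fast_pool_total = share_B * XSX_FAST_BW                # 369.6 GB/s to the GPU
print(slow_pool_total + fast_pool_total)               # 483.84 GB/s consumed in total

gpu_slow_raw = share_A * XSX_SLOW_BW     # 73.92 GB/s actually moved for the GPU
gpu_slow_eff = gpu_slow_raw * 3.0 / 5.0  # ~44.35 GB/s after the 5/3 waste factor
print(fast_pool_total + gpu_slow_eff)    # ~413.95 GB/s -> the "~414GB/s" figure
```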
The major missing piece here is that this is all idealised for XsX memory use: it ignores the big efficiency losses from the necessary copying of redundant data between the pools, the need for a second garbage collector/memory defragmenter for the slower pool taking further bandwidth time, and the small memory-controller efficiency loss for switching modes/pools - even assuming all of those solutions are perfect despite the greater complexity.
Realistically, the XsX using 10GB for VRAM probably matches the PS5 for real-time GPU bandwidth, and at 12GB in real game code it is probably around 370GB/s (roughly 90% of that 414GB/s figure, lost to complexity) - obviously still ignoring that all of these figures will in practice be lower, scaled equivalents of the theoretical maximums used here.
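And the final realistic-case haircut, for completeness (the 90% complexity factor is just a rough guess):

```
gpu_effective = 413.95  # GB/s, the idealised 12GB figure from above
COMPLEXITY = 0.90       # rough allowance for copies, second GC, mode switching

print(gpu_effective * COMPLEXITY)  # ~372.6 GB/s -> "around 370GB/s"
```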