PaintTinJr said:
Digital Foundry said it in an article or a video for definite, but I have a sneaky feeling the article or video has been revised, as I can't find it going back through. They definitely said that accessing any part of the 6GB with the GPU dropped the bandwidth to 336GB/s, and that any access from the CPU to the 10GB dropped the bandwidth to 336GB/s as well.

I'm not sure why accessing the "slower" 6GB over its 3 x 64-bit channels (assuming it's 64-bit channels like RDNA1) would disable the remaining two 64-bit channels.
MS said that access to the 6GB of standard memory was at 336GB/s, not that accessing that 6GB reduced the entire system bandwidth to 336GB/s.
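To put rough numbers on it, here's a quick back-of-the-envelope check. It assumes the publicly quoted 14Gbps GDDR6 on a 320-bit bus, with the 6GB pool sitting behind 192 bits of it (the same 3-out-of-5-channel split assumed above), which is just illustrative arithmetic, not a statement about how the memory controller actually works:

```python
# Rough bandwidth arithmetic for the XSX memory layout (illustrative only).
# Assumes 14 Gbps GDDR6 on a 320-bit bus, per the publicly quoted specs.

GBPS_PER_PIN = 14          # GDDR6 data rate per pin, in Gbit/s
FULL_BUS_BITS = 320        # width of the whole bus
SLOW_POOL_BITS = 192       # assumed portion of the bus behind the 6GB "standard" pool

def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float = GBPS_PER_PIN) -> float:
    """Peak bandwidth in GB/s for a bus of the given width."""
    return bus_bits * gbps_per_pin / 8  # bits per transfer -> bytes per second

print(bandwidth_gb_s(FULL_BUS_BITS))                   # 560.0 GB/s (10GB "GPU optimal" pool)
print(bandwidth_gb_s(SLOW_POOL_BITS))                  # 336.0 GB/s (6GB "standard" pool)
print(bandwidth_gb_s(FULL_BUS_BITS - SLOW_POOL_BITS))  # 224.0 GB/s on the channels the 6GB pool doesn't touch
```

That's only peak-rate arithmetic, of course; sustained figures depend on access patterns and contention, which is the next point.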
Now it might be that, depending on what data the GPU was waiting for and how it was striped across the memory channels, the CPU, IO etc. could effectively block or slow accesses that also use the remaining 128 bits of the bus... but that's very different from saying that a CPU access inherently blocks memory channels it isn't using.
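As a toy illustration of that distinction (a sketch with made-up request shares, not how the real arbitration works): contention only costs bandwidth on the channels both clients actually hit, so the hit scales with the overlap rather than the whole bus snapping down to 336GB/s.

```python
# Toy contention model (illustrative, not the real arbitration scheme).
# Five 64-bit channels, per the channel layout assumed above; each delivers
# 112 GB/s peak (560 / 5). GPU traffic stripes across all five, CPU traffic
# only touches the three channels backing the 6GB pool.

CHANNELS = 5
PER_CHANNEL_GB_S = 560 / CHANNELS    # 112 GB/s per 64-bit channel
CPU_CHANNELS = {0, 1, 2}             # channels the CPU is hammering (assumed)
CPU_SHARE = 0.25                     # made-up fraction of those channels' cycles the CPU wins

gpu_bw = 0.0
for ch in range(CHANNELS):
    if ch in CPU_CHANNELS:
        gpu_bw += PER_CHANNEL_GB_S * (1 - CPU_SHARE)   # GPU only loses the contended share
    else:
        gpu_bw += PER_CHANNEL_GB_S                     # untouched channels run at full rate

print(f"GPU effective bandwidth: {gpu_bw:.0f} GB/s")   # 476 GB/s with these toy numbers, not 336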
There's a whole lot of imagining worst-case scenarios for the XSX memory setup!
It's possible they were simply mistaken to say that, but it feels far more like an attempt to head off the narrative, from people who understand bus contention, that the setup is more likely a loss than a win on the memory front. DF themselves use the word 'compromise' when describing the asymmetric memory model, because they know it was done for some combination of GDDR6 availability, GDDR6 chip-speed/capacity costs, and stability concerns when pushing to 3.8GHz and 12TF.