On one hand, the Series X would probably be easier to develop for with a 2GB GDDR6 chip on every channel, giving it the full 560GB/s across all of its memory.
On the other hand, during hardware design Microsoft may have concluded that the console doesn't really need 560GB/s to begin with, without predicting the scale of the memory contention issues they ended up having.
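As a back-of-envelope sketch (assuming 14Gbps GDDR6 with a 32-bit interface per chip, which matches the published Series X specs, and the shipping mix of six 2GB plus four 1GB chips):

```python
# Back-of-envelope GDDR6 bandwidth math for the Series X's 320-bit bus.
GBPS_PER_PIN = 14                              # published GDDR6 data rate
PINS_PER_CHIP = 32                             # one 32-bit channel per chip
chip_bw = GBPS_PER_PIN * PINS_PER_CHIP / 8     # 56 GB/s per chip

# Hypothetical uniform config: ten 2GB chips, so every address
# interleaves across all 10 channels.
uniform_bw = 10 * chip_bw                      # 560 GB/s for all 20GB

# Shipping config: six 2GB chips + four 1GB chips. The first 10GB
# interleave across all 10 chips; the remaining 6GB live only on the
# six 2GB chips, so they interleave across just 6 channels.
fast_pool_bw = 10 * chip_bw                    # 560 GB/s for 10GB
slow_pool_bw = 6 * chip_bw                     # 336 GB/s for 6GB

print(f"uniform 10x2GB: 20GB @ {uniform_bw:.0f} GB/s")
print(f"shipping mix:   10GB @ {fast_pool_bw:.0f} GB/s + 6GB @ {slow_pool_bw:.0f} GB/s")
```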
If anything, the PS5 seems to be pretty well balanced and it "only" uses 448GB/s. There's a common (mis)conception that memory bandwidth should scale with compute throughput in GPUs, so a Series X with 18% higher shader throughput than the PS5 should also get proportionally higher memory bandwidth. However, IIRC shader processors aren't the most bandwidth-hungry components in a GPU; that distinction goes to the ROPs, which are usually hardwired to the memory controllers in discrete GPUs. The PS4 Pro is a notable case: a theoretical pixel-fillrate "monster" with its 64 ROPs, yet its official documentation is clear that the chip can't get anywhere near that limit because of a memory bandwidth bottleneck.
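To put rough numbers on the PS4 Pro case (using the public figures of 64 ROPs, a 911MHz GPU clock and 218GB/s of GDDR5 bandwidth; the 4 bytes/pixel assumes plain RGBA8 color writes):

```python
# Fillrate-vs-bandwidth check for the PS4 Pro (public figures:
# 64 ROPs, 911MHz GPU clock, 218 GB/s of GDDR5 bandwidth).
rops = 64
clock_ghz = 0.911
fillrate_gpix = rops * clock_ghz                  # ~58.3 Gpixels/s theoretical

bytes_per_pixel = 4                               # RGBA8 color writes alone
color_write_bw = fillrate_gpix * bytes_per_pixel  # ~233 GB/s demanded

total_bw = 218                                    # GB/s, shared with CPU/shaders/textures
print(f"peak fillrate:      {fillrate_gpix:.1f} Gpix/s")
print(f"color writes alone: {color_write_bw:.1f} GB/s vs {total_bw} GB/s available")
# Color writes by themselves would oversubscribe the entire bus before
# depth, texturing, shaders or the CPU touch memory -- hence the bottleneck.
```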
The PS5 and the Series X have the same raster output (ROP) throughput per clock, but the PS5 runs at higher clocks, so the PS5's design might actually be more bandwidth-demanding than the Series X's.
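The same napkin math for the two current consoles (again assuming plain RGBA8 writes, with the public figures of 64 ROPs each, up to 2.23GHz on the PS5 and a fixed 1.825GHz on the Series X):

```python
# Napkin math for PS5 vs Series X (public figures: 64 ROPs each,
# PS5 at up to 2.23GHz, Series X fixed at 1.825GHz).
def color_write_demand(rops: int, clock_ghz: float, bytes_per_pixel: int = 4) -> float:
    """GB/s needed to sustain peak fillrate with RGBA8 color writes."""
    return rops * clock_ghz * bytes_per_pixel

ps5_demand = color_write_demand(64, 2.23)      # ~571 GB/s vs a 448 GB/s bus
xsx_demand = color_write_demand(64, 1.825)     # ~467 GB/s vs the 560 GB/s fast pool

print(f"PS5:      {ps5_demand:.0f} GB/s demand, 448 GB/s available")
print(f"Series X: {xsx_demand:.0f} GB/s demand, 560 GB/s fast pool")
# Relative to its bus, the PS5's ROPs are the hungrier of the two.
```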
So it could be that the reason the Series X uses a 320-bit memory controller has little to do with running videogames.
The PS5 has one purpose alone: to run videogames. The Series X serves two purposes: to run videogames and to accelerate compute workloads in Azure servers. The Series X chip was co-designed by the Azure Silicon Architecture team, which is actually the team that originally presented the SoC at Hot Chips 2020. The 320-bit memory controller could be there to let the SoC access a total of 20GB (or even 40GB in a clamshell configuration) of system memory in Azure server deployments.
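A quick sanity check on those capacity figures (assuming 2GB chips, the highest common GDDR6 density at the time):

```python
# Capacity options for a 10-channel (320-bit) GDDR6 controller with 2GB chips.
channels, chip_gb = 10, 2
one_per_channel = channels * chip_gb           # 20GB, one chip per 32-bit channel
clamshell = channels * 2 * chip_gb             # 40GB, two chips sharing each channel
print(one_per_channel, clamshell)              # 20 40
```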
Microsoft's dual-use design was always going to bring some trade-offs, and the most obvious one is that they had to produce a chip roughly 20% larger on the same process to run videogames at about the same target image quality.
As for the memory pools with uneven bandwidths and the memory contention issues they brought, it might be something Microsoft didn't see coming, and perhaps they should have used only 8 channels (256-bit) on the gaming implementation of the Series X SoC.
Or perhaps someone did see it coming, but the technical marketing teams wanted numbers to gloat about, and developers were going to have to adapt to uneven memory pools for the Series S regardless.