AMD gave Sony and MS what they wanted, within a budget. Remember that AMD already had Zen 3, with a single unified 32 MB pool of L3 cache, by the time the PS5 and Series X released.
But Sony and MS chose to sacrifice CPU cache to fit more CUs on the GPU, and to save die space on the SoC.
Also, AMD would probably have charged more for a Zen 3 SoC than for one with Zen 2, since Zen 3 was the more recent architecture.
We have to remember that consoles are very price sensitive.
I think it's true in theory that Sony & MS could have gone with Zen 3 CPU, or at least Zen 3 unified L3$. But it might also be possible that AMD did not have an APU design ready with those features at the time Sony & MS needed APUs to test and iterate in time for launch in 2020.
Agreed with pricing being a factor; Zen 3 would definitely have cost more than Zen 2.
Doesn't matter how much HBM prices have lowered, it's still way too expensive for a console.
Again, maybe currently. But at large economies of scale, the per-chip price for a company like Sony would be a lot lower than, say, what some server company pays for HBM in an ad-hoc rack setup. I don't think Sony would pay that much more for decent HBM memory than they currently do for GDDR6, at the scale of the orders they put in for their consoles.
I mean, even Microsoft admitted they considered HBM at one point, but decided against it due to JEDEC. It didn't seem to be due to pricing concerns on their end, and they have much smaller economies-of-scale benefits than Sony does with PlayStation.
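The economies-of-scale argument above can be sketched with a Wright's-law learning curve, where unit cost falls by a fixed fraction every time cumulative volume doubles. This is only an illustrative model: the $100 starting cost, the 90% learning rate, and the volumes are made-up assumptions, not real HBM or GDDR pricing.

```python
import math

# Wright's-law learning curve: unit cost drops by a fixed fraction
# (here 10%, i.e. a 90% learning rate) each time cumulative volume
# doubles. All figures are illustrative assumptions, not real pricing.

def unit_cost(first_unit_cost: float, cumulative_units: float,
              learning_rate: float = 0.90) -> float:
    """Cost of the n-th unit under a Wright's-law curve."""
    b = math.log2(learning_rate)  # exponent, e.g. log2(0.9) ~= -0.152
    return first_unit_cost * cumulative_units ** b

# Hypothetical: a server vendor ordering 100k stacks vs a console
# vendor ordering 50 million over a generation.
print(f"100k units: ${unit_cost(100.0, 1e5):.2f} per stack")
print(f"50M units:  ${unit_cost(100.0, 5e7):.2f} per stack")
```

Under these assumed numbers the console-scale buyer pays roughly a third of the small buyer's per-unit price, which is the shape of the argument being made, even if the actual figures are unknowable from outside.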
The issue with latency is not so much GDDR6 itself, but rather the memory controller, which is tuned for bandwidth at the expense of latency.
True. But this is where HBM would have an obvious advantage: you get better latency without sacrificing bandwidth.
On a PC with two memory pools, one memory controller is tuned for latency, which suits the CPU, and the other is tuned for bandwidth, which suits the GPU.
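The bandwidth-vs-latency tuning trade-off can be made concrete with Little's Law: to sustain a given bandwidth at a given latency, the controller must keep bandwidth × latency bytes in flight, which is why bandwidth-tuned controllers use deep request queues that in turn add queuing delay for latency-sensitive CPU reads. The 448 GB/s figure matches PS5-class GDDR6 bandwidth, but the latency numbers below are rough assumptions, not measured hardware values.

```python
# Little's Law sketch: bytes_in_flight = bandwidth * latency.
# A controller tuned for high bandwidth must track far more
# outstanding data (deep queues), and deep queues add queuing
# delay for latency-sensitive CPU reads. Latency figures here
# are illustrative assumptions, not measured console numbers.

def bytes_in_flight(bandwidth_gb_s: float, latency_ns: float) -> float:
    """Data that must be outstanding to sustain the given bandwidth."""
    return bandwidth_gb_s * latency_ns  # (1e9 B/s) * (1e-9 s) = bytes

# Hypothetical latency-tuned pool (DDR-like) vs bandwidth-tuned pool
# (GDDR-like, using PS5-class 448 GB/s aggregate bandwidth).
cpu_pool = bytes_in_flight(50.0, 80.0)    # 50 GB/s at ~80 ns
gpu_pool = bytes_in_flight(448.0, 300.0)  # 448 GB/s at ~300 ns

print(f"latency-tuned pool:   ~{cpu_pool / 1024:.1f} KiB in flight")
print(f"bandwidth-tuned pool: ~{gpu_pool / 1024:.1f} KiB in flight")
```

The two-orders-of-magnitude gap in outstanding data is what drives the very different queue depths and scheduling policies in the two controllers.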
Yeah, and that is great for PC. But I don't see that approach being cost-effective in a console. You'd lose economies of scale by splitting order volume between two different memory types, so per-chip costs increase from that alone. Then you lose the hUMA advantages, so you need mechanisms to assist with data management and enforce coherency, and probably some buffer memory on the controllers between the two pools to mitigate the performance drops from splitting up the memory.
That essentially complicates things for developers. In a sense I could see Microsoft taking that approach, but not Sony.
Once again, consoles are very price sensitive, and having two memory controllers means more die space.
But cache also means greater cost.
Yep. It really just comes down to what works best for market and product needs.
I can see a future where Microsoft prioritizes a modular, PC-like memory setup in their next system (whether it's on a console business model or not is uncertain) to address the CPU latency issue and get good GPU bandwidth, while opening up the possibility of capacity expansion.
Meanwhile, I can see Sony opting for a more cache-oriented approach while sticking to a hUMA memory setup, potentially HBM3-based, with fixed memory capacity. That way they still address CPU latency, get bandwidth for the GPU and decent capacity, and maximize the benefits from economies of scale.
Ah, imagine a game 100% optimized for a 4090-based machine... one can dream.
If arcades still existed (as actual arcades, not FECs), you'd be getting that. Probably from Sega.
They didn't have Zen 3 released, but it could have been in the design phase, just like Zen 2 was.
And don't blame me. DF said it.
DF say a lot of things. Maybe the Series X CPU was decided by 2016 (I strongly doubt it), but I have many reasons to suspect the actual full design of the X and S went through a compressed development stage from mid-to-late 2017 through to late 2019/early 2020.
"Compressed", as in the bulk of design and development. Microsoft likely had some amount of concept work on a 9th-gen Xbox prior and pulled parts of that into Series X and S, but R&D on that earlier design was likely quite slow due to uncertainty around the division's future after XBO's launch. The division's funding being reduced during the Myerson years would have had a similar effect.
Then, after the One S and One X came out, Microsoft likely took that feedback and based the bulk of the Series X and S design and product strategy on those two devices.
Why use Zen 3+ but configure it like Zen 4c? The mobile variant kept most of the frequency and cache, with, I believe, less instruction set support.
No, for the PS5 Pro I was thinking they could use Zen 4. Or Zen 3. Whatever, just something other than Zen 2. I'm going off the idea that Zen 3 and 4 are backwards compatible with Zen 2 at the microcode level, whereas the CPUs AMD had at the time of the PS4 Pro weren't microcode-compatible with the Jaguar cores. Hence why the PS4 Pro stuck with Jaguar.