so developers will likely get proportionally more memory out of the gate because less will be reserved
PS5 has 3.5GB reserved.
PS5 Pro has 4.3GB reserved (2.3GB GDDR6, 2GB DDR5). More, not less.
NS2 has 3GB reserved.
PS6 GPU allocation should top out at 20~23GB.
They can more easily use more "VRAM for the GPU" with higher settings than they can use more "RAM for the CPU".
Because 3.5GB is on the low end. If the average game were completely fine with that, 8GB of RAM on PC would be stutter-free.
BG3 Console is 5.3GB, CP2077 PC is 4.4GB, SoTR PC is 2.9GB, Wukong PC is 5.2GB, etc.
I am not ignoring the flexibility of UMA; it's already factored in. My point is more that 9GB of VRAM is on the high end of what can be allocated to the GPU alone.
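Rough back-of-envelope on that, using the OS reserve and the per-game examples above (not official allocations; the PC RAM numbers are only a stand-in for console CPU-side use):

```python
# Back-of-envelope PS5 split using the figures above. Nothing here is an
# official Sony allocation; the PC RAM numbers stand in for console CPU-side
# use, which is only an approximation.
TOTAL_RAM_GB = 16.0     # unified GDDR6 pool
OS_RESERVE_GB = 3.5     # per the reserve figure above
game_budget = TOTAL_RAM_GB - OS_RESERVE_GB   # 12.5 GB for the whole game

cpu_side_examples = {"BG3": 5.3, "Wukong": 5.2, "CP2077": 4.4, "SoTR": 2.9}

for title, cpu_gb in cpu_side_examples.items():
    gpu_gb = game_budget - cpu_gb            # whatever is left for GPU allocations
    print(f"{title}: ~{gpu_gb:.1f} GB left for the GPU out of {game_budget:.1f} GB")
# Prints roughly 7.2 to 9.6 GB, which is why ~9GB reads as the high end of
# what the GPU alone can realistically get.
```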
Is it though? Why would a game on a Series S require less RAM "for the CPU" than a PS5/XSX?
I didn't make that claim. The difference is the OS footprint. XSS lets developers use 8.8GB total between CPU and GPU (the OS takes 1.2GB). PS5? 12.5GB.
In practice, though, XSS's memory situation is brutal to deal with, so developers have to save every last MB they can.
I already shared the tweet about BG3 on XSS. They only managed to shave CPU usage from 5.3GB down to 4.7GB, and were forced to push VRAM use down from 3.4GB to 2.3GB (to leave enough headroom for overflows so it doesn't crash).
As you can see, it's a lot harder to reduce CPU usage than GPU usage. In practice XSS sits beneath 6GB dGPUs; you can call XSS a 5-5.5GB VRAM console, I guess.
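Rough sketch of that squeeze with the BG3 numbers above (the headroom figure is illustrative, not something the developers have stated):

```python
# XSS squeeze, using the BG3 figures above. The headroom number is illustrative;
# the point is that CPU-side use barely moves, so the big cut lands on the GPU.
XSS_GAME_BUDGET_GB = 8.8           # 10 GB minus ~1.2 GB OS, per above

cpu_before, cpu_after = 5.3, 4.7   # CPU-side use before/after the XSS pass
gpu_before, gpu_after = 3.4, 2.3   # GPU-side (VRAM) use before/after

used_after = cpu_after + gpu_after                  # 7.0 GB
headroom = XSS_GAME_BUDGET_GB - used_after          # ~1.8 GB kept for overflow
print(f"CPU cut: {cpu_before - cpu_after:.1f} GB vs GPU cut: {gpu_before - gpu_after:.1f} GB")
print(f"Used {used_after:.1f} GB of {XSS_GAME_BUDGET_GB:.1f} GB, ~{headroom:.1f} GB headroom")
```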
This speaks to how utterly terrible XSS's memory buffer is relative to what it was supposed to do.
8GB cards have performed like trash in a lot of games that run fine on PS5/XSX.
I already addressed these unoptimized dogshit games. I've already talked about how PC GPU bandwidth plummets once you have to use host-mapped memory over PCIe. Don't make me repeat myself.
TLDR: Games that use 8.5-9GB of VRAM on consoles often expect to be able to copy-paste that onto PC. They can't. Effective bandwidth plummets once even a few MBs spill over.
(Note: a few of the Sony first-party ports legitimately use over 8GB of VRAM at PS5-equivalent settings.)
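If you want the bandwidth point as arithmetic, here's a minimal sketch, assuming ~448 GB/s VRAM on a typical 8GB card and ~25 GB/s of practical PCIe 4.0 x16 throughput (both assumptions, not measurements), modelling effective bandwidth as a weighted harmonic mean:

```python
# Minimal sketch of why a small VRAM spillover hurts so much. The 448 GB/s and
# 25 GB/s figures are assumptions (typical 8GB card, practical PCIe 4.0 x16),
# and the model ignores latency stalls, which make things even worse.
def effective_bandwidth(vram_gbs: float, pcie_gbs: float, spill: float) -> float:
    """Effective bandwidth when a fraction `spill` of traffic goes to host RAM over PCIe."""
    return 1.0 / ((1.0 - spill) / vram_gbs + spill / pcie_gbs)

for spill in (0.0, 0.02, 0.05, 0.10):
    print(f"{spill:>4.0%} spillover -> ~{effective_bandwidth(448, 25, spill):.0f} GB/s effective")
# 0% -> 448, 2% -> ~335, 5% -> ~243, 10% -> ~166 GB/s
```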
Fortunately, PC is now a bigger market than just PS5, so PS5 has to bend to 8GB. Games that don't respect PC's 8GB line in the sand flop automatically, regardless of how well they do on PS5.
The GPU can get as much as it likes. What conventions or importance? You seem to be creating limitations and constraints that don't exist.
This is so stupid. "The GPU can use as much as it likes" ignores the obvious fact that the CPU will also use memory. The "limitation" I am "creating" on the PS5 GPU is simply that the CPU must use a few GB of that memory.
It goes without saying that it will. You just want to pretend that it doesn't so the console looks better vs PC.
Can you also explain what you mean by "SX is 15% slower at frontend"? What exactly do you mean by frontend here?
Note: as a PC gamer, I am not that familiar with or interested in the RDNA architecture, since Radeon isn't a relevant player on PC (6-8% share). So some of the details below could be wrong. If you want to verify, @ Kepler.
The Geometry engine. The rasterizers. WGP level resources.
Xbox stuffed an additional 16 CUs into the same 4 SEs, then ran them at a ~15% slower clock. Meaning its frontend was ~15% slower while its backend was ~18% stronger in raw TFLOPs.
This should include: vertex assembly, tessellation, geometry shading, culling and rasterization. All should be 15% slower on XSX.
If a game is bound not by compute but by the frontend, XSX straight up loses, as we too often see it do.
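Quick numbers on that, assuming frontend throughput scales with shader engines x clock and compute with CUs x clock (my simplification, not a measured result); the CU/SE counts and clocks are the publicly cited specs:

```python
# Frontend vs compute, assuming frontend ~ SEs x clock and compute ~ CUs x clock.
# The scaling assumption is a simplification; CU/SE/clock numbers are public specs.
ps5 = {"cus": 36, "ses": 4, "clock_ghz": 2.23}    # variable, 2.15-2.23 GHz
xsx = {"cus": 52, "ses": 4, "clock_ghz": 1.825}

frontend = (xsx["ses"] * xsx["clock_ghz"]) / (ps5["ses"] * ps5["clock_ghz"])
compute  = (xsx["cus"] * xsx["clock_ghz"]) / (ps5["cus"] * ps5["clock_ghz"])

print(f"XSX frontend vs PS5: {frontend:.2f}x (~{(1 - frontend) * 100:.0f}% slower)")
print(f"XSX compute  vs PS5: {compute:.2f}x (~{(compute - 1) * 100:.0f}% faster)")
# Same 4 SEs on both, so the frontend gap is purely the clock gap: ~18% against
# PS5's 2.23 GHz peak, ~15% against its 2.15 GHz floor. Compute goes the other
# way by roughly the same margin.
```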
Microsoft just wanted a big marketing number. Then they got blindsided by Sony pushing the clock speed to 2.15-2.23 GHz, which reduced PS5's deficit in TFLOPs and made PS5's frontend advantage much more real. There's nothing inherently wrong with the games where PS5 beats XSX.
Sony just out-designed Microsoft, which they also did with the Gen 8 consoles.