The issue is that the cost per transistor is not going down significantly with newer processes. If you want more CUs on 3nm, you're going to have to pay for them. The 9070 is 357 mm^2 on 4nm, compared with 279 mm^2 for the PS5 Pro on 5nm/4nm. It has Infinity Cache, which could be removed, but it doesn't have a CPU. So if we take the Pro as the base, we're looking at adding more memory, still increasing the size of the chip even to hit 9070-level performance, and then reducing the price to $599? If the PS5 Pro breaks even at, say, $550, then Sony would only have about $50 to play with before they start taking losses.

We know even the 9070 non-XT, with its 220W TDP, sits at about 225% of the base PS5 GPU's performance, and that card launched March 2025 on 4nm.
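Quick back-of-envelope on those numbers (the $550 break-even and $599 price point are the post's assumptions, not Sony figures):

```python
# Rough sketch of the die-size and margin math above.
# All inputs are figures or assumptions from the post, not official data.
die_9070_mm2 = 357      # RX 9070 on 4nm
die_ps5pro_mm2 = 279    # PS5 Pro SoC on 5nm/4nm class node
target_price = 599      # hypothetical PS6 launch price
breakeven_price = 550   # assumed PS5 Pro break-even point

die_delta = die_9070_mm2 - die_ps5pro_mm2
margin = target_price - breakeven_price

print(f"Extra silicon to reach 9070 size: {die_delta} mm^2")  # 78 mm^2
print(f"Headroom before losses: ~${margin}")                  # ~$49
```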
Gotta use logic here: with 2028 tech on a 3nm or 2nm process node, the PS6 will get vastly more performance compared to current desktop GPUs. Think something around a 4090, including AI upscaling and RT performance too.
Of course unpredictable stuff could happen, like WW3 or, dunno, an alien invasion, but if things go smoothly the PS6 is going to be at least 3x stronger than the base PS5, on top of several times better AI upscaling and RT capabilities (we can safely assume at least some basic form of ray tracing will be the default/baseline for next-gen games).
(3nm looks like it would enable a ~25% boost to clock speed, and 2nm looks to be too expensive, hence Kepler implying it's not being used).
Edit: Another thing to point out is that a 256-bit bus on GDDR7 is only expected to give around 1 TB/s of bandwidth. That's what the 4090 has, but the 4090 also has a huge L2 cache, which plays the same role as the Infinity Cache on Radeon cards. If you're stripping out the IC to save die space, you won't get that benefit.
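The bandwidth figure checks out with the usual formula, bus width times per-pin data rate (32 Gbps/pin is an assumed launch-class GDDR7 speed, not a confirmed PS6 spec):

```python
# Memory bandwidth sketch: bus width (bits) x per-pin rate (Gbps) / 8 bits-per-byte.
bus_width_bits = 256   # 256-bit bus, as in the post
gbps_per_pin = 32      # assumed GDDR7 per-pin data rate

bandwidth_gbs = bus_width_bits * gbps_per_pin / 8
print(f"{bandwidth_gbs} GB/s")  # 1024.0 GB/s, i.e. ~1 TB/s, same ballpark as the 4090
```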