Hahaha, no, you don't get it! That's only theoretical TF, not more, not less. You have to fill all of the CUs with work to get there. But that is very hard to reach, and that's the reason why Cerny prefers to take fewer CUs at higher clocks: they are simply easier to fill with meaningful work!
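For reference, the "theoretical TF" figure being argued about here is just straight arithmetic over the shader array. A minimal back-of-envelope sketch, using the publicly stated CU counts and clocks and assuming the standard 64 FP32 lanes per CU at 2 FLOPs per FMA:

```python
# Peak (theoretical) FP32 throughput = CUs * 64 lanes/CU * 2 FLOPs per FMA * clock.
# CU counts and clocks are the publicly stated console figures; the rest is the
# standard RDNA shader-array math, so treat this as back-of-envelope only.
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"PS5: {peak_tflops(36, 2.230):.2f} TF")  # ~10.28 TF (variable clock, up to 2.23 GHz)
print(f"XSX: {peak_tflops(52, 1.825):.2f} TF")  # ~12.15 TF (fixed 1.825 GHz)
```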
This "very hard" claim is a myth; it's one of the points Cerny spun somewhat in the presentation to try shifting the narrative away from exactly that comparison. The truth is that GPUs are pretty easy to saturate with work; that's just how they are by design. They are highly parallelized computational architectures, like DSPs on steroids. In fact, they've been replacing DSPs for years because of their advantages (there are still a few areas where DSPs keep an edge, such as power consumption and cost-for-performance).
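To put rough numbers on "easy to saturate": a single full-screen compute pass already spits out orders of magnitude more wavefronts than either console has CUs. A quick sketch, where the 4K resolution and 64-wide wave size are my own illustrative assumptions:

```python
# Rough occupancy sketch: wavefronts generated by one 4K full-screen compute pass
# versus the CUs available to execute them. Purely illustrative; real scheduling
# depends on waves-in-flight per SIMD, register/LDS pressure, etc.
pixels = 3840 * 2160        # one 4K frame's worth of work items
wave_size = 64              # threads per wavefront
wavefronts = pixels // wave_size

for name, cus in [("PS5 (36 CUs)", 36), ("XSX (52 CUs)", 52)]:
    print(f"{name}: {wavefronts:,} wavefronts ≈ {wavefronts // cus:,} per CU")
```

Either way you're queuing thousands of wavefronts per CU from a single pass, which is exactly why adding CUs scales so readily in benchmarks.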
You can look at most GPU benchmarks today and see that, for architecturally similar cards, the card with more hardware gets better results in almost every single instance, across multiple categories. So the "it's hard to parallelize for GPUs" myth was PR spin by Cerny in an otherwise informative presentation, meant to cut away at one of the perceived weaknesses of their system, simple as that.
Smart developers can reasonably keep a wider net of CUs occupied, especially given the frontend improvements AMD have made with RDNA2. And trust me, those improvements are definitely there; otherwise neither AMD nor Nvidia (let alone Intel with that giant Xe GPU) would be able to build these massive GPUs in the first place.
If, in the pursuit of using the GPU's transistors to run machine learning algorithms that upscale their assets, they squander a good chunk of their FLOPS advantage just to match Sony's SSD advantage (I say "in the pursuit" because applying that to whole classes of games as a generic solution is another "power of the cloud" scenario, IMHO), that seems to say Sony made a very, very smart decision with their SSD and I/O hardware.
Neither you (nor anyone else on this forum) has any idea how much of the GPU is being utilized for those asset-upscaling tasks. We also don't know whether they have made customizations to the GPU specifically for those tasks (it's very likely they have), taking some of that workload off the general compute units.
At the end of the day, MS's approach in that regard and Sony's approach are both valid options, but MS's offers more flexibility depending on the needs of the underlying game design and programming techniques. I.e., if a game isn't relying on a large set of unique, enormous visual assets, those GPU resources can potentially be put to use in other ways/tasks (to varying degrees).
It's baffling to you because you seem to believe that the XSX SSD is close to parity with the PS5's. But developers in general say otherwise. The head of Epic says otherwise, and the dev this thread is discussing is saying that the PS5 feels like what you would have expected from a heavily SSD-focused mid-gen update four years from now.
We cannot really say anything about real-world performance since we cannot test it ourselves, but the people who have are saying that, despite the XSX GPU being extremely impressive, the PS5's is a world ahead.
Being able to save on VRAM usage does apply to both SSDs, but for all we know the PS5 could let them save 10x more.
So every single "developer" has put out a public statement on this? Every single developer is specifically working with next-gen devkits, on next-gen projects, with next-gen API dev tools, in areas of game design where they need to actually utilize that particular hardware?
Or are we talking about some developers who have a preference for PS5 in certain ways that isn't too different from other developers who have a preference for XSX in certain other ways? I guess we should take their opinions as absolute statements of fact now? No, that's not how critical thinking actually works.
The head of Epic hasn't "said otherwise"; he is not legally permitted to say such things publicly. Him being positive about the PS5's SSD I/O is not an automatic indictment of the competitor's SSD I/O, regardless of anything he has said, since impartial minds can see what he's saying for what it is (and can weigh both the historical precedent of Epic demos on PlayStation consoles and any potential backend PR between Epic and Sony with regard to the demo).
Matt is ultimately one person, a single opinion. It's not absolute. I'm sure he feels the way he does earnestly, but again, it's more or less his opinion, even if aspects of it are based in probable truth. It's also funny that you are understating Matt's own comments regarding the XSX's GPU; going by his words, one would infer it's potentially magnitudes ahead in particular areas. I can speculate on what those areas would be, but I don't know if I would state it to the degree Matt's own comments implied.
Where is this "10x" multiplier coming from? What factors are you considering here? What aspects of the tech, and how do they work in relation to each other? What established performance metrics, formulas, etc.? If I'm not allowed to throw around random spec claims/benefits without detailing how those figures are reached and what methods/factors are being used to arrive at them, I'm definitely not letting others slip away with suspect claims of their own x3.
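For what it's worth, here's where the only figures we actually have would land you. This is a rough sketch using the raw and typical-compressed throughput numbers Sony and Microsoft themselves have quoted; the one-second prefetch window is a number I made up purely for illustration:

```python
# Back-of-envelope: with just-in-time streaming, the RAM you can avoid reserving
# scales roughly with SSD throughput * prefetch window. Throughput values are the
# publicly quoted ones (raw / typical compressed); the 1.0 s window is illustrative.
prefetch_window_s = 1.0

ssds = {
    "PS5 raw": 5.5, "PS5 typ. compressed": 8.5,   # Sony quoted 8-9 GB/s typical
    "XSX raw": 2.4, "XSX typ. compressed": 4.8,   # MS quoted ~4.8 GB/s typical
}
for name, gbps in ssds.items():
    print(f"{name}: ~{gbps * prefetch_window_s:.1f} GB streamable per window")

print(f"raw ratio: {5.5 / 2.4:.2f}x, compressed ratio: {8.5 / 4.8:.2f}x")
```

That works out to roughly a 1.8x-2.3x gap on paper. Meaningful, sure, but nowhere near 10x, which is the point: show the math or don't throw out the multiplier.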