Without a doubt. Oof, this guy got vindicated hard AF. I bet we're going to see some monster exclusives in a few years from Sony.
A good example of this is the Xbox Series X hardware. Microsoft used two separate pools of RAM, the same mistake they made with the Xbox One. One pool of RAM has high bandwidth and the other has lower bandwidth. As a result, coding for the console is sometimes problematic, because the number of things we have to fit into the faster pool of RAM is so large that it becomes annoying again, and to add insult to injury, 4K output needs even more bandwidth. So there will be some factors that bottleneck the XSX's GPU.
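(For anyone who wants to sanity-check the bandwidth numbers being discussed here, a rough back-of-the-envelope sketch using the publicly stated bus widths and the common 14 Gbps GDDR6 data rate; the chip layout is simplified and is my own assumption, not something the dev spelled out.)

```python
# Rough GDDR6 bandwidth estimate: bus width (bits) * data rate (Gbps) / 8 -> GB/s.
def gddr6_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float = 14.0) -> float:
    """Peak bandwidth in GB/s for a GDDR6 interface."""
    return bus_width_bits * data_rate_gbps / 8

# Public configurations: XSX uses the full 320-bit bus for its fast 10 GB,
# a 192-bit slice for the remaining 6 GB; PS5 uses a unified 256-bit bus.
print("XSX fast 10 GB pool:", gddr6_bandwidth_gb_s(320))  # 560.0 GB/s
print("XSX slow 6 GB pool: ", gddr6_bandwidth_gb_s(192))  # 336.0 GB/s
print("PS5 unified 16 GB:  ", gddr6_bandwidth_gb_s(256))  # 448.0 GB/s
```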
The main difference is that the PlayStation 5's GPU runs at a much higher frequency. That's why, despite the difference in CU count, the two consoles' performance is almost the same. An interesting analogy from an IGN reporter was that the Xbox Series X GPU is like an 8-cylinder engine, and the PlayStation 5 is like a turbocharged 6-cylinder engine. Raising the clock speed on the PlayStation 5 seems to me to have a number of benefits for memory management, rasterization, and the other elements of the GPU whose performance is tied to frequency rather than CU count. So in some scenarios the PlayStation 5's GPU works faster than the Series X's. That is also what lets the console's GPU run at its announced peak of 10.28 teraflops more of the time. The Series X, on the other hand, probably will not reach its 12 teraflops most of the time, because the rest of the elements are slower; it will only hit 12 teraflops under highly ideal conditions.
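(A quick sanity check on the teraflop figures above: the usual AMD formula is CUs x 64 shaders x 2 FP32 ops per clock x clock speed. A minimal sketch using the publicly quoted CU counts and clocks:)

```python
# FP32 throughput for an RDNA-style GPU: CUs * 64 shaders/CU * 2 ops/clock * clock (GHz).
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"PS5: {tflops(36, 2.23):.2f} TF at its maximum boost clock")  # ~10.28 TF
print(f"XSX: {tflops(52, 1.825):.2f} TF at its fixed clock")         # ~12.15 TF
```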
Aware of this, Sony has used a faster GPU instead of a larger GPU to reduce allocation costs. A more striking example of this was in CPUs. AMD has had high-core-count CPUs for a long time; Intel, on the other hand, has used fewer but faster cores, and Intel CPUs with fewer but faster cores perform better in gaming. Clearly, a 16- or 32-core CPU has a higher teraflop number, but a CPU with faster cores will definitely do a better job, because it's hard for games and programmers to use all the cores all the time, so they prefer fewer but faster cores.
*trash*
Someone obviously never told Nvidia or AMD when they designed their new monster GPUs....
Dude was spitting truth bombs, it appears. Theoretical power doesn't mean jack if you can't fully utilize it.
On PC, Big Navi will have dedicated memory and a much higher clock. What he said makes sense: if your data pipeline has bottlenecks, you won't be able to perform efficiently. Mark Cerny made a big deal about how the PS5 is optimized for data throughput, with increased cache sizes and additional hardware to handle memory access. The rationale is: let's use less silicon, but make sure we can fully utilize it by keeping it fed constantly.
They went with it for more bandwidth. With his logic he should like it. And frankly, memory limitation, with or without a shared pool, is going to be a thing anyway. If you wanted to get rid of memory constraints they should have opted for 32GB of RAM, but they didn't. 10GB is what you get, and maybe 11-12GB on the PS5; both are limited. Bandwidth is king at higher resolutions and Microsoft understood this.
He should honestly praise Xbox for this, but he won't, and you see him jump through hoops of twisted logic to push Sony into a favorable position with borked logic over and over again.
Let's put some real talk forward then.
The lowest RDNA 2 GPU announced for PC is a 60 CU GPU; the Xbox has fewer than that, and the PS5 way, way fewer. You honestly think games are not going to use those CUs (aka cores, as he calls them) when the whole market is shifting towards them and that's what gains them performance? Of course they will.
So his whole story about his 36 compute units at a higher clock is just laughable really, and frankly it will only help the PS5 in current-gen titles that do not address many CUs, but for next gen things are going to be completely different. The same reason why Microsoft pushes bandwidth upwards.
And no, you don't need a high clock speed to get use out of that 12 TFLOP solution, for the simple fact that the 6800 XT with 72 CUs (twice the PS5 CU count) is not running its clocks at 2x the frequency to keep up with it. There is absolutely no need for that.
And then his idea that the 12 TFLOP number is hard to reach: by that logic, so is any TFLOP number on any card.
Look, if he had just stated that in current-gen titles that use fewer CUs the PS5 could see an advantage because of this, sure, but next-gen titles? Lol, nope.
Those have big advantages: higher clocks, dedicated memory, more CPU power available to them, and better thermal design and power delivery.
Someone obviously never told Nvidia or AMD when they designed their new monster GPUs....
Gotta love the internet. What do you do for a living my brother? How can you speak with such authority on the subject, when opposing someone who does work on the thing for a living, at a studio known for being a tech studio, and with the results being what they are.
This ain't politics breh, you can't just drop hot trash like that and classify it as "opinion", "freedom of thought". I mean, you're free to look dumb, but is that what you want?
A bit unrelated: I see a 5MB L2 cache in there. What's the L2 cache size of Big Navi?
What MS did with the xsex is win the TF war on paper, for marketing. But the actual results, as we can see now, favor the PS5 in performance.
AMD's new RDNA2 GPUs have 8-10 CUs per shader array. That's the number of CUs they deemed optimal for each shader array. So AMD's big GPUs have 8 shader arrays with 8-10 CUs each.
The problem with the xsex GPU is that MS crammed 12-14 CUs into each shader array, but it only has 4 shader arrays, similar to the PS5. So the xsex is not actually wider or bigger by AMD's definition. AMD's big and wide GPUs scale the number of shader arrays with the number of CUs (8-10 CUs per shader array).
With the PS5 you have 4 shader arrays with 8-10 CUs each, but the GPU is clocked a lot higher, so its caches are a lot faster. If the xsex had 5 or 6 shader arrays, then it would count as bigger and wider than the PS5, and surely it would have performed faster. But that's not the case.
There may still be an advantage to the xsex configuration, I would say. It's a wash. There may be operations where the xsex is faster and rendering where the PS5 is faster.
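(Taking the shader-array counts quoted above at face value, they are the commonly cited figures but treat them as assumptions, the CUs-per-array arithmetic works out like this:)

```python
# Active CUs spread across shader arrays, using the figures quoted in this thread.
configs = {
    "PS5":     (36, 4),   # 36 active CUs, 4 shader arrays (10 physical CUs per array)
    "XSX":     (52, 4),   # 52 active CUs, 4 shader arrays (14 physical CUs per array)
    "Navi 21": (80, 8),   # full desktop die
}
for name, (cus, arrays) in configs.items():
    print(f"{name}: {cus / arrays:.0f} active CUs per shader array")
```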
Nobody knows, because the cache is different in each chip, but Big Navi is supposed to have 128MB of L2 or L3 cache.
You know at least what the point of the CUs is? Calculating data. Let me give a simple example: you really think the Series X can push an advantage from around 40% more CU power when it can only fill those CUs with around 20% more bandwidth (which is even split)? No, such an advantage will stay at just around 20% more data calculated, and that's it.
The difference then was that the PS4 performed better in all games; it had not only a better GPU but faster RAM.
So now it's frequencies that are the secret sauce, and MS's DirectX is a hindrance to the console. And raw power is meaningless.
This reminds me of the devs in 2013 who were downplaying the PS4's power advantage over the XB1.
There is also no doubt that Crytek is not happy with MS after they almost went bankrupt, and a big reason for that was the poor sales of Ryse. And then there was the feud MS and Crytek had over the IP.
Meltdowns? No. Some devs will have different views and preferences. But the XSX power advantage is known and common knowledge. And I'd bet many devs would disagree with this unknown dev's opinion, to say the least.
Yet another attempt by desperate Sony fanboys to downplay the power-advantage narrative to make themselves feel a bit better. In the end, you guys are just setting yourselves up for disappointment when the DF head-to-heads come in and the XSX wins the majority, with significantly better RT and higher resolutions at higher settings, etc.
And it's even more sad that you have a few hardcore Sony fanboys doing the translations.
The only similarity with the Series X is the high CU count, but it ends there. If you look at how I/O is handled in the GPU, it's closer to what the PS5 does than to the Series X approach. Even in the frequency.
Oh look, another PS5 fanboy crawled out of the dumpster called the PS5 next-gen thread, with no argument, just shitposting, because I dared to criticize his plastic-box prophets.
Maybe next time start posting with some arguments, because the market is clearly moving in the Xbox direction, not the PS5 direction, when it comes to GPU solutions, as I explained, and the same goes for bandwidth. Or maybe the entire market, even AMD's RDNA 2 and Nvidia's 3000 series, got it all wrong.
What a joke.
And the same counts for software being designed for high CU counts, which is where the entire market is moving. But he forgot to mention that. The same goes for higher bandwidth on the VRAM modules, but all of that is suddenly not important, even while everywhere else it's the most important aspect, lol. The CUs on the card and the memory clocks are so important that AMD and Nvidia charge a bucketload more money for just that in their top efforts. And with RT those CUs will age poorly. I won't be shocked if in 2022 a PS5 Pro is already on the menu with a massive leap in CUs.
More CUs are almost pointless without a proportionate increase in the data feeding them. On PC there is the Infinity Cache, which constantly feeds the massive CU counts with a massive amount of data. The Series X can only count on its bandwidth; it doesn't even have a cache customization as robust as the PS5's to help push the CUs harder. So more CUs can be good, but it's not that extreme a differentiation in raw power without a proper flow of input.
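(A toy model of the "feeding the CUs" point: if some fraction of GPU memory traffic is served by a large on-die cache, like the Infinity Cache on desktop RDNA 2 parts, the effective bandwidth the CUs see goes up. The hit rate and cache bandwidth below are illustrative assumptions, not measured numbers.)

```python
# Toy blended-bandwidth model: some traffic hits a big on-die cache, the rest goes to GDDR6.
def effective_bandwidth(dram_gb_s: float, cache_gb_s: float, hit_rate: float) -> float:
    """Very rough blend, assuming hit_rate of memory traffic is served by the cache."""
    return hit_rate * cache_gb_s + (1.0 - hit_rate) * dram_gb_s

# Illustrative numbers only (hit rate and cache bandwidth are assumptions, not measurements):
print(effective_bandwidth(dram_gb_s=512, cache_gb_s=1600, hit_rate=0.5))  # 1056.0 GB/s blended
print(effective_bandwidth(dram_gb_s=512, cache_gb_s=1600, hit_rate=0.0))  # 512.0 GB/s, no cache help
```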
That 128MB figure is for the full Navi 21 die.
We can only go by what exists in the market... The RX 5700 and 5700 XT have 4MB of L2 cache, which is proportionally more cache than the Xbox has.
In my defence, it was just a very approximate sketch of the reason for the CU counts; I personally didn't know about the stuff you posted, but I already suspected the limited CU number was balanced to improve other performance.
You're missing so much I don't know where to start. Look at this patent from Sony where they compact data before the local data store / cache before the pixel shaders. It's a known bottleneck, so compacting data helps, as does having fewer CUs in the shader array.
Old way, made worse by the bigger shader array - Cerny and Naughty Dog patent
Sony patent and different CU arrangement / workflow
The CU count and arrangement in the PS5 will be very different to the XSX, and I bet the PC parts are the same as the PS5 as it's more performant, but we need to wait for the RDNA 2 white paper to see.
There is a reason you don't just have very large shader arrays.
Summary: a different CU arrangement which allows faster processing between pixel vertices and pixel shaders, and allows less costly post-process effects.
Probably 1MB for every 64-bit memory controller:
Navi 21: 256-bit, 4MB
PS5: 256-bit, 4MB
XSX: 320-bit, 5MB
Thanks. It's just matching the L2 cache to the GDDR6 PHY controllers, so since the XSX has more PHY controllers it has 5MB of L2 cache; the PS5 and the 6800 are 256-bit, so 4MB of cache or a multiple of 4.
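(That rule of thumb, roughly 1MB of L2 per 64-bit GDDR6 PHY, is easy to check against the bus widths listed above; a tiny sketch, assuming the 1MB-per-controller ratio holds:)

```python
# Assumed rule of thumb from this thread: ~1 MB of GPU L2 per 64-bit GDDR6 PHY/controller.
def expected_l2_mb(bus_width_bits: int, mb_per_controller: int = 1) -> int:
    return (bus_width_bits // 64) * mb_per_controller

for name, bus in [("Navi 21", 256), ("PS5", 256), ("XSX", 320)]:
    print(f"{name}: {bus}-bit bus -> ~{expected_l2_mb(bus)} MB of L2")
```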
I mean, everyone is free to believe whatever he wants. But it has been explained why the PS5 outperformed the Series X and why it has fewer CUs, but nope, "more CUs are always better, I'm not listening, blablabla," and so on, the same generic stuff with tons of approximations and nothing else to add to the discussion. If some people prefer to live in their bubble and just hear that the Series X is more powerful, good, rejoice in it. But there's no need to run the same narrative loop every single time, if I may say so.
Amen to that. I think deep down, the people who shout vindication in this thread know it. They are not wasting any second to celebrate, because they know very well that with equal development time, level of familiarity with the tools, and developer skill, there is no rational reason why a game should perform better on PS5. Not saying that it can never happen, but the hardware wouldn't be to blame. These consoles have the same hardware supplier, and the architecture differences outside of the I/O system are marginal. The theoretical power difference is quite frankly the best we have to predict future performance comparisons in multiplat games.
And yes, we might as well throw away all we know about multicore programming if more CUs are somehow worse. By that logic, all hardware manufacturers, including Nvidia, would be racing to the bottom, trying to engineer ways to run a single CU at the highest possible frequency like in the early 90s. We all know that computer science runs in the opposite direction.
The Crytek dev didn't say that more CUs are bad, as long as you can keep feeding them with relevant data. His angle was the Series X's and the PS5's GPUs: with their respective bandwidth and clocks, he felt that Sony chose the "right" amount of CUs. Like Cerny said, it's a balancing act. A GPU won't be faster simply because you throw more CUs at it; it also needs to be supported by the rest of the chip, otherwise it's a waste.
RDNA parts do not scale well at higher clock speeds. Overclocking tests on the RX 5700 XT (a close analogue for the PS5's GPU) indicate that a massive 18 percent overclock from stock up to 2.1 GHz resulted in just a 5-7 percent improvement in frame rates.
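(Taking that RX 5700 XT overclocking datapoint at face value, you can back out how little of the extra clock actually showed up as frame rate; a quick sketch using only the numbers quoted above:)

```python
# How much of the quoted 5700 XT overclock actually showed up as performance.
clock_gain = 0.18                 # +18% core clock
fps_gains = (0.05, 0.07)          # +5-7% frame rate

for fps_gain in fps_gains:
    print(f"{fps_gain / clock_gain:.0%} of the extra clock translated into frame rate")
# ~28-39%: on that RDNA1 part, most of the added frequency was wasted.
```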
There is a caveat too. Using two different speeds on the same bus decreases the effectiveness of having more bandwidth, because it then needs even more. So it isn't that straightforward a victory, especially considering the PS5 can count on a robust cache system to support the bandwidth. I'm not entirely sure the Series X got the better deal compared to the PS5. But that's just my opinion.
What is the logic behind current-gen and next-gen games? I do not think that makes any sense. Overall the SX on paper should have an 18% advantage, but we do not know about the bottlenecks and APIs. Also, games using more than 10GB of RAM on the SX have to be managed by the programmer due to its different bus widths, unlike the PS5. The faster SSD on the PS5 should also help in filling the RAM twice as fast as the SX. So the way I see it, the SX has a 25% bandwidth advantage only for games using 10GB or less, so next-gen games using more RAM should benefit the PS5 instead.
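(For reference, here is where the 18% and 25% figures in that post come from, using the public peak specs and ignoring every real-world bottleneck, which is exactly the point being argued:)

```python
# Paper advantages of the XSX over the PS5, from the public peak numbers.
xsx_tf, ps5_tf = 12.15, 10.28        # peak FP32 TFLOPs
xsx_bw_fast, ps5_bw = 560, 448       # GB/s; the XSX figure only applies to its fast 10 GB pool

print(f"Compute advantage:   {xsx_tf / ps5_tf - 1:.0%}")       # ~18%
print(f"Bandwidth advantage: {xsx_bw_fast / ps5_bw - 1:.0%}")  # 25%, only inside the fast 10 GB
```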
Isn't it based on RDNA1? Oh, an article from 3/20/2020. Lol.
No, the PS5 won't offer anywhere near the graphics performance of Xbox Series X: Navi benchmarks prove it
Sony's claim that high clockspeeds offset its meagre shader allocation on the PlayStation 5 doesn't hold water when Navi overclocking results are factored in. Due to non-linear performance/clock scaling, the likely performance deficit between the two consoles is in the 25-30 percent range, which... www.notebookcheck.net
After 1.8 GHz, RDNA1 GPU performance will not scale linearly with clock speed.
That is why I said it was bullshit to use RDNA clocks to make claims about RDNA 2 clocks, lol.
It is. RDNA2 parts are built for higher frequencies.
Off-the-shelf RDNA1.
Yes, the rest is basically what's inside the shader arrays. They both have 4, and the PS5's shader arrays are clocked 22% faster. That would explain both consoles being close in GPU-heavy scenes. But that wouldn't explain why the PS5 has a 10% advantage in CPU-heavy scenes (in AC at least). There is something else behind that.
Split bandwidth speeds on the same bus. Two fast, two furious. Just my guess. The cache system could also help the CPU a lot on the PS5. A combination of both. I don't find any of it inexplicable once we start to consider the differences between the two pieces of hardware.
It's OS overhead. At least I'm assuming it to be that.
Could well be that too. Plus I suspect the Quick Resume feature might affect it as well...
The advantage is that it allows more of the system to be reused for BC and Series S compatibility, similar to how easy it is on PC to play old games with different components in each PC.
The disadvantage is that it means a lot more abstraction and less fine-tuning for a specific system.
Ok, let's say that people who work in tech companies are not biased. Let's assume that they have invested a huge amount of their time becoming experts at developing games on both platforms (you are taking their opinion for granted; you want them to be experts).
MS had an alternative agenda, since they also wanted a chip they could use in their xCloud servers, where compute is more important.
I vaguely suspect MS knew that it's a balancing act even before Cerny said it.
I'd chill and wait for more games to come out before awarding vindication medals to supporters of any of these platforms. The XSX does 4K 120 just fine in Halo MCC and many other games. Hell, the XSS even does 4K 60 in a game. Clearly third parties are currently struggling to pull the most out of the machine. Time will tell if it's temporary.
That's why MS upped the bandwidth... but you guys seem to have missed that point.
Nobody seemed to be picking up on that. Anyone who has worked in IT for a while understands this: it's the same as having a super fast CPU and 4GB of RAM; you're going to be limited by the LCD (the lowest common denominator).
He did not even say the XSX GPU is weaker than the PS5's. He even mentioned that he believes these consoles perform mostly the same. The PS5 is weaker, but because of its smarter architecture it can punch above its weight.
In pure next-gen games, the loading times on the PS5 are around 2 seconds, so let's wait on that. I am just sayin', of course.
I can only find material indicating that the biggest hype of the PS5, the disk speed, is outperformed when it comes to loading times. Not GPU related, I am just sayin'...