IMO, MS locking the clocks at fixed, sustained speeds is a mistake that should be corrected.

And this is what the vast majority of Sony fans have been talking about: parity. Good, we are at least on the same page then.

What we know for sure, and is well documented, is that the PS5 has a real SSD advantage while the XSX has a real GPU advantage.

It's fairly obvious that in the end they'll have similar performance profiles which vary from game to game, owing to that plus the aforementioned API and architectural differences.

My point in my original response to you stands: there are way too many Sony warriors in here making a big deal out of what is essentially API immaturity. Secret sauce aside, when this situation is corrected we'll see the math play out as it predictably does.
 
The pattern of the Sony diehard crew on this site seems to be to troll Twitter for the absolute dumbest Xbox fanboy hot takes and then post them here and act like that's what every single person that bought an Xbox thinks.

Not a single person that's not a moron thought or thinks that the Xbox should be performing 50% better than the PS5, yet the same users on here keep bringing it up. It's ridiculous. At least it's thinning the herd for the posts that I'll see, so I guess it's not all bad.
 
What we know for sure, and is well documented, is that the PS5 has a real SSD advantage while the XSX has a real GPU advantage.

It's fairly obvious that in the end they'll have similar performance profiles which vary from game to game, owing to that plus the aforementioned API and architectural differences.

My point in my original response to you stands: there are way too many Sony warriors in here making a big deal out of what is essentially API immaturity. Secret sauce aside, when this situation is corrected we'll see the math play out as it predictably does.

Well, we will see how it goes. You can bookmark this post and we can check back in a year or two's time.
 
Source? This sounds like "secret sauce" talk.
Pfft. Secret sauce... Seriously? Look past TFLOPS and bandwidth for once. It's not some "secret sauce". This is pretty standard stuff when you do a specs comparison of graphics cards on PC. Both the XSX and PS5 have 64 ROPs (Render Output Units); thanks to the PS5's 2230 MHz clock speed, a 22% uplift over the XSX's 1825 MHz, the pixel fillrate of the PS5 goes up by 22%. Similarly, there are 4 primitive units in both consoles' GPUs that handle rasterization, and this too goes up by 22% on the PS5. The caches also have 22% more bandwidth as a result. This is why it's said time and time again that TFLOPS alone isn't the be-all and end-all metric to gauge a GPU's performance.
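To make those percentages concrete, here's a rough back-of-the-envelope check (my own sketch, using the publicly listed 36 CUs / 2230 MHz for the PS5 and 52 CUs / 1825 MHz for the XSX, plus the 64 ROPs and 4 primitive units mentioned above):

```python
# Peak-rate arithmetic from public launch specs; illustrative only.
ps5 = {"cus": 36, "clock_ghz": 2.230, "rops": 64, "prim_units": 4}
xsx = {"cus": 52, "clock_ghz": 1.825, "rops": 64, "prim_units": 4}

def peaks(gpu):
    tflops = gpu["cus"] * 64 * 2 * gpu["clock_ghz"] / 1000  # 64 ALUs/CU, 2 FLOPs per FMA
    gpix = gpu["rops"] * gpu["clock_ghz"]                   # pixels written per clock
    gtri = gpu["prim_units"] * gpu["clock_ghz"]             # primitives rasterized per clock
    return tflops, gpix, gtri

for name, gpu in (("PS5", ps5), ("XSX", xsx)):
    print(name, ["%.2f" % v for v in peaks(gpu)])
# PS5 ['10.28', '142.72', '8.92']  <- matches the TFLOPS/Gpix/Gtri figures below
# XSX ['12.15', '116.80', '7.30']
# Clock ratio 2230/1825 ≈ 1.22 -> ~22% fillrate/raster edge to PS5;
# compute ratio 12.15/10.28 ≈ 1.18 -> ~18% TFLOPS edge to XSX.
```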

At the Hot Chips conference, MS itself showed not only TFLOPS but also pixel fillrate and rasterization rate metrics as part of their GPU evolution slide:
sUET9U8.jpg

PlayStation 5: 2020, 4K 120 Hz, 8K display
  • 10.28 TFLOPS, 448 GB/sec, 8.92 Gtri/sec, 142.72 Gpix/sec

And we're actually seeing the results of those PS5 advantages in practice in more than one game now:
r0S72TW.png

vlcsnap-2020-11-19-21h55m13s183.png

The AssCreed example is a bit extreme, but you get the point.
 
Pfft. Secret sauce... Seriously? It's not some "secret sauce". This is pretty standard stuff when you do a specs comparison of graphics cards on PC. Both the XSX and PS5 have 64 ROPs (Render Output Units); thanks to the PS5's 2230 MHz clock speed, a 22% uplift over the XSX's 1825 MHz, the pixel fillrate of the PS5 goes up by 22%. Similarly, there are 4 primitive units in both consoles' GPUs that handle rasterization, and this too goes up by 22% on the PS5. The caches also have 22% more bandwidth as a result. This is why it's said time and time again that TFLOPS alone isn't the be-all and end-all metric to gauge a GPU's performance.

At the Hot Chips conference, MS itself showed not only TFLOPS but also pixel fillrate and rasterization rate metrics as part of their GPU evolution slide:
sUET9U8.jpg

PlayStation 5: 2020, 4K 120 Hz, 8K display
  • 10.28 TFLOPS, 448 GB/sec, 8.92 Gtri/sec, 142.72 Gpix/sec

And we're actually seeing the results of those PS5 advantages in practice in more than one game now:
r0S72TW.png

vlcsnap-2020-11-19-21h55m13s183.png

The AssCreed example is a bit extreme, but you get the point.
I don't quite understand your point here. How do you explain why it has better performance in 4K, e.g. AssCreed Vikings? The issues are mostly with these performance modes, which is quite puzzling. I don't know much about the technicalities, which is why I'm asking, but to me it sounds like something that can be patched and is therefore not hardware related.
 
Deflection at its best. Boohoo boohoo!
Why would I be sad? 😂 I've got my SX and a gaming PC, a 4K TV and a 144 Hz setup. I'm happy as fuck with life atm.

Oh noes, the SX drops resolution for a split second on some launch games, oh noes.

Waffle waffle, go be mad at Phil Spencer or something. You act as if you're actually John Wick and he shot your dog 🐶😢
 
I don't quite understand your point here. How do you explain why it has better performance in 4K, e.g. AssCreed Vikings? The issues are mostly with these performance modes, which is quite puzzling. I don't know much about the technicalities, which is why I'm asking, but to me it sounds like something that can be patched and is therefore not hardware related.
The point is, as I said, TFLOPS alone doesn't show the full picture. A GPU is made up of a lot of units, and TF shows just one part of a GPU's performance: the computational capability of the vector ALUs.

The XSX has an 18% advantage in that regard. When it comes to pixel fillrate, rasterization, etc., the PS5 has a 22% advantage. So if a game/scene is pixel-fillrate bound, the PS5 will see higher framerates. If it's memory-bandwidth or ALU bound, for example, the XSX will see higher framerates.
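A crude way to picture that (my own toy model, not a profiler; the per-frame workload numbers are invented purely to show how the bottleneck flips):

```python
# Toy bottleneck model: frame time = the slowest of ALU, fillrate and DRAM work.
PS5 = {"gflops": 10280, "gpix": 142.72, "bw_gbs": 448}
XSX = {"gflops": 12150, "gpix": 116.80, "bw_gbs": 560}

def frame_ms(gpu, gflop_work, gpix_work, gb_traffic):
    t_alu = gflop_work / gpu["gflops"]    # seconds of shader ALU work
    t_fill = gpix_work / gpu["gpix"]      # seconds of ROP pixel writes
    t_mem = gb_traffic / gpu["bw_gbs"]    # seconds of DRAM traffic
    return 1000 * max(t_alu, t_fill, t_mem)

# Fill-rate-bound frame (heavy overdraw): the PS5 comes out ahead.
print(frame_ms(PS5, 100, 2.0, 5), frame_ms(XSX, 100, 2.0, 5))  # ~14.0 ms vs ~17.1 ms
# ALU-bound frame (heavy compute, modest fill): the XSX comes out ahead.
print(frame_ms(PS5, 150, 1.0, 5), frame_ms(XSX, 150, 1.0, 5))  # ~14.6 ms vs ~12.3 ms
```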
 
The point is, as I said, TFLOPS alone doesn't show the full picture. A GPU is made up of a lot of units, and TF shows just one part of a GPU's performance: the computational capability of the vector ALUs.

The XSX has an 18% advantage in that regard. When it comes to pixel fillrate, rasterization, etc., the PS5 has a 22% advantage. So if a game/scene is pixel-fillrate bound, the PS5 will see higher framerates. If it's memory-bandwidth or ALU bound, for example, the XSX will see higher framerates.
So what you are saying is that in the future, when games are made with the new consoles in mind and not the PS4, the XSX will have better overall performance?
 
So what you are saying is that in the future, when games are made with the new consoles in mind and not the PS4, the XSX will have better overall performance?
The PS5 will outperform the XSX in some games, and the XSX will outperform the PS5 in others, at identical graphics settings and resolution.
 
The point is, as I said, TFLOPS alone doesn't show the full picture. A GPU is made up of a lot of units, and TF shows just one part of a GPU's performance: the computational capability of the vector ALUs.

The XSX has an 18% advantage in that regard. When it comes to pixel fillrate, rasterization, etc., the PS5 has a 22% advantage. So if a game/scene is pixel-fillrate bound, the PS5 will see higher framerates. If it's memory-bandwidth or ALU bound, for example, the XSX will see higher framerates.

Pixel fillrate doesn't matter, as it is limited by memory bandwidth in either system.
 
The PS5 will outperform the XSX in some games, and the XSX will outperform the PS5 in others, at identical graphics settings and resolution.
Historically, I think most games were limited by memory bandwidth or ALU. It is a bit early to jump to conclusions, but in my view the XSX will perform better. We need to wait and see how the new/updated engines perform.
 
What we know for sure, and is well documented, is that the PS5 has a real SSD advantage while the XSX has a real GPU advantage.

It's fairly obvious that in the end they'll have similar performance profiles which vary from game to game, owing to that plus the aforementioned API and architectural differences.

My point in my original response to you stands: there are way too many Sony warriors in here making a big deal out of what is essentially API immaturity. Secret sauce aside, when this situation is corrected we'll see the math play out as it predictably does.
You will see a Series X advantage. A slight advantage. Most Sony fans know and expect this. Not the nonsense that is being peddled. Remember the Valhalla marketing? That wasn't fangirls, that was MS: Series X, 4K 60fps. Yet the same fangirls were making noise that the PS5 would be dynamic while the Series X would be native. We saw how that turned out. No amount of Series X API maturity would have made it native 4K.
The PS5 has its advantages too, especially in clock speed and I/O, and its API has far less overhead, being specifically tailored for the PS5.
The real differences are Sony's exclusives and the controller; the DualSense is a game changer. MS exclusives won't arrive till late 2021, by which time Sony will be well ahead in games, tools, etc.
 
Pfft. Secret sauce... Seriously? Look past TFLOPS and bandwidth for once. It's not some "secret sauce". This is pretty standard stuff when you do a specs comparison of graphics cards on PC. Both the XSX and PS5 have 64 ROPs (Render Output Units); thanks to the PS5's 2230 MHz clock speed, a 22% uplift over the XSX's 1825 MHz, the pixel fillrate of the PS5 goes up by 22%. Similarly, there are 4 primitive units in both consoles' GPUs that handle rasterization, and this too goes up by 22% on the PS5. The caches also have 22% more bandwidth as a result. This is why it's said time and time again that TFLOPS alone isn't the be-all and end-all metric to gauge a GPU's performance.

At the Hot Chips conference, MS itself showed not only TFLOPS but also pixel fillrate and rasterization rate metrics as part of their GPU evolution slide:
sUET9U8.jpg

PlayStation 5: 2020, 4K 120 Hz, 8K display
  • 10.28 TFLOPS, 448 GB/sec, 8.92 Gtri/sec, 142.72 Gpix/sec

And we're actually seeing the results of those PS5 advantages in practice in more than one game now:


The AssCreed example is a bit extreme, but you get the point.
Some RDNA GPU basics

1. GTri/sec refers to geometry; refer to the "Prim Unit".

mVzoVOJ.png


The "Rasterizer" is another hardware unit

2. Any Raster Operation Units (ROPS, RBE) debate is bound by memory bandwidth (a rough sketch of the math is at the end of this post).

NLRxDru.jpg


Do the math on ROPS fill rate vs memory bandwidth.


3. What happened to your Rasterizer debate?
RXn2BUp.png



4. BiG NAVI for RX 6900 XT and RX 6800 XT Block Diagram


KZrwtco.png



NAVI 21's Prim Unit count is the same as NAVI 10's, yet it delivered nearly 2X the performance of the RX 5700 XT. Your argument is flawed.

NAVI 21 has up to 128 ROPS (read/write units + graphics fixed functions) backed by a super-fast 128 MB L3 cache.

PS: Mesh Shaders move additional geometry workloads into shaders, e.g. Mesh Shaders can cull geometry.

The RX 6900 XT and RX 6800 XT have 128 ROPS while the RX 6800 has 96 ROPS; the RX 6800 has an entire Shader Engine disabled.
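As for point 2, here is the kind of math I mean, as a rough ceiling check only (assuming an uncompressed RGBA8 target and ignoring caches and DCC, which change the picture considerably in practice):

```python
# Bytes/s needed to sustain peak ROP fillrate vs what the DRAM can actually supply.
def fill_demand_gbs(gpix_per_s, bytes_per_pixel):
    return gpix_per_s * bytes_per_pixel

print(fill_demand_gbs(142.72, 4), "GB/s needed vs 448 GB/s available on PS5")  # ~571 GB/s
print(fill_demand_gbs(116.80, 4), "GB/s needed vs 560 GB/s available on XSX")  # ~467 GB/s
# Write-only RGBA8 (4 B/pixel) already exceeds PS5's DRAM bandwidth and is close to
# XSX's; with blending (read + write, ~8 B/pixel) both are far over. That is the
# sense in which a pure ROP/fillrate comparison is bound by memory bandwidth.
```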
 
Look guys, my topic is not a knee-jerk reaction to early multiplatform results; those are the first 100 m of a 42 km marathon.
I have faith the SX will overpower PS5 hardware in due time, within a year or less.
It's just my opinion that MS is leaving 10-15% on the table.
But I guess if you are already winning on specs, the more compact design gets chosen. Nothing wrong with that, just bringing up the opinion, and perhaps Phil sees this and goes back to his engineers. 🤷‍♀️
The XSX APU's fixed clock speed strategy doesn't allow competent programmers to better maximize hardware usage, e.g. not all games will run a 256-bit AVX2 workload on seven CPU cores. Four CPU cores with eight threads and 128-bit AVX can be enough for certain games, which is less than half of the CPU's total power budget.

At the moment the XSX has fixed clock frequencies for both CPU and GPU, which locks in the power allocation budget between CPU and GPU.
 
But I realize that is a lot to expect of people who practically live for console warz. 🤦‍♂️😅
Your whole "MS will fix it and reverse the situation with a new DirectX version/game patches/etc." is pure fanboy drivel; this is what brought us to where we are, narratively speaking (MS/Phil used that same excuse before).

Things are what they are. Wait until MS delivers something before assuming they will... And more importantly, don't be condescending to those who can see reality for what it is (the consoles' performance is practically equal; some things perform slightly better on one than on the other and vice versa).

The only reason Sony fans (or anyone who paid attention) make fun of the current situation is that we spent months being told, by people who stood on their high horses like you, that the XSX would destroy the PS5 in resolution and framerate in all games all the time, that there would be no raytracing, etc. I (and others) told you many times about the potential benefits of the PS5's architecture, and that even if they performed in line with their respective GPUs, 15-20% would not be a big deal (a small resolution drop in a dynamic-resolution game would do the trick). Now they come out as more or less equals and you come here and talk as if you were the knowledgeable and reasonable one... Look at yourself go.
 
If MS had followed the fast-and-narrow philosophy, it would still have more CUs than the PS5. TSMC charges MS/Sony per mm² of die, right? Let's say on a ~310 mm² die you can have either 48 CUs at a slower frequency or 36 CUs at a faster frequency (with cache in place of the extra CUs); there are multiple sweet spots (rough sketch below).
Quite simply, Microsoft had to consider whether or not the die space trade-off for L3 cache was worth it, and clearly opted to save the die space for 'other things' such as more CUs.
Link : http://www.redgamingtech.com/xbox-series-x-hot-chips-analysis-part-1-gpu-cpu-overview/
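To illustrate the "multiple sweet spots" point with some toy numbers (the 48 CU / 2.0 GHz configuration is purely hypothetical; only the 52 CU and 36 CU figures are the real consoles):

```python
# Compute throughput for a few wide-vs-narrow configurations (RDNA: 64 ALUs/CU, 2 FLOPs/FMA).
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(tflops(52, 1.825))  # XSX as shipped             -> ~12.15 TF
print(tflops(36, 2.230))  # PS5 as shipped             -> ~10.28 TF
print(tflops(48, 2.000))  # hypothetical middle ground -> ~12.29 TF
# Similar TFLOPS can be reached from quite different CU-count/clock mixes; the real
# trade-offs are die area, power, yield and what else (e.g. cache) the area buys.
```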
 
Some RDNA GPU basics

1. GTri/sec refers to geometry; refer to the "Prim Unit".

mVzoVOJ.png


The "Rasterizer" is another hardware unit

2. Any Raster Operation Units (ROPS, RBE) debate is bound by memory bandwidth.

NLRxDru.jpg


Do the math on ROPS fill rate vs memory bandwidth.


3. What happened to your Rasterizer debate?
RXn2BUp.png
"What happened to your Rasterizer debate?"

No one's denying the XSX's advantages. The PS5 has its advantages too, due to its higher clock speed, and that's evident in more than one game's performance. Like I've said:
The PS5 will outperform the XSX in some games, and the XSX will outperform the PS5 in others, at identical graphics settings and resolution.

What happened to your memory bandwidth debate?
XYJhEiI.png


I can do this too.
 
Rasterization/pixel fillrate is limited by memory bandwidth.

It is actually more in the XSX's favor due to its higher memory bandwidth.
From what we've heard from different developers, the bandwidth on the PS5 seems perfectly balanced against the hardware specs and there are no evident bottlenecks at the frequencies used, so the fact that the Series X has higher memory bandwidth doesn't mean that much. Furthermore, it remains to be seen whether the higher memory bandwidth on the XSX is that great an advantage, considering it is split into two pools (which is a terrible idea imo). From what we have seen in multiplats so far, that choice seems more like a bottleneck to work around than a real advantage. But I disagree with the OP: the higher CU count should compensate for the "lower" frequency, so no problems there, though it will be more complicated to use.
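For context on that split (these figures are from MS's published specs): 10 GB of the XSX's GDDR6 is addressable at 560 GB/s and the other 6 GB at 336 GB/s, because the extra capacity sits on only six of the ten memory chips. Here is a deliberately oversimplified sketch of what spilling GPU traffic into the slow region can do; the real memory controller interleaves and overlaps requests, so treat this as a pessimistic toy, not a measurement:

```python
# Toy model: bytes served from the two XSX memory regions strictly back-to-back.
FAST_GBS, SLOW_GBS = 560.0, 336.0

def effective_bandwidth(slow_fraction):
    """Aggregate GB/s if `slow_fraction` of the traffic targets the 6 GB slow region."""
    time_per_gb = (1 - slow_fraction) / FAST_GBS + slow_fraction / SLOW_GBS
    return 1 / time_per_gb

for f in (0.0, 0.10, 0.25):
    print(f"{f:.0%} slow-pool traffic -> ~{effective_bandwidth(f):.0f} GB/s")
# 0% -> 560 GB/s, 10% -> ~525 GB/s, 25% -> ~480 GB/s (vs a flat 448 GB/s on PS5)
```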
 
It remains to be seen whether the higher memory bandwidth on the XSX is that great an advantage, considering it is split into two pools (which is a terrible idea imo). From what we have seen in multiplats so far, that choice seems more like a bottleneck to work around than a real advantage. But I disagree with the OP: the higher CU count should compensate for the "lower" frequency, so no problems there, though it will be more complicated to use.

It certainly requires more coding, and devs should allocate CPU/audio tasks to the slower pool.

I believe that the poor performance of the XSX is due to the difficulty of utilizing all the CUs.

Hell, even Cerny hinted at that in his presentation.
 
This is some armchair hardware design, OP.

1. Hardware design is done by professionals, not by forum overclockers. They run simulations of temperature variation, stability, etc. on test models, as well as Monte Carlo simulations, once they get the technology documents, chip designs and even early factory prototypes. These parameters determine the highest sustainable clocks over the machine's lifetime as well as yield, and MS makes a decision based on that plus, obviously, the power limits.

2. Xbox has a unified SDK, which makes backwards compat, future compat and PC porting easier, but clearly low-level access to some of the hardware's power is limited. PlayStation has a separate SDK for each console (with, obviously, a lot of common elements), which means it's sometimes necessary to patch games for backwards compat to get the most performance out of the system, but it's easier to tap into the raw power, which is why you see a slight advantage for the PS5, maybe for a very long time.
 
It certainly requires more coding, and devs should allocate CPU/audio tasks to the slower pool.

I believe that the poor performance of the XSX is due to the difficulty of utilizing all the CUs.

Hell, even Cerny hinted at that in his presentation.
It's not just CU utilisation. The split bandwidth has a catch, and developers need to worry about it too, also to balance the CUs' workload. It's a potential bottleneck.
 
"What happened to your Rasterizer debate?"

No one's denying the XSX's advantages. The PS5 has its advantages too, due to its higher clock speed, and that's evident in more than one game's performance. Like I've said:


What happened to your memory bandwidth debate?
XYJhEiI.png


I can do this too.
Software glitch.
 
Software glitch.
Enlighten us. I doubt a software glitch could impact performance in three different games, but I'm curious to hear more. Rasterisation should have the advantage on the PS5 anyway, thanks to the higher frequency.
 
Do you know if rasterization/pixel fillrate is affected by on-die cache bandwidth?
For the RX 6800, 6800 XT and 6900 XT, GPU functions are supported by a very fast 128 MB L3 cache (with delta color compression, NAVI's DCC-everywhere design).

NM8GZM8.png
 
There is a custom cache system on the PS5 too.
The XSX GPU has a 5 MB L2 cache, which is larger than NAVI 10's 4 MB L2 cache. So what?

Note that NAVI 21's 128 MB L3 cache is four times the XBO's 32 MB ESRAM, which supported 1600x900 framebuffers without tiling. When delta color compression (DCC) is used, a 128 MB L3 cache is ideal for 4K framebuffers.

The PS5 does NOT have BiG NAVI's ~100 mm² 128 MB L3 cache when the entire PS5 APU is only ~309 mm² in area. LOL.
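For scale, a quick bit of arithmetic (mine, not from any slide) on why 128 MB is interesting at 4K while a few MB of L2 is not:

```python
# Size of a single uncompressed 4K render target.
width, height, bytes_per_pixel = 3840, 2160, 4  # 32-bit RGBA8 colour target
target_mb = width * height * bytes_per_pixel / 1e6
print(f"{target_mb:.1f} MB per 4K target")             # ~33.2 MB
print(f"{128 / target_mb:.1f} targets fit in 128 MB")  # ~3.9 before any DCC savings
```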
 
Let's relax a bit, guys, the console just came out and MS's tools are not up to date yet. It's way too early to start changing clock speeds.

Let them improve the tools and let devs get used to RDNA2. It's not gonna happen overnight.
Why even bother explaining anything to some of these guys? These are the same people that believe this secret insider that claims the ps5 has RDNA 3 features. Logic just goes out the window.
 
The XSX GPU has a 5 MB L2 cache, which is larger than NAVI 10's 4 MB L2 cache. So what?

Note that NAVI 21's 128 MB L3 cache is four times the XBO's 32 MB ESRAM, which supported 1600x900 framebuffers without tiling. When delta color compression (DCC) is used, a 128 MB L3 cache is ideal for 4K framebuffers.

The PS5 does NOT have BiG NAVI's ~100 mm² 128 MB L3 cache when the entire PS5 APU is only ~309 mm² in area. LOL.
What the hell does the Series X's size have to do with the PS5's custom cache system? So because it doesn't have Big Navi's huge cache, the custom cache system on the PS5 is nonexistent to you? Go inform yourself at least; there are tons of resources on the net. Lol.
 
Why even bother explaining anything to some of these guys? These are the same people that believe this secret insider that claims the ps5 has RDNA 3 features. Logic just goes out the window.
Don't talk bullshit. The custom cache system on the PS5 is well known for the enormous work around it, and we're trying to argue that the cache system is better on the Series X because of the size of the GPU, and that everything about the PS5 is just secret-sauce bullshit? Wut. There isn't even ESRAM on the Series X. If you are really interested you can ask people even more informed than me on this forum.
 
The PS5 is custom, and was made with different customizations, some of which are in RDNA 3.

Though the chip is an RDNA 2-based architecture, the way it's implemented, with "other" Sony customizations, differs in its feature set compared to the Xbox Series X RDNA 2 chip.

Where is the proof of these RDNA 3 features in the PS5? I haven't seen any confirmation of this anywhere except from anonymous insiders.
 
What the hell does the Series X's size have to do with the PS5's custom cache system? So because it doesn't have Big Navi's huge cache, the custom cache system on the PS5 is nonexistent to you? Go inform yourself at least; there are tons of resources on the net. Lol.
1. BiG NAVI's carefully configured 128 MB L3 cache is PC exclusive.

2. NAVI 10 is already 252 mm² in chip area.
 
Where is the proof of these RDNA 3 features in the PS5? I haven't seen any confirmation of this anywhere except from anonymous insiders.

Try RedGamingTech, who is a tech YouTuber and broke the Infinity Cache rumor, among other things.
Moore's Law Is Dead also comments on customizations from RDNA 3 being in the PS5's custom silicon.
 
1. BiG NAVI's carefully configured 128 MB L3 cache is PC exclusive.

2. NAVI 10 is already 252 mm² in chip area.
And? What does that have to do with the PS5? My suggestion: if you really want some enlightenment about the PS5's cache system, search for Matt, chief engineer on the PS5, on Twitter and ask him.
 
Don't talk bullshit. The custom cache system on the PS5 is well known for the enormous work around it, and we're trying to argue that the cache system is better on the Series X because of the size of the GPU, and that everything about the PS5 is just secret-sauce bullshit? Wut. There isn't even ESRAM on the Series X. If you are really interested you can ask people even more informed than me on this forum.
The PS4's L2 cache customization was largely a non-issue when the PC's R7 265 delivered similar results.
 