
HW| Does Xbox Series X really have 12 teraflops?

DenchDeckard

Moderated wildly
Yeah Coalition Dev, I said earlier in the thread I was expecting this, people laughed, but here we are.

You were right about VRR and now this.

That's why I would ignore 90 percent of this forum's jokes against you, as time has proven you right.

Good on ya man.
 

Elog

Member
Yeah Coalition Dev, I said earlier in the thread I was expecting this, people laughed, but here we are.
I do not know if someone was laughing or not. The problem comes down to claims that one system can do this but not the other.

VRS 2 applies the same principle as MESH shaders. You sort objects (primitives/vertices/pixels) into groups and then decide how much pipeline work you will do on each group ('variable' shading rate).

So after you have grouped everything your first decision is: Should any work be done downstream in the pipeline at all (yes/no)? If the answer is yes, you can then modulate how much hardware you will dedicate to it.

Both systems can do this. MS went with the AMD solutions (MESH and VRS). Sony did their own solutions. Once again - is there a difference in performance? No clue.
 
Both systems can do this. MS went with the AMD solutions (MESH and VRS). Sony did their own solutions. Once again - is there a difference in performance? No clue.

I think that's the issue most are having here. Is there exhaustive proof that one is better than the other?

Not really, or at least I haven't seen a study on it yet. In the end, both could be the same and we wouldn't really know, because the labels are different.

In the end what matters are the games, and neither system is destroying the other when it comes to performance. The only one that could be considered weak is the XSS, but that system is designed for a much lower price point, so it isn't fair to compare it to the PS5 and XSX.
 

Riky

$MSFT
I do not know if someone was laughing or not. The problem comes down to claims that one system can do this but not the other.

VRS 2 applies the same principle as MESH shaders. You sort objects (primitives/vertices/pixels) into groups and then decide how much pipeline work you will do on each group ('variable' shading rate).

So after you have grouped everything your first decision is: Should any work be done downstream in the pipeline at all (yes/no)? If the answer is yes, you can then modulate how much hardware you will dedicate to it.

Both systems can do this. MS went with the AMD solutions (MESH and VRS). Sony did their own solutions. Once again - is there a difference in performance? No clue.
Tier 2 VRS is hardware-assisted lowering of the shading rate in areas the player doesn't see or is unlikely to look at. It isn't the same as primitive or mesh shaders. Other consoles can do this through software, which has an overhead and needs a deblocking pass, and the results, as in Dead Space, can be bad.
DF went over how Series consoles are the only ones that are hardware compliant, as also stated in the AMD/Xbox statement at the RDNA2 reveal.
There is also the evidence of the performance benefits from the Coalition paper, which had a thread here, and from id's work on Doom Eternal.
All this has been covered before, but it's good to see it in UE5, so not only first-party software will use it but third parties too.
 

Elog

Member
Tier 2 VRS is hardware-assisted lowering of the shading rate in areas the player doesn't see or is unlikely to look at. It isn't the same as primitive or mesh shaders. Other consoles can do this through software, which has an overhead and needs a deblocking pass, and the results, as in Dead Space, can be bad.
DF went over how Series consoles are the only ones that are hardware compliant, as also stated in the AMD/Xbox statement at the RDNA2 reveal.
There is also the evidence of the performance benefits from the Coalition paper, which had a thread here, and from id's work on Doom Eternal.
All this has been covered before, but it's good to see it in UE5, so not only first-party software will use it but third parties too.
You are mixing up a lot of concepts in a way that does not make sense.

Simplified CliffsNotes:

MESH - allows you to group primitives/vertices and then not process that group at all downstream in the pipeline (Yes/No to do work)

VRS 1 - allows you to modulate the amount of pipeline work you will apply per primitive

VRS 2 - allows you to group primitives/vertices and then modulate the amount of work you will apply on all those primitives/vertices in the group

Both systems can do this functionally with hardware support - just different solutions.

All these solutions exist because complex geometry would otherwise bog the GPU down to a standstill under the sheer number of work streams (I/O limitations kill performance).
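The CliffsNotes above boil down to a yes/no cull plus a work-rate knob. A toy sketch of that idea (pure illustration in Python; the names, numbers and data layout are invented, and real GPUs do all of this in fixed-function hardware, not code like this):

```python
# Toy model of the mechanisms described above: a MESH-style yes/no
# cull per group, plus a VRS-style shading rate that modulates how
# much pixel work each surviving group gets. Invented example only.

def shade_cost(groups):
    """Count pixel-shader invocations for a list of primitive groups.

    Each group is a dict: 'pixels' covered, a 'visible' flag, and a
    'rate' in {1, 2, 4} meaning shade every 1st/2nd/4th pixel.
    """
    cost = 0
    for g in groups:
        if not g["visible"]:              # MESH-style cull: no work at all
            continue
        cost += g["pixels"] // g["rate"]  # VRS-style: modulate work per group
    return cost

scene = [
    {"pixels": 1000, "visible": True,  "rate": 1},  # full-rate foreground
    {"pixels": 1000, "visible": True,  "rate": 4},  # coarsely shaded background
    {"pixels": 1000, "visible": False, "rate": 1},  # culled group: zero work
]
print(shade_cost(scene))  # 1250, versus 3000 at full rate with no culling
```

The point of the sketch is just that both knobs cut downstream work; where in the pipeline the decision is made differs between solutions.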
 

Riky

$MSFT
You are mixing up a lot of concepts in a way that does not make sense.

Simplified CliffsNotes:

MESH - allows you to group primitives/vertices and then not process that group at all downstream in the pipeline (Yes/No to do work)

VRS 1 - allows you to modulate the amount of pipeline work you will apply per primitive

VRS 2 - allows you to group primitives/vertices and then modulate the amount of work you will apply on all those primitives/vertices in the group

Both systems can do this functionally with hardware support - just different solutions.

All these solutions exist because complex geometry would otherwise bog the GPU down to a standstill under the sheer number of work streams (I/O limitations kill performance).

Not mixing up anything.

As per Digital Foundry,

"It's also interesting to note that Xbox Series consoles use the hardware-based tier two VRS feature of the RDNA2 hardware, which is not present on PlayStation 5. VRS stands for variable rate shading, adjusting the precision of pixel shading based on factors such as contrast and motion. Pre-launch there was plenty of discussion about whether PS5 had the feature or not and the truth is, it doesn't have any hardware-based VRS support at all."
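DF's description ("adjusting the precision of pixel shading based on factors such as contrast and motion") can be illustrated with a toy sketch of how a tier-2-style shading-rate image might be built per screen tile. The contrast metric and thresholds here are invented for illustration; they are not the actual driver or engine heuristic:

```python
# Toy sketch: map a per-tile contrast measure in [0, 1] to a shading
# rate, the way a tier-2 VRS shading-rate image assigns a coarse or
# fine rate per screen tile. Thresholds are invented for the example.

def shading_rate_image(contrast_tiles, lo=0.1, hi=0.4):
    """Return a per-tile rate: coarse for flat tiles, full for detailed."""
    rates = []
    for c in contrast_tiles:
        if c < lo:
            rates.append("4x4")   # flat area: shade 1 of every 16 pixels
        elif c < hi:
            rates.append("2x2")   # moderate detail: 1 of every 4 pixels
        else:
            rates.append("1x1")   # high contrast: full rate
    return rates

print(shading_rate_image([0.05, 0.2, 0.8]))  # ['4x4', '2x2', '1x1']
```

On hardware with tier 2 support this lookup is consumed by the rasteriser directly; a software emulation has to do the analysis and a cleanup pass itself, which is the overhead being debated here.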
 

DenchDeckard

Moderated wildly
Anything that helps the devs is great, and it's good it's been added to Unreal Engine thanks to The Coalition. MS use Unreal a bit and this should hopefully help.

Can't wait for the next Gears.
 

Riky

$MSFT
Anything that helps the devs is great, and it's good it's been added to Unreal Engine thanks to The Coalition. MS use Unreal a bit and this should hopefully help.

Can't wait for the next Gears.
Exactly - every time you post a developer or platform-holder update on these features, you get people bringing up other consoles. What matters to me is just making Xbox games run better than they have.
What SFS could do for Series S could be a game changer.
 

Elog

Member
Not mixing up anything.

As per Digital Foundry,

"It's also interesting to note that Xbox Series consoles use the hardware-based tier two VRS feature of the RDNA2 hardware, which is not present on PlayStation 5. VRS stands for variable rate shading, adjusting the precision of pixel shading based on factors such as contrast and motion. Pre-launch there was plenty of discussion about whether PS5 had the feature or not and the truth is, it doesn't have any hardware-based VRS support at all."
As I have stated, PS5 does not have VRS but its own functionality that achieves the same thing. With VRS 2 you can, e.g., group 16 primitives into one group, run the shader on one primitive and broadcast that to all 16 - saving a lot of work at the cost of detail.

On the PS5, you can, e.g., group 16 primitives and create one primitive out of the 16 (in the geometry engine). This one primitive is then run through the pipeline. Again, this saves a lot of work at the cost of detail.

The functionality is the same but the method differs slightly. Which one is better? No clue but the idea that the functionality does not exist in a very similar fashion is pure FUD.
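The arithmetic in the 16-primitive example above, as a one-liner (16 is just the illustrative group size from the post, not a fixed hardware number):

```python
# Shade one representative of a group and broadcast the result:
# the saved fraction of shading work is 1 - 1/group_size.

def broadcast_savings(group_size):
    """Fraction of shading work saved by shading 1 of `group_size`."""
    return 1 - 1 / group_size

print(f"{broadcast_savings(16):.1%}")  # 93.8% (the post rounds to 94%)
```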
 

Fafalada

Fafracer forever
"Pre-launch there was plenty of discussion about whether PS5 had the feature or not and the truth is, it doesn't have any hardware-based VRS support at all."
The conflation people referred to is between two things.
VRS as a DirectX feature, which is defined as a specific implementation of the process (ie. use the MSAA hardware, connect how the sampler backend schedules compute to the spec, etc.). There are no two ways around 'that' part of the specification, so yes, DF is presumably 100% right on this (I don't have the info, so I can only take their word for it).

But when people talk about 'Variable Rate Shading' as a concept - ie. how does one go about shading the scene with dynamically varying sample rate according to some image-analysis based inputs - there are different ways to go about it. And the whole 'but but software vs. hardware' is nonsensical to the discussion anyway. The only things that matter are:
1) Does it yield the expected performance benefits?
2) Is it 'practical' to use (ie. what does it cost me to integrate it with my game pipeline).

I'd say practicality is the most open to debate - the DirectX way of doing VRS was designed to be pretty simple, and yet it still has very practical issues, as evidenced by most implementations to date just not being very good - to put it mildly.
Performance benefits are simpler to talk about - but they are also scene-topology (and game) specific. Anyone who tells you 'VRS improves performance by X%' as a statement of fact is intentionally bending the truth, uninformed, or simply wrong.

My personal view on VRS is that it's actually by far most practical as a quality-enhancement feature rather than as a performance feature in standard games. Ie. use it to selectively upsample the parts of the scene you care most about, and let DRS & resolution upscalers do the performance lifting.
In VR games, VRS (the DirectX way) makes for perhaps the easiest/simplest way to do FOV-related optimisations (in terms of ease of integration with the rest of the rendering pipeline), especially on PC. There's only one alternative I've seen to date that could potentially match it for ease of use & give similar or better performance, and Sony holds a patent on it, so I don't know if we'll ever see it in a device.
 

FireFly

Member
As I have stated, PS5 does not have VRS but its own functionality that achieves the same thing. With VRS 2 you can, e.g., group 16 primitives into one group, run the shader on one primitive and broadcast that to all 16 - saving a lot of work at the cost of detail.

On the PS5, you can, e.g., group 16 primitives and create one primitive out of the 16 (in the geometry engine). This one primitive is then run through the pipeline. Again, this saves a lot of work at the cost of detail.

The functionality is the same but the method differs slightly. Which one is better? No clue but the idea that the functionality does not exist in a very similar fashion is pure FUD.
The geometry engine works on triangles while VRS works on pixels, so I don't see how they achieve the same functionality. The shading stage is after the geometry stage in the pipeline.
 

Elog

Member
The geometry engine works on triangles while VRS works on pixels, so I don't see how they achieve the same functionality. The shading stage is after the geometry stage in the pipeline.
The point is that for a given group of vertices/pixels/primitives, you reduce the amount of hardware used for the shader step by 15/16, or about 94% (in my example above), with loss of detail as a result. The methodology is different but the functionality is the same.

For the developer, the key step is really what criteria you apply, and how you apply them, for the engine to decide that a given object in the final image should have less shader work conducted on it. Both machines can do this - that is my point - but with different approaches. The earlier the call is made in the pipeline, the more work is saved.
 

Fafalada

Fafracer forever
The geometry engine works on triangles while VRS works on pixels, so I don't see how they achieve the same functionality. The shading stage is after the geometry stage in the pipeline.
The 'after' part is what makes it work. GE includes primitive assembly, the outputs of which can be associated with a given set of viewport parameters (that includes things like number of samples, pixels and more). The granularity of said association (a simple way to think of it is an index that groups of primitives point to) isn't prescribed (so it could be blocks larger than 32x32 or 16x16 pixels) - but at least one such application exists in a past console and was designed specifically to provide multi-resolution render targets, aimed at accelerating VR.
I don't have access to PS5 details to speak to how that might be implemented in that particular GPU - but the main point is that precedent for doing it this way already exists in older hw.
Specifically to your point, the GE wouldn't be saving pixels in and of itself, but it can ostensibly provide the input to the later stages on how things get rendered.

To Elog's point - doing this earlier in the pipeline affords the potential for additional savings (and quality compromises too); eg. because the viewport can also vary pixel density, you can reduce that too, not just the number of samples. Admittedly that's less relevant to the 'typical' VRS examples we've all seen to date, but in the aforementioned VR examples it's actually a big win, as sample variation alone often isn't enough.
 
The conflation people referred to is between two things.
VRS as a DirectX feature, which is defined as a specific implementation of the process (ie. use the MSAA hardware, connect how the sampler backend schedules compute to the spec, etc.). There are no two ways around 'that' part of the specification, so yes, DF is presumably 100% right on this (I don't have the info, so I can only take their word for it).

But when people talk about 'Variable Rate Shading' as a concept - ie. how does one go about shading the scene with dynamically varying sample rate according to some image-analysis based inputs - there are different ways to go about it. And the whole 'but but software vs. hardware' is nonsensical to the discussion anyway. The only things that matter are:
1) Does it yield the expected performance benefits?
2) Is it 'practical' to use (ie. what does it cost me to integrate it with my game pipeline).

I'd say practicality is the most open to debate - the DirectX way of doing VRS was designed to be pretty simple, and yet it still has very practical issues, as evidenced by most implementations to date just not being very good - to put it mildly.
Performance benefits are simpler to talk about - but they are also scene-topology (and game) specific. Anyone who tells you 'VRS improves performance by X%' as a statement of fact is intentionally bending the truth, uninformed, or simply wrong.

My personal view on VRS is that it's actually by far most practical as a quality-enhancement feature rather than as a performance feature in standard games. Ie. use it to selectively upsample the parts of the scene you care most about, and let DRS & resolution upscalers do the performance lifting.
In VR games, VRS (the DirectX way) makes for perhaps the easiest/simplest way to do FOV-related optimisations (in terms of ease of integration with the rest of the rendering pipeline), especially on PC. There's only one alternative I've seen to date that could potentially match it for ease of use & give similar or better performance, and Sony holds a patent on it, so I don't know if we'll ever see it in a device.
I just wanted to say that we appreciate you taking the time to post in these threads.
 
Or devs simply put more optimisation effort in the console that is sold more often.
Even when they are under the Xbox umbrella, like Tango Gameworks and Arkane? The thread title may feel baity, but we are three years into the generation. When few games on Xbox come close to cross-gen games like GOW and Horizon, having questions is normal. We need Xbox to give us games like Hellblade 2 as soon as possible to see what their console is truly capable of. And whatever id, the studio behind Doom, is doing. In the meantime, Sony managed to make a console that, while clearly cheaper to make and sell (the PS5 DE is $400 in the US), is still equal to, if not sometimes better than, the competition. That is not an easy feat. And it is Microsoft's job to show their console in the best possible light. The Series X has points in its favor. We just have to see them more often. And this means games.
 

Tripolygon

Banned
The cache scrubbers have a couple of purposes.
One is to help manage the temperature of the console's solid-state drive (SSD) by clearing out any old or unnecessary data from its cache. By doing this, the cache scrubbers can reduce the workload on the SSD, which can help to keep its temperature within safe operating limits.

Additionally, the cache scrubbers can also help to optimize system performance by ensuring that the cache contains the most frequently used data. As the PS5 runs programs and games, it stores frequently used data in the cache to make it faster to access in the future. However, if the cache becomes too full or contains outdated data, it can actually slow down system performance. The cache scrubbers help to prevent this by continuously monitoring and clearing out old data from the cache, which helps to maintain optimal system performance.

So they would have an application outside of the SSD, and AMD could have adopted them if they wished, as this was part of the agreement Sony had with AMD.

The point is that the cache scrubbers don't add any real performance to the console.

The XSX has Sampler Feedback Streaming, not just Sampler Feedback. They are not the exact same thing.

As for Primitive vs Mesh Shaders, they were AMD's and Nvidia's competing solutions for a new geometry pipeline. They both had the same end goal but went about it a bit differently. AMD decided to adopt Mesh Shaders to standardise the process and make it easier for developers.

We don't need to theorise about it; this interview from AMD goes into it way beyond our knowledge.
Interestingly, AMD say that Sony went with Primitive Shaders; the Xbox is capable of doing both Primitive and Mesh Shaders.
Also of note is that Sony's first-party studios have already used Primitive Shaders in their games, while none have used Mesh Shaders on Xbox.

https://www-4gamer-net.translate.go...=ja&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp
Everything here is wrong, including your misunderstanding of a direct quote from AMD talking about how the primitive shader is what enables mesh shaders on DX12.

Primitive Shader as hardware exists in everything from Radeon RX Vega to the latest RDNA 3-based GPU. When viewed from DirectX 12, Radeon GPU's Primitive Shader is designed to work as a Mesh Shader.
 