UL releases 3DMark Mesh Shaders Feature test, first results of NVIDIA Ampere and AMD RDNA2 GPUs

SmokSmog

Member
Source


[Image: 3DMark Mesh Shaders Feature Test results for NVIDIA Ampere and AMD RDNA2 GPUs]



Edit:

RX 6800 XT before and after the driver update
[Image: RX 6800 XT results before and after the driver update]
 
Where is our resident AMD boy?
RTX > RDNA2.

But mesh shaders on the SX, wow at that improvement... it may be a bigger win than the TFLOPS difference...
 
Most likely API/implementation driven and not hardware. That difference does not make sense from a silicon point of view. It would be interesting to see how the PS5 variant would fare with the right API etc. in place.
 
I'll wait for more synthetic benchmarks and some real-time ones; these benchmarks are always subject to implementation, iirc.
 
The overall results are in line with nVidia Turing.
  • NVIDIA Ampere: 702%
  • AMD RDNA2: 547%
  • NVIDIA Turing (RTX): 409%
  • NVIDIA Turing: 244%
RDNA2 sits between Ampere and Turing.

AMD is an architecture generation behind nVidia even with a better silicon process.
 
The overall results are in line with nVidia.
  • NVIDIA Ampere: 702%
  • AMD RDNA2: 547%
  • NVIDIA Turing (RTX): 409%
  • NVIDIA Turing: 244%
RDNA2 sits between Ampere and Turing.
The RTX 3090 has almost 3x the FPS of the 6900 XT; those percentages show the gains with mesh shaders turned ON vs OFF.
 
The RTX 3090 has almost 3x the FPS of the 6900 XT; those percentages show the gains with mesh shaders turned ON vs OFF.
Because it renders the scene at a higher fps (60 vs 30).
If you want to compare only the Mesh Shader feature, like the benchmark tries to do, you need to look at the improvement with the feature on vs off.

BTW, why is AMD so bad at rendering that scene even with Mesh off?
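To make that on/off comparison concrete, here is a minimal sketch with entirely made-up frame rates (not the benchmark's actual scores, and assuming the reported percentage is the usual relative-gain definition rather than anything UL has documented):

```cpp
#include <cstdio>

int main() {
    // Hypothetical frame rates only, purely to illustrate the on/off comparison.
    double cardA_off = 60.0, cardA_on = 480.0; // fast card (placeholder values)
    double cardB_off = 20.0, cardB_on = 130.0; // slow card (placeholder values)

    // Relative gain from enabling mesh shaders: (on - off) / off, in percent.
    auto gain = [](double off, double on) { return (on - off) / off * 100.0; };

    std::printf("Card A: %.0f%% gain, %.0fx the raw FPS of card B\n",
                gain(cardA_off, cardA_on), cardA_off / cardB_off);
    std::printf("Card B: %.0f%% gain\n", gain(cardB_off, cardB_on));
    return 0;
}
```

The raw FPS advantage and the mesh shader gain are independent numbers, which is the distinction the benchmark table is trying to show.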
 
But mesh shaders on the SX, wow at that improvement... it may be a bigger win than the TFLOPS difference...
What? The relevant test would be AMD's mesh shader implementation vs Nvidia's mesh shader implementation vs Sony's 'mesh shader' implementation. This test says nothing about that.
 
That is absolutely awful. WTF is AMD doing?

Amazing performance gains on Nvidia cards. I think a 2060 will absolutely fucking annihilate the PS5 and XSX once devs start using mesh shaders. What a disaster.
 
That is absolutely awful. WTF is AMD doing?

Amazing performance gains on Nvidia cards. I think a 2060 will absolutely fucking annihilate the PS5 and XSX once devs start using mesh shaders. What a disaster.
Sony has their own implementation - they did not go for the AMD variant.
 
Nvidia was looking years into the future when designing Turing and Ampere, while AMD was making cards that can play 2013 games very well. They practically looked backwards in their design.
 
What? The relevant test would be AMD's mesh shader implementation vs Nvidia's mesh shader implementation vs Sony's 'mesh shader' implementation. This test says nothing about that.
The poster meant consoles alone, because consoles will never do better than PC, given the almost unlimited resources PCs have (starting with budget).

I'm one of those saying that this gen would show fewer graphical differences than previous ones, but with these features that will probably not be the case... contrary to the 8th gen, which was basically 7th gen++ even graphically (for PC players at least).
 
The CU scaling Nvidia documented when they presented the Turing architecture is clearly there for them.
Remove the margin of error and there is none on AMD's side... so if it's not a bug, the CUs are dependent on something else that prevents the scaling.
 
GAFers are so fucking funny. You show them an image that they think they comprehend at the surface level and they start spewing narratives. But hey, just looking out for myself here so I don't get quoted later.
 
I guess there is a silver lining here. It seems the AMD cards have awful performance even with Mesh Shaders off. And the percentage difference is actually somewhat on par with the 20 series GPUs.
 
Mesh shaders and VRS are modern GPU features that don't show up in the TFLOPS metric. Now we have the numbers on their impact; I'm glad MS waited for full RDNA2.
So you are suggesting it's dangerous to rely on TFLOPS as an absolute indicator of performance?

Hmm... could've sworn I heard that somewhere before. Well.
 
I think it's the geometry engine? I know that none of the Vulkan, DX12 Ultimate stuff applies to the PS5. On top of that, the PS5 is highly customized and fine-tuned to Sony's own needs.
Yes - all the functionality of the mesh shaders (AMD and Nvidia) was implemented as part of the new geometry engine in the PS5.
 
Hmm, does not bode well for consoles.

You do realize Series X fully supports this, right? There's already a video demonstration from Microsoft where Series X's mesh shading is running a demo at 4K that the 2080 Ti was only running at 1440p or something like that. Don't have time to look it up.
 
Yes - all the functionality of the mesh shaders (AMD and Nvidia) was implemented as part of the new geometry engine in the PS5.

Mesh Shaders aren't in the PS5's geometry engine. The PS5 doesn't support Mesh Shaders. It supports Primitive Shaders, which are more limited, still involve the input assembler, and don't go as far as Mesh Shaders do. I notice people keep attributing features to the PS5 that even its lead architect hasn't said it supports. Only Microsoft and AMD have confirmed these features to be inside the Xbox Series S/X.

Also, this benchmark appears not to be properly optimized for AMD hardware, and instead seems geared towards the Turing architecture, which is why Ampere isn't doing even better. So these AMD results could be even better if that's the case.
 
Mesh Shaders aren't in the PS5's geometry engine. The PS5 doesn't support Mesh Shaders. It supports Primitive Shaders, which are more limited, still involve the input assembler, and don't go as far as Mesh Shaders do. I notice people keep attributing features to the PS5 that even its lead architect hasn't said it supports. Only Microsoft and AMD have confirmed these features to be inside the Xbox Series S/X.

Also, this benchmark appears not to be properly optimized for AMD hardware, and instead seems geared towards the Turing architecture, which is why Ampere isn't doing even better. So these AMD results could be even better if that's the case.
So explain to me how Vulkan and OpenGL support Mesh Shaders too?

Mesh Shaders are a change in the shading pipeline... nVidia introduced them way before AMD or MS started talking about them.
The PS5 supports that too... how Sony implemented it in their APU is what you need to ask.
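For reference, mesh shaders are exposed on PC through DirectX 12 Ultimate (and, at the time, through NVIDIA's NV_mesh_shader extensions on Vulkan and OpenGL). Below is a minimal sketch of how a D3D12 application can check whether the driver exposes the feature, assuming a valid ID3D12Device created elsewhere and a Windows SDK recent enough to define D3D12_FEATURE_D3D12_OPTIONS7:

```cpp
#include <d3d12.h>

// Sketch only: query mesh shader support on an already-created device.
bool SupportsMeshShaders(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 options7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                           &options7, sizeof(options7))))
        return false; // older runtime/driver: treat as not supported

    // TIER_NOT_SUPPORTED means only the classic vertex/geometry pipeline
    // is available for this path; DX12 Ultimate GPUs (Turing, Ampere,
    // RDNA2) report a supported tier here.
    return options7.MeshShaderTier != D3D12_MESH_SHADER_TIER_NOT_SUPPORTED;
}
```

In a real application this would be called right after device creation, with the renderer falling back to the traditional pipeline when it returns false.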
 
Mesh Shaders aren't in the PS5's geometry engine. The PS5 doesn't support Mesh Shaders. It supports Primitive Shaders, which are more limited, still involve the input assembler, and don't go as far as Mesh Shaders do. I notice people keep attributing features to the PS5 that even its lead architect hasn't said it supports. Only Microsoft and AMD have confirmed these features to be inside the Xbox Series S/X.
First of all - Mesh shaders are a hardware implementation of primitive shaders.

Secondly - you are right that the above implementation of varying degrees of shader work and culling of geometry is something that the PS5 does not have.

However - culling of geometry and deciding which parts of that geometry get shaders applied (and to what extent) is instead done at the geometry engine level.

I.e. the PS5 has the full functionality (and, according to claims, a bit more than that) of what the 'mesh shaders' do on Nvidia and AMD cards, but the 'how' is different, since it is all driven by the geometry engine instead of a hardware piece downstream of the GE.

Most people reading that the PS5 does not have mesh shaders interpret that as if the PS5 does not have that functionality, and that is incorrect. It has the full functionality, but it is achieved differently.

Which hardware implementation of this functionality is better? Who knows, but Cerny is good at what he does.
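To illustrate the idea in the post above - culling clusters of geometry and only shading what survives - here is a purely conceptual, CPU-side sketch. Real mesh/primitive shader or geometry engine culling runs on the GPU and tests bounding volumes against the view frustum, backfacing normal cones, etc.; the struct, test, and numbers below are invented for illustration only:

```cpp
#include <cstdio>
#include <vector>

// Illustrative only: a "meshlet" is a small cluster of triangles with
// precomputed bounds, so visibility can be decided per cluster before
// any per-vertex or per-pixel work is spent on it.
struct Meshlet {
    float centerZ;  // toy bounding-sphere center depth
    float radius;   // toy bounding-sphere radius
};

// Toy visibility test standing in for frustum/occlusion/cone culling.
bool IsVisible(const Meshlet& m) {
    const float farPlane = 100.0f;
    return m.centerZ - m.radius < farPlane;
}

int main() {
    std::vector<Meshlet> meshlets = {{10.0f, 1.0f}, {250.0f, 1.0f}, {50.0f, 2.0f}};

    int shaded = 0;
    for (const Meshlet& m : meshlets)
        if (IsVisible(m))
            ++shaded; // only surviving clusters would get shading work

    std::printf("%d of %zu meshlets survive culling\n", shaded, meshlets.size());
    return 0;
}
```

Whether that per-cluster decision happens in a mesh shader (Turing/Ampere/RDNA2) or in the geometry engine front end (as described above for the PS5) is exactly the 'how' the post is talking about.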
 
Mesh Shaders aren't in the PS5's geometry engine. The PS5 doesn't support Mesh Shaders. It supports Primitive Shaders, which are more limited, still involve the input assembler, and don't go as far as Mesh Shaders do. I notice people keep attributing features to the PS5 that even its lead architect hasn't said it supports. Only Microsoft and AMD have confirmed these features to be inside the Xbox Series S/X.

Also, this benchmark appears not to be properly optimized for AMD hardware, and instead seems geared towards the Turing architecture, which is why Ampere isn't doing even better. So these AMD results could be even better if that's the case.

Nah, God Cerny saw how shitty AMD's implementation was and pulled his own perfect mesh shader out of his ass and told AMD to put it in their APU.


edit:
Which hardware implementation of this functionality is better? Who knows, but Cerny is good at what he does.

See, Cerny knows better than AMD's engineers.
 
First of all - Mesh shaders are a hardware implementation of primitive shaders.

Secondly - you are right that the above implementation of varying degrees of shader work and culling of geometry is something that the PS5 does not have.

However - culling of geometry and deciding which parts of that geometry get shaders applied (and to what extent) is instead done at the geometry engine level.

I.e. the PS5 has the full functionality (and, according to claims, a bit more than that) of what the 'mesh shaders' do on Nvidia and AMD cards, but the 'how' is different, since it is all driven by the geometry engine instead of a hardware piece downstream of the GE.

Most people reading that the PS5 does not have mesh shaders interpret that as if the PS5 does not have that functionality, and that is incorrect. It has the full functionality, but it is achieved differently.

Which hardware implementation of this functionality is better? Who knows, but Cerny is good at what he does.

The PS5 method is inferior because it's not a fully reinvented/programmable geometry pipeline on the level of what Nvidia did with Turing/Ampere and what AMD did with RDNA2 on PC and Xbox Series X/S. Primitive Shaders are an in-between step that AMD has moved on from because they aren't better than Mesh Shaders. Primitive Shaders were first introduced in Vega, but due to a hardware problem they were never exposed for use. They were fixed for 1st-gen RDNA, and then immediately abandoned/taken to a whole other level for RDNA2 with Mesh Shaders.

Microsoft waited longer to get the full feature, not the mere point revision.
 
You do realize Series X fully supports this, right? There's already a video demonstration from Microsoft where Series X's mesh shading is running a demo at 4K that the 2080 Ti was only running at 1440p or something like that. Don't have time to look it up.
Must have been photomode, where the X excels.
 
So explain to me how Vulkan and OpenGL support Mesh Shaders too?

Mesh Shaders are a change in the shading pipeline... nVidia introduced them way before AMD or MS started talking about them.
The PS5 supports that too... how Sony implemented it in their APU is what you need to ask.

It is supported on Nvidia Turing and Ampere hardware, and now on RDNA2. It isn't supported on 1st-gen RDNA, which only supports the same primitive shaders that are inside the PS5 via the same Geometry Engine. Series X, meanwhile, has a mesh-shading geometry engine. The PlayStation 5 does not support Mesh Shaders. It has something that works in a similar way but simply doesn't go nearly as far: primitive shaders. (And no, I'm not saying the PS5 is RDNA1; it's clearly a hybrid/custom design, but it's using the 1st-gen RDNA geometry engine.)

 
Because it renders the scene at a higher fps (60 vs 30).
If you want to compare only the Mesh Shader feature, like the benchmark tries to do, you need to look at the improvement with the feature on vs off.

BTW, why is AMD so bad at rendering that scene even with Mesh off?

It seems that many do not understand what the really interesting values in this benchmark are!! As Ethomaz said, it is not the fps in the table that is interesting, but the increase in performance from using mesh shaders that needs to be noticed. And in this case, the results for AMD are clearly not bad. The point of this benchmark is to evaluate how much mesh shaders could increase performance...

Edited to add his message.
 
You do realize Series X fully supports this, right? There's already a video demonstration from Microsoft where Series X's mesh shading is running a demo at 4K that the 2080 Ti was only running at 1440p or something like that. Don't have time to look it up.
Oh you are right, that one with the dragon in the room or what it was....
 
It seems that many do not understand what the really interesting values in this benchmark are!! As Ethomaz said, it is not the fps in the table that is interesting, but the increase in performance from using mesh shaders that needs to be noticed. And in this case, the results for AMD are clearly not bad. The point of this benchmark is to evaluate how much mesh shaders could increase performance...

Edited to add his message.
Exactly.
People need to understand the context of the benchmark.

AMD is really having a bad render time on that scene that I can't explain... maybe it is as they said... it will get better with drivers.
But the raw performance of the AMD cards in the scene is not the point of this benchmark.

What really matters is how much each card gains with Mesh Shaders enabled.

RDNA 2 is showing better results than Turing... of course it is still behind Ampere, but the difference is not as bad as people made out based on the lower fps... actually the 6800 only showed lower gains than the RTX 3070, 3080 and 3090.
 
Exactly.
People need to understand the context of the benchmark.

AMD is really having a bad render time on that scene that I can't explain... maybe it is as they said... it will get better with drivers.
But the raw performance of the AMD cards in the scene is not the point of this benchmark.

What really matters is how much each card gains with Mesh Shaders enabled.

RDNA 2 is showing better results than Turing... of course it is still behind Ampere, but the difference is not as bad as people made out based on the lower fps... actually the 6800 only showed lower gains than the RTX 3070, 3080 and 3090.

And the results may be skewed against Ampere and RDNA2 because this benchmark is much more geared towards the Turing architecture, so they could be even better for all we know.
 