
PS5 Pro Specs Leak is Real, Releasing Holiday 2024 (Insider Gaming)

PaintTinJr

Member
Do you mean this?
Real-Time Neural Texture Upsampling in God of War Ragnarok on PS5

If so, it's using FP16.


DP4a (Int8) is faster and more efficient; how did you come to the conclusion that DP4a would be less flexible?
The bigger data type (halfs) they chose wasn't for performance reasons but a necessity of the algorithm's accuracy, so the specific (ASIC?) DP4a wouldn't have worked for this algorithm anyway. The performance and versatility came from having full control of the hardware, which is explained in the 10 or so slides after the one you included (which shows the state prior to optimisation). So my assertion is that even if a different algorithm needed INT8 instead and could use DP4a on RDNA2 cards, the PS5 solution would still exploit optimisations like doing 2 blocks at once (3 in the final optimisation), have the flexibility to interleave them within the async processing, and have complementary resource usage throughout, rather than using DP4a and waiting on it to return the result of a massive 2K texture's worth of blocks being passed through it to produce a 4K texture, with very little control for the programmer of the feature.
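For context, a minimal Python sketch of what a DP4a instruction computes - four signed int8 multiply-adds folded into one accumulate. The packing helpers are purely illustrative, not any real GPU API:

```python
def pack_int8(vals):
    """Pack four signed 8-bit values into one 32-bit word (illustrative)."""
    word = 0
    for i, v in enumerate(vals):
        word |= (v & 0xFF) << (8 * i)
    return word

def dp4a(a, b, acc):
    """Simulate DP4a: dot product of the four signed int8 lanes packed
    into 32-bit words a and b, added to the accumulator acc."""
    def lanes(word):
        out = []
        for i in range(4):
            byte = (word >> (8 * i)) & 0xFF
            out.append(byte - 256 if byte & 0x80 else byte)
        return out
    return acc + sum(x * y for x, y in zip(lanes(a), lanes(b)))

# One instruction's worth of work: 1*5 + 2*6 + 3*7 + 4*8 = 70
result = dp4a(pack_int8([1, 2, 3, 4]), pack_int8([5, 6, 7, 8]), 0)
```

The point of contention here is exactly this shape: the hardware gets its speed by fixing the data type (int8) and the operation, which is also what costs it flexibility.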
 

PaintTinJr

Member
But with VRS we are shading fewer fragments. We are basically reducing resolution in specific areas of the image.
IIRC we are talking about orders-of-magnitude reductions in fragments per pixel. I'm pretty sure VRS doing 4 fragments per framebuffer pixel would on average be a much, much lighter workload than a typical complex non-VRS workload.
 

schaft0620

Member
i just hope its black

What if it's $550 with no blades or a disc drive and you have to buy them? Technically it would be black.

EDIT: I mean it would connect to the add-on disc drive if you wanted it. So it would be:

$550 for console
$60 for the blades
$70 for the disc drive

$630 pre-tax, launch games are essentially NBA 2K, Until Dawn, and CoD with Sony's backlog working with it.
 
Last edited:

winjer

Gold Member
IIRC we are talking about orders-of-magnitude reductions in fragments per pixel. I'm pretty sure VRS doing 4 fragments per framebuffer pixel would on average be a much, much lighter workload than a typical complex non-VRS workload.

No. With VRS we would be just shading 1 big pixel, instead of 4.
Resulting in fewer pixels to shade.
 
But the texture remains the same; it's the number of fragments that changes, per location.
The texture won't show up at full resolution, only because the shading doesn't have enough detail.

The quality will only remain similar if the group of pixels is similar enough. Otherwise, it will be immediately noticeable.
That is why picking the regions to apply VRS to is so important.
Not with RDNA2 VRS. You've read too many PR talking points or cherry-picked screenshots showing very dark textures where the blockiness is hidden. The lead developer did perf benchmarks and visual comparisons (with zooms). RDNA2 VRS literally destroys textures (and valuable artists' work). Textures no longer look the same and lose tons of high-frequency detail. With their custom software VRS, textures look very similar to the original texture, often identical.
 
Last edited:

Loxus

Member
The bigger data type (halfs) they chose wasn't for performance reasons but a necessity of the algorithm's accuracy, so the specific (ASIC?) DP4a wouldn't have worked for this algorithm anyway. The performance and versatility came from having full control of the hardware, which is explained in the 10 or so slides after the one you included (which shows the state prior to optimisation). So my assertion is that even if a different algorithm needed INT8 instead and could use DP4a on RDNA2 cards, the PS5 solution would still exploit optimisations like doing 2 blocks at once (3 in the final optimisation), have the flexibility to interleave them within the async processing, and have complementary resource usage throughout, rather than using DP4a and waiting on it to return the result of a massive 2K texture's worth of blocks being passed through it to produce a 4K texture, with very little control for the programmer of the feature.
What's New in WebGPU (Chrome 123)
DP4a (Dot Product of 4 Elements and Accumulate) refers to a set of GPU instructions commonly used in deep learning inference for quantization. It efficiently performs 8-bit integer dot products to accelerate the computation of int8-quantized models. It can save up to 75% of the memory and network bandwidth and improve the inference performance of machine learning models compared with their f32 versions. As a result, it's now heavily used within many popular AI frameworks.




DP4a (Int8) is more accurate and easier to optimize for than FP16.

PS5 didn't have DP4a, so they had no choice but to utilize what they have. Imagine the PS5's performance if it had DP4a for devs to utilize.
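As a rough sketch of the int8 quantization those Chrome notes describe - symmetric quantization with one shared scale. The function names are made up for illustration; this is not any framework's actual API:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: pick a scale so the largest weight
    maps to +/-127, then round. Storage drops from 4 bytes (f32) to
    1 byte per weight - the 'up to 75%' saving mentioned above."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.82, -0.31, 0.05, -0.99]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)  # each value within one scale step of the original
```

The trade-off debated in this thread is whether a given algorithm can tolerate that rounding; the Ragnarok presentation's answer was apparently no for int8, yes for FP16.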
 

PaintTinJr

Member
What's New in WebGPU (Chrome 123)
DP4a (Dot Product of 4 Elements and Accumulate) refers to a set of GPU instructions commonly used in deep learning inference for quantization. It efficiently performs 8-bit integer dot products to accelerate the computation of int8-quantized models. It can save up to 75% of the memory and network bandwidth and improve the inference performance of machine learning models compared with their f32 versions. As a result, it's now heavily used within many popular AI frameworks.




DP4a (Int8) is more accurate and easier to optimize for than FP16.

PS5 didn't have DP4a, so they had no choice but to utilize what they have. Imagine the PS5's performance if it had DP4a for devs to utilize.
For a start, that's mostly about network training rather than inference, and the Ragnarok algorithm already has to produce BC7 format data, so it has other constraints that even INT16 might not have suited, because the BC7 format has already quantized the raw RGBA data into lossy compressed data - which was essential to leverage the built-in BC7 decompression acceleration. So based on the presentation, the precision isn't suited to INT4, INT8 or INT16: FP32 was preferred originally, but it worked with minimal difference at FP16. A 4-times speed-up of their first algorithm is achieved by to-the-metal optimisations - effectively bin-packing the workload and eliminating cache misses - while still working in a general-purpose async compute language. So I very much doubt that DP4a would a) gain much more performance, or b) provide the ability to use the feature without blocking the GPU's ability to interleave other work asynchronously.
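On the FP32-vs-FP16 point, half precision is easy to probe with Python's standard struct module (the 'e' format code is IEEE 754 half precision). This is a generic illustration of the precision trade-off, not anything from the presentation:

```python
import struct

def to_fp16(x):
    """Round a float to IEEE 754 half precision and back, exposing the
    error introduced when dropping from FP32/FP64 down to FP16."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

exact = to_fp16(0.5)   # powers of two survive the conversion exactly
lossy = to_fp16(0.1)   # most values pick up a small rounding error
```

Whether that rounding error is tolerable is exactly what a "minimal difference at FP16" claim has to be validated against, per algorithm.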
 
Last edited:

omegasc

Member
VRS in Doom, on the Series X looked better than the VRS on the PS5.

Meanwhile on PC, we had a few games that had a very good implementation of VRS.

If the PS5 had hardware VRS, chances are most games would use it.

And then we have the lack of support for DP4A. If the PS5 had this feature we might already have PSSR.
You might be mistaking it for Dead Space Remake, where the VRS implementation was on at first, with ugly results. Then IIRC they disabled VRS on PS5 and got no perf decrease, so VRS was just making image quality worse.
 

PaintTinJr

Member
No. With VRS we would be just shading 1 big pixel, instead of 4.
Resulting in fewer pixels to shade.
That's not how it works, and if it did you'd effectively end up rendering an image with a quarter of the detail of the framebuffer resolution, because you can't sample at a 1:1 ratio; you have to sample at at least twice the maximum frequency (2x horizontal and 2x vertical in this case) to avoid undersampling, as stipulated by the Nyquist rate in sampling theory.
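The Nyquist point can be demonstrated numerically; a small sketch (the frequencies are arbitrary illustrative values):

```python
import math

def sample_sine(freq, rate, n):
    """Return n samples of a sine wave of the given frequency, taken
    at the given sample rate."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

# A 5 Hz sine sampled at only 4 Hz - well below its 10 Hz Nyquist
# rate - yields the same samples (to rounding) as a genuine 1 Hz sine:
# the high-frequency detail aliases down and is unrecoverable.
undersampled = sample_sine(5, 4, 8)
low_freq = sample_sine(1, 4, 8)
```

The rendering analogue is that shading below the signal's spatial frequency doesn't just lose detail, it misrepresents it.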
 

Gaiff

SBI’s Resident Gaslighter
You might be mistaking it for Dead Space Remake, where the VRS implementation was on at first, with ugly results. Then IIRC they disabled VRS on PS5 and got no perf decrease, so VRS was just making image quality worse.
Pretty sure they disabled it on both PS5 and Xbox lol. On PC, you could always toggle that shit off to save your image quality.

The most competent use of VRS is probably Gears 5.
 

Mr.Phoenix

Member
What if it's $550 with no blades or a disc drive and you have to buy them? Technically it would be black.

EDIT: I mean it would connect to the add-on disc drive if you wanted it. So it would be:

$550 for console
$60 for the blades
$70 for the disc drive

$630 pre-tax, launch games are essentially NBA 2K, Until Dawn, and CoD with Sony's backlog working with it.
I believe it's going to be $499 without the disc drive.
 

schaft0620

Member
I believe it's going to be $499 without the disc drive.

I don't see Sony going

$400 for the discless
$450 for the Slim
$500 for the Pro w/o drive


They could go a few routes: they could kill off the Slim, or they could kill off the discless and make the Slim $400. A year ago I would have said a $600 Pro is a lock, but the world is fighting inflation and we are reaching critical mass with costs going up. At $600 it would have to be some sort of limited edition Helldivers 2, Venom or GTA6 model to be successful.

Either way, the difference between ANY PS5 model and the Pro is not going to be $50. The Slim is $500 today; they are not going to do a $100 price cut this year. I think they are trying to figure out the best path, and there is some skepticism that a Pro will come out this year. If it doesn't, it's only because they did not figure out how to solve this problem.

$549.99 w/o a disc drive gets them there. Then they can drop the Slim ($450) and the discless ($400) to the holiday price full time. Ultimately, I think they drop one of the two standard PS5 models.
 

Poordevil

Member
As opposed to what? My OG PS5 is silent as death and doesn't put out any excessive heat. If yours is different then you better check it for repairs because since day one, PS5 is known as a very quiet console.
I had the PS4 Pro in mind. I hear or read a lot of posts from peeps claiming their PS4 Pro is loud - that it sounds like a jet engine. Mine is nothing like that; it is whisper quiet. But I got it later in the console's life cycle. Maybe the launch PS4 Pros were loud and Sony fixed it?
I don't have PS5. Really ready for the PS5 Pro!
 

Perrott

Member
Marketing must be separate from the development deal they've had, because the parity issue was exposed in the 900p XB1 vs PS4 AC debacle, when Microsoft actually confirmed - in a statement that has probably since been removed from DF - that they had a long-standing parity-or-better-on-Xbox contract with Ubisoft.
A development parity deal between Ubisoft and Microsoft for Xbox versions to be as good if not better than the competition's? That cannot be true...

Watch Dogs ran at a higher resolution on PS4 (900p) than on the One (720p), and taking things into AC territory, we have the example of another major resolution gap between Assassin's Creed IV on PS4 (1080p) and One (900p), not to mention AC Syndicate receiving a PS4 Pro patch at launch in November 2016 (one year before the arrival of the One X), which undeniably turned the Pro into the best console to play that game on.
 

PaintTinJr

Member
A development parity deal between Ubisoft and Microsoft for Xbox versions to be as good if not better than the competition's? That cannot be true...

Watch Dogs ran at a higher resolution on PS4 (900p) than on the One (720p), and taking things into AC territory, we have the example of another major resolution gap between Assassin's Creed IV on PS4 (1080p) and One (900p), not to mention AC Syndicate receiving a PS4 Pro patch at launch in November 2016 (one year before the arrival of the One X), which undeniably turned the Pro into the best console to play that game on.
I'd assume Watch Dogs wouldn't be part of an AC deal. And are you saying that the Pro vs One X version wasn't equal or better on the One X, or did they add features/fx and make alterations beyond resolution/frame-rate/texture size that the PS4/One versions didn't get?

As for AC IV, the backlash from the parity with AC on the One at 900p was costing Ubi credibility and probably sales - I know I've skipped all since - and with the One already failing hard, my best guess is that Microsoft relented after accepting that the gulf between the PS4 and One hardware was set, and that asking Richard/DF to stop focusing on pixel counting as the new SoP was the easier option.
 

DJ12

Member
VRS in Doom, on the Series X looked better than the VRS on the PS5.

Meanwhile on PC, we had a few games that had a very good implementation of VRS.

If the PS5 had hardware VRS, chances are most games would use it.

And then we have the lack of support for DP4A. If the PS5 had this feature we might already have PSSR.
Absolute twaddle.

VRS reduces image quality; that's actually what the technology does. It may be passable at full 4K, where each cluster of 4 pixels is the same; at lower resolutions it's jarring and noticeable everywhere.

For eye-tracked VR it's an absolutely brilliant feature; for everything else it should never be used.

I think the Series X with its hardware VRS shows it's not that big a deal for performance and only brings worse IQ.
 
Last edited:

Perrott

Member
I'd assume Watch Dogs wouldn't be part of an AC deal. And are you saying that the Pro vs One X version wasn't equal or better on the One X, or did they add features/fx and make alterations beyond resolution/frame-rate/texture size that the PS4/One versions didn't get?
Assassin's Creed Syndicate never got an Xbox One X patch, so the PS4 Pro ended up offering by far the best console version with its upscaled 4K mode vs the 900p of the game on both the PS4 and One versions.

So wouldn't that massive resolution advantage on the mid-gen PlayStation console constitute a violation of said alleged parity agreement for AC too since Ubisoft was, again, allegedly forced to hold back Unity from running at 1080p on PS4?

That's why I don't buy the idea of this parity agreement either existing or, if it does exist, meaning what you think it means.
 
Last edited:

winjer

Gold Member
If you are doing 4 or more fragments per pixel with VRS, you would also be doing 4 or more texture lookups for that pixel in the variable rate shading, so you will be capturing virtually all the same direct rendering detail that can be displayed at the limit of minification.

What you are describing is MSAA, where we pick regions to increase the sample rate.
But with VRS, we do the exact opposite. We pick regions where we reduce the sample rate. So for example, instead of sampling a group of 4 pixels, we sample only one time, at the centre.
The result is one big pixel that used to be 2x2, and because of that we only have to shade it once.
This is where the performance increase comes from, and where the potential image degradation comes from.
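That coarse-shading idea can be sketched in a few lines of Python; the `shade` callback is an illustrative stand-in for a real pixel shader:

```python
def shade_coarse(width, height, rate, shade):
    """Invoke the shader once per rate x rate block (at the block
    centre) and replicate the colour across the block, as 2x2 coarse
    VRS does. Returns the framebuffer and the invocation count."""
    fb = [[None] * width for _ in range(height)]
    invocations = 0
    for by in range(0, height, rate):
        for bx in range(0, width, rate):
            colour = shade(bx + rate // 2, by + rate // 2)
            invocations += 1
            for y in range(by, min(by + rate, height)):
                for x in range(bx, min(bx + rate, width)):
                    fb[y][x] = colour
    return fb, invocations

# An 8x8 tile shaded at a 2x2 rate costs 16 invocations instead of 64;
# the saving is real, and so is the loss of per-pixel detail.
_, full = shade_coarse(8, 8, 1, lambda x, y: (x, y))
_, coarse = shade_coarse(8, 8, 2, lambda x, y: (x, y))
```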

 

Mr.Phoenix

Member
I don't see Sony going

$400 for the discless
$450 for the Slim
$500 for the Pro w/o drive


They could go a few routes: they could kill off the Slim, or they could kill off the discless and make the Slim $400. A year ago I would have said a $600 Pro is a lock, but the world is fighting inflation and we are reaching critical mass with costs going up. At $600 it would have to be some sort of limited edition Helldivers 2, Venom or GTA6 model to be successful.

Either way, the difference between ANY PS5 model and the Pro is not going to be $50. The Slim is $500 today; they are not going to do a $100 price cut this year. I think they are trying to figure out the best path, and there is some skepticism that a Pro will come out this year. If it doesn't, it's only because they did not figure out how to solve this problem.

$549.99 w/o a disc drive gets them there. Then they can drop the Slim ($450) and the discless ($400) to the holiday price full time. Ultimately, I think they drop one of the two standard PS5 models.
I don't know where you are getting your prices.

On both Amazon and the Sony PlayStation direct store, the PS5 Slim with drive is listed at $499 and selling for $450, while the Slim without a drive is listed at $449 and selling for $399.

What is likely to happen, is that Sony could stop selling any bundled option of the Slim with a drive and have the Slim sit at $399. That way, they can sell a Pro console, without a drive for $499. And have a Pro+drive bundle that retails for $550.

We can also see something like this...

$399 PS5
$449 PS5 + BRD
$499 PS5pro
$549 PS5pro + BRD

TLDR, the slim today is not actually $499. And the difference between a slim and pro is not going to be $50, it will be $100.
 

winjer

Gold Member
Absolute twaddle.

VRS reduces image quality; that's actually what the technology does. It may be passable at full 4K, where each cluster of 4 pixels is the same; at lower resolutions it's jarring and noticeable everywhere.

For eye-tracked VR it's an absolutely brilliant feature; for everything else it should never be used.

I think the Series X with its hardware VRS shows it's not that big a deal for performance and only brings worse IQ.

I never said image quality is improved with VRS. I said that the VRS implementation in Doom is better on the Series X than on the PS5. This is because the Series X has a higher tier of hardware VRS.
 

PaintTinJr

Member
What you are describing is MSAA, where we pick regions to increase the sample rate.
But with VRS, we do the exact opposite. We pick regions where we reduce the sample rate. So for example, instead of sampling a group of 4 pixels, we sample only one time, at the centre.
The result is one big pixel, that used to be 2x2. And because of that, then we only have to shade one.
This is where the performance increase comes from, and where the potential image degradation comes from.


No, I think you are not appreciating the major difference between a fragment and a pixel - which is Microsoft/Nvidia's fault for not using the original RenderMan/3DLabs/OpenGL terminology, fragment shaders, and instead confusingly and idiotically calling them pixel shaders.

Shading just one pixel still results in multiple fragments in most regular rendering scenarios, where perspective-projected, shaded, texture-mapped geometry is rendered into that pixel.

VRS is intended to take the scenario where too many fragments are being calculated at distance beyond what the minification can allow the blended framebuffer pixel to display, meaning it is wasted processing.

The VRS artefacts from lowering the shading rate typically all come down to oversights of the required direct or indirect sample rate(fragments per pixel) needed to avoid undersampling relative to non-VRS.
 

winjer

Gold Member
No, I think you are not appreciating the major difference between a fragment and a pixel - which is Microsoft/Nvidia's fault for not using the original RenderMan/3DLabs/OpenGL terminology, fragment shaders, and instead confusingly and idiotically calling them pixel shaders.

Yes, I'm using mostly Nvidia/Microsoft terminology. Both Nvidia and DirectX are the current standards on PC, not OpenGL.
Even Unreal Engine uses 'pixel shader' as its terminology.

Shading just one pixel still results in multiple fragments in most regular rendering scenarios, where perspective-projected, shaded, texture-mapped geometry is rendered into that pixel.

VRS is intended to take the scenario where too many fragments are being calculated at distance beyond what the minification can allow the blended framebuffer pixel to display, meaning it is wasted processing.

The VRS artefacts from lowering the shading rate typically all come down to oversights of the required direct or indirect sample rate(fragments per pixel) needed to avoid undersampling relative to non-VRS.

Overdraw of pixel shaders always occurs; it's a consequence of us not being able to remove all hidden surfaces before we start the rendering pipeline.
It's also a consequence of AMD and Nvidia having a hardware rasterizer that works on pixel quads.
Yes, VRS and MSAA have consequences on how these things work. It's just how it works.

But what I'm saying, in simple terms, is that MSAA increases the samples per pixel and VRS reduces the samples per pixel, both in a localized fashion.
This means MSAA increases image quality but reduces performance, while VRS reduces image quality but increases performance.
And the result is having to shade fewer pixels. Or fragments.
 
Last edited:

PaintTinJr

Member
Yes, I'm using mostly Nvidia/Microsoft terminology. Both Nvidia and DirectX are the current standards on PC, not OpenGL.
Even Unreal Engine uses 'pixel shader' as its terminology.



Overdraw of pixel shaders always occurs; it's a consequence of us not being able to remove all hidden surfaces before we start the rendering pipeline.
It's also a consequence of AMD and Nvidia having a hardware rasterizer that works on pixel quads.
Yes, VRS and MSAA have consequences on how these things work. It's just how it works.

But what I'm saying, in simple terms, is that MSAA increases the samples per pixel and VRS reduces the samples per pixel, both in a localized fashion.
This means MSAA increases image quality but reduces performance, while VRS reduces image quality but increases performance.
And the result is having to shade fewer pixels. Or fragments.
If you think we are talking about overdraw, then unfortunately you have some prerequisite reading to do on the fundamentals of rasterization, from its origins, to understand that 'pixel shading' is just a silly name and not a description of the fragment-shading processing taking place in the 'pixel shaders' that carry out that part of the rasterization pipeline in DirectX.

Here's the book that everyone in the industry 25-30 years ago would have had in their collection, commissioned by IBM et al before Nvidia and Microsoft. A lot of the programming information is out of date, but the maths and the fundamentals all still apply to what's happening under the hood.

 

PaintTinJr

Member

From the very retro, Doom 3-era visuals the game still gives off even at high resolution, you can tell that they didn't utilise the PS5 fully, and the 15% VRS saving claim is pitiful IMO - not even equal to the fill-rate advantage of the PS5 over the XsX... which tells its own story about the dev's credibility as an impartial source.

The GPU cache setup on the PS5 favours stable high frame-rates more than the Series consoles', and especially with simple stuff like this, relying mostly on quality texturing and medium-quality geometry, the advantages of the PS5 would have been huge in this game had it been developed by a skilled team without any requirement to extol the home team's inferior hardware VRS setup.
 

winjer

Gold Member
If you think we are talking about overdraw, then unfortunately you have some prerequisite reading to do on the fundamentals of rasterization, from its origins, to understand that 'pixel shading' is just a silly name and not a description of the fragment-shading processing taking place in the 'pixel shaders' that carry out that part of the rasterization pipeline in DirectX.

Here's the book that everyone in the industry 25-30 years ago would have had in their collection, commissioned by IBM et al before Nvidia and Microsoft. A lot of the programming information is out of date, but the maths and the fundamentals all still apply to what's happening under the hood.

Sorry, but I'm not going to buy and read a book just for the sake of an internet argument.
So if you could elaborate a bit more, I would appreciate it.
I do think that a good part of our discussion is not so much the technical concepts, but the misunderstanding from using different terminology.

But I guess that you are talking about how a fragment is not a pixel when using multiple samples.
But in the case of a single sample per pixel, they end up being the same.
A pixel has no size, only coordinates and alpha.
But even in the case of a multicomplex pixel, the fragment shader is only run once.
 

Goalus

Member
Sorry, but I'm not going to buy and read a book just for the sake of an internet argument.
So if you could elaborate a bit more, I would appreciate it.
I would like to hear this too, as with early z-testing no fragment that doesn't eventually become a pixel should ever enter the fragment shader. And consequently, "pixel shader" suddenly makes sense as a term.
 
Last edited:

winjer

Gold Member
From the very retro, Doom 3-era visuals the game still gives off even at high resolution, you can tell that they didn't utilise the PS5 fully, and the 15% VRS saving claim is pitiful IMO - not even equal to the fill-rate advantage of the PS5 over the XsX... which tells its own story about the dev's credibility as an impartial source.

The GPU cache setup on the PS5 favours stable high frame-rates more than the Series consoles', and especially with simple stuff like this, relying mostly on quality texturing and medium-quality geometry, the advantages of the PS5 would have been huge in this game had it been developed by a skilled team without any requirement to extol the home team's inferior hardware VRS setup.

Here, the comparison is mostly on how each chip can perform hardware based VRS.
We already know it's possible to create a VRS solution based on compute. But that is a different matter.
The advantage of hardware VRS, such as was developed by Nvidia and has become the standard for modern GPUs, is that a dev can just target an API and its respective hardware functions, without having to write a custom compute solution.
This is much cheaper and faster to implement.
 

Gaiff

SBI’s Resident Gaslighter
I don't know where you are getting your prices.

On both Amazon and Sony Playstation direct store... The PS5 Slim is (listed/sold) for $499/$450 and $449/$399 for the Slim+drive and just the Slim respectively.

What is likely to happen, is that Sony could stop selling any bundled option of the Slim with a drive and have the Slim sit at $399. That way, they can sell a Pro console, without a drive for $499. And have a Pro+drive bundle that retails for $550.

We can also see something like this...

$399 PS5
$449 PS5 + BRD
$499 PS5pro
$549 PS5pro + BRD

TLDR, the slim today is not actually $499. And the difference between a slim and pro is not going to be $50, it will be $100.
I don't see them releasing a discless Pro. The margins on the PS5 Digital are presumably much slimmer than on the regular one. An optical drive sure as hell doesn't cost $100 to produce. I also believe Sony won't want so many SKUs on the market. It's equally doubtful that the price of any of the consoles will drop, given the current economy and the fact that Sony increased prices just a year ago.

I see:

$399 PS5 Digital
$499 PS5
$599 PS5 Pro
 

PaintTinJr

Member
Sorry, but I'm not going to buy and read a book just for the sake of an internet argument.
So if you could elaborate a bit more, I would appreciate it.
I do think that a good part of our discussion is not so much the technical concepts, but the misunderstanding from using different terminology.

But I guess that you are talking about how a fragment is not a pixel when using multiple samples.
But in the case of a single sample per pixel, they end up being the same.
A pixel has no size, only coordinates and alpha.
But even in the case of a multicomplex pixel, the fragment shader is only run once.
The part you are missing is how geometry (even a single triangle/polygon) gets broken into multiple fragments when it invariably overlaps or partially overlaps various pixels and is considered at a subpixel level (fragments), even just to create a basic aliased output that participates in the final shaded pixel and gives the noisy shape of the geometry volume being rendered. The process gets more critical when geometry sits at acute angles between the near and far frustum clip planes, or moves from the near to the far clip plane at acute angles, because candidate fragments crossing pixel boundaries become more common and harder to represent without losing the volume.
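A toy rasterizer makes the fragment-vs-pixel distinction concrete: testing several sample points inside each pixel shows how a single triangle produces partial fragments along its edges. All names here are illustrative, not any real API:

```python
def coverage(tri, width, height, sub=2):
    """Toy rasterizer: test sub x sub sample points per pixel, so a
    triangle edge crossing a pixel produces partial coverage rather
    than an all-or-nothing pixel. Returns per-pixel coverage in [0, 1]."""
    def edge(a, b, p):
        # signed area test: which side of edge a->b the point p lies on
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    a, b, c = tri
    cov = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            hits = 0
            for sy in range(sub):
                for sx in range(sub):
                    p = (x + (sx + 0.5) / sub, y + (sy + 0.5) / sub)
                    w = (edge(a, b, p), edge(b, c, p), edge(c, a, p))
                    if all(v >= 0 for v in w) or all(v <= 0 for v in w):
                        hits += 1
            cov[y][x] = hits / (sub * sub)
    return cov

# A triangle whose hypotenuse cuts through pixels yields fractional
# coverage along that edge: those are the partial fragments.
cov = coverage(((0, 0), (4, 0), (0, 4)), 4, 4)
```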
 
Last edited:

Fafalada

Fafracer forever
DP4a (Int8) is more accurate and easier to optimize for than FP16.
The paper you quoted refers to quantization to fixed point at lower bit-depth, not running SIMD operations on it (which is what DP4a does).
The flexibility trade-off is the same for 'all' SIMD operations - you perform more operations in a single instruction at the cost of having to tweak the algorithm/data to process that way.

What you are describing is MSAA, where we pick regions to increase the sample rate.
No - MSAA is a render-target property, there was no concept of regions with different sample rates.
VRS is what introduced fine-grained control over sample rate, and that includes increasing the sampling rate (arguably that's the more interesting way to use VRS, especially in VR - it just hasn't been explored too much yet).
 

winjer

Gold Member
The part you are missing is how geometry (even a single triangle/polygon) gets broken into multiple fragments when it invariably overlaps or partially overlaps various pixels and is considered at a subpixel level (fragments), even just to create a basic aliased output that participates in the final shaded pixel and gives the noisy shape of the geometry volume being rendered. The process gets more critical when geometry sits at acute angles between the near and far frustum clip planes, or moves from the near to the far clip plane at acute angles, because candidate fragments crossing pixel boundaries become more common and harder to represent without losing the volume.

I'm not missing that part. I just didn't understand what you were referring to.
Nvidia explains that in the video I posted, just with the Pixel Shader terminology.



But the thing is, using VRS means using fewer samples. We get a bigger coarse pixel, meaning more primitives are never sampled at all.
 

winjer

Gold Member
Doom Eternal on PS5 doesn't use any form of VRS at all, only DRS.

I just posted a video from DF where they interview a dev from id, showing and talking about how VRS and DRS were implemented on all platforms.
There is even an image at 400%, taken by DF, comparing VRS on the PS5, Series X and PC.
 

Goalus

Member
The part you are missing is how geometry (even a single triangle/polygon) gets broken into multiple fragments when it invariably overlaps or partially overlaps various pixels and is considered at a subpixel level (fragments), even just to create a basic aliased output that participates in the final shaded pixel and gives the noisy shape of the geometry volume being rendered.
That is an over-complicated way of saying "rasterization produces fragments that are afterwards passed to the fragment/pixel shader". It does not explain why "pixel shader" is supposedly wrong terminology.
 

Arioco

Member
I just posted a video from DF where they interview a dev from id, showing and talking about how VRS and DRS were implemented on all platforms.
There is even an image at 400%, taken by DF, comparing VRS on the PS5, Series X and PC.


Yes, I watched the video when it was uploaded, and the DF analysis too. But did you?

Again, DOOM Eternal doesn't use any form of VRS on PS5, and the image taken at 400% shows the artifacts on the Series consoles that are not present on PS5.

You can hear the whole explanation here, at minute 8:17

 

winjer

Gold Member
Yes, I watched the video when it was uploaded, and the DF analysis too. But did you?

Again, DOOM Eternal doesn't use any form of VRS on PS5, and the image taken at 400% shows the artifacts on the Series consoles that are not present on PS5.

You can hear the whole explanation here, at minute 8:17



What he says is that at that moment, the Series X is the only platform that has Tier 2 VRS.
The PS5 does not support Tier 2 VRS. So they implemented Tier 1 VRS.
And the PC later got Tier 2 VRS.

Here is the video, with DF comparing the PS5 with Tier 1 VRS vs the Series X with Tier 2 VRS and the PC with Tier 2 VRS.
All the while an id developer talks about how they implemented VRS on all platforms.

 