Digital Foundry: Nintendo Switch 2 DLSS Image Quality Analysis: "Tiny" DLSS/Full-Fat DLSS Confirmed

From Chrome AI
1. That's a regurgitation of whatever purely marketing statements spokespeople from the companies involved have made to date. This or any other AI doesn't have any visibility into the issue beyond that, which means asking them about something that is NDAed and hidden from the public is totally pointless.
2. There is no mention of "algorithms", and it even explicitly states that "The PS5 Pro will receive a ... update", not PSSR itself. Which can mean literally anything from PSSR getting an update to PSSR being switched to FSR4 in everything but the name.
 


ok, sure, it still hovers around 1080p on average. and in quality mode it can even reach ~1700p
it can reach highs of 1700p, but the game isn't necessarily at those high resolutions when it's producing IQ artifacts either. When you have dynamic res jumping across that massive a range, it doesn't help with the temporal artifacts. When people complained about the temporal noise in SW:JS and made those comparisons, they never mentioned framerate. Now it's important, though.
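To put the dynamic-res point in concrete terms, here's a minimal sketch (Python, invented numbers, not any engine's actual controller) of the kind of feedback loop consoles use, where the internal resolution chases the frame budget and the temporal upscaler has to cope with whatever it's handed:

```python
# Minimal dynamic-resolution-scaling sketch (illustrative only, made-up numbers).

TARGET_MS = 16.7          # 60 fps frame budget
MIN_H, MAX_H = 540, 1080  # e.g. the kind of handheld range discussed above

def adjust_resolution(current_h: int, last_frame_ms: float) -> int:
    """Step the internal height up or down based on the last frame's GPU cost."""
    if last_frame_ms > TARGET_MS:           # over budget -> drop resolution
        current_h = max(MIN_H, int(current_h * 0.9))
    elif last_frame_ms < TARGET_MS * 0.85:  # comfortable headroom -> raise it
        current_h = min(MAX_H, int(current_h * 1.05))
    return current_h

# A wide MIN_H..MAX_H range means the temporal upscaler sees wildly different
# input quality from frame to frame, which is where the artifacts come from.
```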
it's all about expectations. you do not expect a mobile device, running games at 540p, to have flawless image quality.
meanwhile you do expect this on a home console running dynamic 1224p or dynamic 1728p.
No, it's all about preference, and the point is you wouldn't get this on console even if it ran DLSS 3, because devs can and will sometimes choose a "tiny" version of it on limited hardware if it means lower frametime (higher framerate) and freed cores to boost RT features instead. It's a double-edged sword: you can choose a higher internal res to avoid worse artifacts and use a lighter "tiny DLSS", or, as DF suggested, drop the internal res even further and use "full fat" (still DLSS 3), but that does not mean you are going to be artifact-free, as SF and Cyberpunk show. That's what you're seeing on Pro too. It's simply dev choices that the user cannot change and may not necessarily prefer.
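To put rough numbers on that tradeoff, a back-of-the-envelope sketch where every cost figure is made up purely to show the shape of the choice the developer is making:

```python
# Illustrative frame-budget arithmetic only -- every number here is invented.
BUDGET_MS = 16.7  # 60 fps

def frame_cost(internal_pixels: float, upscaler_ms: float, rt_ms: float) -> float:
    """Very crude model: raster cost scales with the internal pixel count."""
    raster_ms = 9.0 * (internal_pixels / (1280 * 720))  # say 9 ms at 720p
    return raster_ms + upscaler_ms + rt_ms

# Option A: higher internal res + a cheap "tiny" upscaler, no extra RT
a = frame_cost(1280 * 720, upscaler_ms=0.5, rt_ms=0.0)
# Option B: lower internal res + "full fat" upscaling, spend the savings on RT
b = frame_cost(960 * 540, upscaler_ms=2.0, rt_ms=3.0)

print(f"Option A: {a:.1f} ms, Option B: {b:.1f} ms, budget {BUDGET_MS} ms")
```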

because we know how good games can look with proper temporal reconstruction when running at such input resolutions. we know DLSS almost consistently looks on par or better than native when scaling from 960p to 1440p for example... or how it looks on par or better than native in 90% of games when scaling 1440p to 2160p. PSSR never looks as good as native, and sometimes looks problematic.
That depends on whether you really despise artifacts or favour IQ, but DLSS 3 and PSSR are largely very similar, and PSSR has looked as good as native and even better than native in games like TLOU, so that "never" is untrue.
while we already knew that running DLSS, even on PC, at 540p will have issues. ~900p is the lowest input resolution where DLSS actually gets good results. so again, the expectation when running at 540p or 720p is that it looks imperfect.
This expectation should apply elsewhere too. Even at higher res DLSS is still "imperfect".
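For reference, the resolutions being thrown around here map onto the usual upscaler ratios; a quick worked computation (widths assume 16:9, and the mode labels are the standard DLSS scale factors):

```python
# Worked scale factors for the resolution pairs mentioned above (16:9 widths).
pairs = {
    "960p -> 1440p (Quality, 1.5x per axis)":    ((1707, 960),  (2560, 1440)),
    "1440p -> 2160p (Quality, 1.5x per axis)":   ((2560, 1440), (3840, 2160)),
    "720p -> 1440p (Performance, 2x per axis)":  ((1280, 720),  (2560, 1440)),
    "540p -> 1080p (Performance, 2x per axis)":  ((960, 540),   (1920, 1080)),
}

for label, ((iw, ih), (ow, oh)) in pairs.items():
    ratio = (iw * ih) / (ow * oh)
    print(f"{label}: reconstructs from {ratio:.0%} of the output pixels")
```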
as to when DF thinks it shouldn't be used,

that is when using the currently Switch 2 exclusive "tiny" DLSS that isn't even attempting to reconstruct the image in motion it seems.
Why do you think that is, though? If it was just called "DLSS", instead of DF now giving it the name "tiny DLSS", would it have mattered to you?
as to when DF thinks it shouldn't be used,

every game that uses the more PC-like DLSS on Switch 2 looks surprisingly good given the low resolution it renders at. Street Fighter 6 can look cleaner at 540p internal res than the Series S at 1080p internal res. and every game that uses the "proper" DLSS would absolutely look worse without it.
"Proper DLSS"? The other games are still DLSS but it's constrained by dev choices on fixed hardware just like the Pro. Could you call PSSR in TLOU and Demon's Souls "proper PSSR" and the others something else? Not really, so why do you do this here? TLOU has great picture quality with PSSR thats better than native but people nitpicked edge artifacts on peach fuzz nonetheless. Now look at what you call "Proper DLSS" on SF6, same artifacts but even more pronounced on character hair. "Tiny DLSS" is still DLSS and other implementations of PSSR are still PSSR, simply the developers making choices based on their fixed targets and hardware with PSSR and DLSS.


the issue is not DLSS. the issue is the low internal resolution. DLSS makes these games look presentable in the first place. they don't look awesome, or even good at times... but they look better than they would without DLSS, and they look surprisingly good for a small handheld system running with less power than a Steam Deck.
Nah, it's both, because on fixed hardware and settings it's always a tradeoff. It's DLSS adding artifacts/issues and frametime cost. Do I lower res so that I can use DLSS, with the cost that it has, but suffer from poor-quality artifacts? Do I increase res and remove DLSS? Do I use "proper DLSS" but maybe get lower framerates? They're all choices made for you on console. Also, SF6 is a shit example because even the PS4 Pro looked better than the Series S version, which according to DF apparently uses "Capcom's custom internal upscaler". The Switch 2 version has to be docked to look better than the Series S as well.
meanwhile PSSR is now often an option you can turn off, because it can give worse results than fucking FSR2/3... that is expectations not fulfilled. and again, this can often be an issue the devs could maybe fix, but the end-user experience is disappointing.
That's complete horseshit. PSSR doesn't give results worse than FSR2/3. It gives much better results in terms of IQ. It may give similar artifacts to DLSS 3, and it may even lower the framerate below that of the base console depending on the dev choices made when adding it, but it will 100% give you better results than FSR 2 or 3. If you're talking about the toggle to turn off PSSR, the same thing is happening on Switch 2. People asked for a mode in Fast Fusion that turns off DLSS because the artifacts were atrocious. The developer had to really lower settings to boost the resolution slightly instead, though. You should be happy for options.

The low resolution of Jedi Survivor was brought up as being one of the reasons the IQ with PSSR left much to be desired. Same for Alan Wake 2. In addition, people have understandably much higher expectations for the PS5 Pro than they have for the Switch 2.
Yes, that is my point, but kevboard is now suggesting that the resolution was 900p or 1700p and so it was the upscaler that is "problematic". They do have higher expectations, but those expectations of a perfect upscale with zero artifacts, or of matching unknown PCs, would never be met. Especially if they blame the reconstruction technique for everything there, but when it comes to Switch pin it on other factors instead. The PS5 Pro will never match high-end PCs, or even lower-to-mid-range Nvidia GPUs, especially if RT features are enabled in the PS5 Pro version of a game. I would say expectations were high for Switch 2 from a lot of people too, because they were saying the upscaling was going to be the transformer model and better than the home consoles, but the overall result isn't very good. It's not the transformer model from DLSS 4 but the CNN one, and the upscale is still full of glaring artifacts and flickering, and is generally very unstable.

Crank up the res on this vid and look at the roof of this bridge

This is a result of low settings, but it's exacerbated by the upscale.
Now recall the dedicated threads to shit exactly like this that people were perpetuating and highlighting on the Pro and "PSSR".

 
1. That's a regurgitation of whatever purely marketing statements spokespeople from the companies involved have made to date. This or any other AI doesn't have any visibility into the issue beyond that, which means asking them about something that is NDAed and hidden from the public is totally pointless.
2. There is no mention of "algorithms", and it even explicitly states that "The PS5 Pro will receive a ... update", not PSSR itself. Which can mean literally anything from PSSR getting an update to PSSR being switched to FSR4 in everything but the name.
So you have no idea how PSSR works (me neither, unless maybe you're secretly working on it), and whatever we find on the net, even reworked by the AI, is just Sony marketing speak. Maybe not posting was a better answer than this conveniently vague nonsense bullshit, with all respect. And again, "algorithms" are also, most of the time, an assembly of different algorithms or logics; I don't know why we keep spreading this concept of an algorithm as a unique, inextricable "entity" that has to be totally replaced with new updates/versions, because that isn't exactly true to my humble knowledge. Or maybe most of the people here don't know what algorithms/logics really are, I'm starting to suspect. Not sure what is shocking or unbelievable about keeping some older logics in the future "new" PSSR version if some of them work well; arguing that it's just marketing speak is quite bizarre.
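To make the "ensemble of logics" point concrete, a purely hypothetical sketch; none of these stage names come from Sony, they're invented only to show how an update can swap one stage while keeping the rest:

```python
# Hypothetical composition only -- the stage names are invented, not Sony's.
Frame = dict  # stand-in for a frame plus its motion vectors, depth, etc.

def history_reproject(frame: Frame) -> Frame: return frame
def cnn_reconstruct_v1(frame: Frame) -> Frame: return frame
def cnn_reconstruct_v2(frame: Frame) -> Frame: return frame  # "the update"
def sharpen(frame: Frame) -> Frame: return frame

PIPELINE_V1 = [history_reproject, cnn_reconstruct_v1, sharpen]
# An "updated PSSR" could swap a single stage and keep the surrounding logic:
PIPELINE_V2 = [history_reproject, cnn_reconstruct_v2, sharpen]

def upscale(frame: Frame, pipeline) -> Frame:
    for stage in pipeline:
        frame = stage(frame)
    return frame
```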
 
Guys, why are you on about PCs and PS5s?
This is a Switch 2 = 15W BEAST celebration thread...

Upscaler wars are something else :messenger_grinning_sweat: I thought we'd at least get an interesting discussion about this 'tiny DLSS' in the thread, but it seems we got a few pages of folk bickering over the worst implementations of DLSS & PSSR in games instead.
 
TIL that there are fanboys of upscaling techniques battling it out online.
The internet is a wild place.
I've been calling this shit out for years. They used to bash reconstruction and upscaling techniques until (insert favorite brand) started doing it. Now they shifted to war over the tech itself like good little retards.
 
I've been calling this shit out for years. They used to bash reconstruction and upscaling techniques until (insert favorite brand) started doing it. Now they shifted to war over the tech itself like good little retards.
Upscaling before AI was a bit shite though, even DLSS1 was wank. Whole different ballgame now
 
TIL that there are fanboys of upscaling techniques battling it out online.
The internet is a wild place.
More like people can't accept verified and quantifiable facts and have to resort to mental gymnastics to fit them into their own narrative.

There is nothing magical about upscalers, everything about their end result (the final IQ) can be quantified with facts and numbers. We know which one is the best and which one is the worst given a set of verifiable metrics. This discussion is pointless.
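And if you want numbers rather than eyeballing slow-mo video, the basic metrics are trivial to compute; a minimal sketch using PSNR on matched captures (synthetic arrays here, with SSIM and friends being the usual next step):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a native capture and an upscaled one."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Example with synthetic data standing in for matched screenshots:
native = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
upscaled = np.clip(native + np.random.normal(0, 4, native.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(native, upscaled):.1f} dB")  # higher means closer to native
```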

I've been calling this shit out for years. They used to bash reconstruction and upscaling techniques until (insert favorite brand) started doing it. Now they shifted to war over the tech itself like good little retards.
Maybe because the technology evolved? DLSS1 wasn't even a temporal upscaler.
 
Upscaling before AI was a bit shite though, even DLSS1 was wank. Whole different ballgame now
That wasn't the reason I'm referring to. Some people still, to this day, do the random "1080p" drive-bys in only certain threads, as if they still dig deep for that resolution purity, when almost all of these techniques draw from that resolution internally, for integer scaling and the like, on 1440p-2160p displays. They never do the "1080p" drive-by in the other threads that are doing just that more often than not.

It's goofy shit no matter how you try and mental gymnastics around it.
 
More like people can't accept verified and quantifiable facts and have to resort to mental gymnastics to fit them into their own narrative.

There is nothing magical about upscalers, everything about their end result (the final IQ) can be quantified with facts and numbers. We know which one is the best and which one is the worst given a set of verifiable metrics. This discussion is pointless.


Maybe because the technology evolved? DLSS1 wasn't even a temporal upscaler.
What are these verifiable metrics, screenshot captures and slow-motion video? Those aren't exactly scientific "metrics", just eyeball impressions spread by YouTube channels.
 
This was what I meant before regarding the specular issues attributed to the upscaling. Nvidia has really good ray reconstruction and denoisers that help, but even that only goes so far when settings are lowered. Look at the specular issues on the Switch 2 version of Outlaws, for example. The flickering issues are everywhere, almost identical to what people were lambasting before. Look at the poor shadow/lighting fizzling in Cyberpunk in the DF video. Do you remember the CoD controversy? Do you remember Abby's cheek with its peach fuzz causing edge artifacts? Now look at the artifacts from hair in SF on Switch 2: much worse.

We already discussed this months ago when you showed me artifacts in Cyberpunk with PT. That was a problem with the DENOISER, NOT SUPER RESOLUTION. Those things are separate.

Games have different platform-agnostic denoisers in them, and all games with RT have some issues attributed to them. Now, all reconstruction techniques can either play nice with them or not, and it looks like FSR2/3, TSR, XeSS and DLSS don't produce more problems (the image is the same as native+TAA when it comes to problems), and then you have PSSR, which INTRODUCES MORE PROBLEMS.

Ray Reconstruction is another denoiser, based on ML, and you can switch it on or off. It looks better or worse than the standard in-game denoiser depending on the scene or elements. And this has nothing to do with DLSS Super Resolution; it's a separate thing. You can use DLSS SR with the standard in-game denoiser and have the same kind of issues as with native + TAA.
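For what it's worth, the separation being described looks roughly like this in schematic form; this is not any engine's real API, just placeholder functions showing where the denoiser sits relative to the super-resolution pass:

```python
# Schematic only -- function names are placeholders, not a real engine API.

def trace_rays(scene, internal_res):
    """Produces a noisy, low-sample RT signal at the internal resolution."""
    ...

def denoise(noisy_rt, mode="in-game"):
    """Either the game's own denoiser or an ML one (ray-reconstruction style)."""
    ...

def super_resolution(denoised_frame, output_res):
    """Reconstructs the final image from the internal resolution."""
    ...

def render(scene, internal_res, output_res):
    noisy = trace_rays(scene, internal_res)
    clean = denoise(noisy, mode="in-game")      # the denoiser can be swapped independently
    return super_resolution(clean, output_res)  # and so can the upscaler
```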
 
We already discussed this months ago when you showed me artifacts in Cyberpunk with PT. That was a problem with the DENOISER, NOT SUPER RESOLUTION. Those things are separate.
And this was what I was telling you regarding things attributed to PSSR.

Games have different platform-agnostic denoisers in them, and all games with RT have some issues attributed to them. Now, all reconstruction techniques can either play nice with them or not, and it looks like FSR2/3, TSR, XeSS and DLSS don't produce more problems (the image is the same as native+TAA when it comes to problems), and then you have PSSR, which INTRODUCES MORE PROBLEMS.

Ray Reconstruction is another denoiser, based on ML, and you can switch it on or off. It looks better or worse than the standard in-game denoiser depending on the scene or elements. And this has nothing to do with DLSS Super Resolution; it's a separate thing. You can use DLSS SR with the standard in-game denoiser and have the same kind of issues as with native + TAA.
DLSS 3 does not play nice with noise. So this idea that they don't "produce more problems" is false. I told you the solution and the cause when we were discussing Cyberpunk back then:

"This shimmering (or, alternatively, ghosting when using ray reconstruction) occurs when there aren't enough actual samples being calculated, so DLSS/RR can't make accurate predictions about the image. As a result, you get wrong/bad hallucinations in the output, like these shimmers."

The answer is to up the internal res/reduce the scaling factor to improve it but that's a little difficult to do with fixed hardware and no settings options on a console.

Edit: and just to add a small correction to your post, it was not Cyberpunk with path tracing. It was without PT, so separate RTGI.
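The "not enough actual samples" point can be put in rough numbers; a small sketch (the rays-per-pixel budget is invented for illustration) of how the real signal per output pixel falls off as the scaling factor grows:

```python
# Illustrative arithmetic only -- the rays-per-pixel figure is invented.
RAYS_PER_INTERNAL_PIXEL = 1  # typical low real-time budget

def samples_per_output_pixel(internal: tuple, output: tuple) -> float:
    iw, ih = internal
    ow, oh = output
    return RAYS_PER_INTERNAL_PIXEL * (iw * ih) / (ow * oh)

for name, res in {"1080p": (1920, 1080), "720p": (1280, 720), "540p": (960, 540)}.items():
    s = samples_per_output_pixel(res, (3840, 2160))
    print(f"{name} internal -> 4K output: {s:.2f} rays per output pixel")

# The less real signal per output pixel, the more the reconstruction has to
# guess -- hence shimmer, or ghosting once a heavier denoiser is layered on.
```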
 
And this was what I was telling you regarding things attributed to PSSR.


DLSS 3 does not play nice with noise. So this idea that they don't "produce more problems" is false. I told you the solution and the cause when we were discussing Cyberpunk back then:

"This shimmering (or, alternatively, ghosting when using ray reconstruction) occurs when there aren't enough actual samples being calculated, so DLSS/RR can't make accurate predictions about the image. As a result, you get wrong/bad hallucinations in the output, like these shimmers."

The answer is to up the internal res/reduce the scaling factor to improve it but that's a little difficult to do with fixed hardware and no settings options on a console.

Edit: and just to add a small correction to your post, it was not Cyberpunk with path tracing. It was without PT, so separate RTGI.

I made this little video just for you:

- Native 1440p with Path Tracing
- DLSS Performance (720p internal) JUST Super Resolution
- DLSS Performance with Ray Reconstruction



You see what happens when RR is turned on? While native and SR (from 720p!) produce NO ISSUES without RR.

Plus my old videos of comparison between PSSR and DLSS3 in Jedi:



 
You see what happens when RR is turned on? While native and SR produce NO ISSUES without RR.
I don't know what you're showing here exactly. Are you showing that DLSS's ray reconstruction feature introduces issues/artifacts? OK. Is that meant to be a good thing? I don't understand the point you're making. OK, enabling the ray reconstruction feature alongside DLSS introduces a similar artifact. You can't go in and fine-tune settings on a console to see which feature the dev used caused said issues, yet you specifically attribute every issue seen to "PSSR" while "DLSS Ray Reconstruction" is somehow not related to DLSS. That was the crux of our conversation ages ago too, one I'd rather not reignite.
 
I don't know what you're showing here exactly. Are you showing that DLSS's ray reconstruction feature introduces issues/artifacts? OK. Is that meant to be a good thing? I don't understand the point you're making. OK, enabling the ray reconstruction feature alongside DLSS introduces a similar artifact. You can't go in and fine-tune settings on a console to see which feature the dev used caused said issues, yet you specifically attribute every issue seen to "PSSR" while "DLSS Ray Reconstruction" is somehow not related to DLSS. That was the crux of our conversation ages ago too, one I'd rather not reignite.

RR is not related to DLSS Super Resolution; one is an (alternate) denoiser and the other reconstructs the image from a lower resolution. DLSS SR existed before RR, so any issues RR can potentially introduce have nothing to do with the SR algorithm.

DLSS SR, DLSS RR and DLSS FG are separate things, while PlayStation Spectral Super Resolution in itself adds artifacts to the screen that are not present with FSR4, DLSS or XeSS.
Ray Reconstruction introduces SOME artifacts, but at the same time it cleans the image up much better in other scenarios; it's a trade-off:

 
RR is not related to DLSS Super Resolution; one is an (alternate) denoiser and the other reconstructs the image from a lower resolution. DLSS SR existed before RR, so any issues RR can potentially introduce have nothing to do with the SR algorithm.
DLSS RR was introduced to get rid of noise because DLSS did not handle it very well especially at lower res. RR trades it for other artifacts though.
DLSS SR, DLSS RR and DLSS FG are separate things, while PlayStation Spectral Super Resolution in itself adds artifacts to the screen that are not present with FSR4, DLSS or XeSS.
How would you know this if you can't set specific settings on console? The developer is choosing settings for specific modes with PSSR/DLSS in mind. So if a noisy image is creating issues with DLSS, you get the shimmering/flickering artifacts, or they change the denoiser ("enable RR") and you end up with some other visible artifact. You get less flicker/shimmering but more ghosting instead, with the added benefit of using less RAM. You end up blaming "PSSR" because you don't distinguish between these things when it comes to console upscaling, because you can't change and trade your artifacts to your preference.
Ray Reconstruction introduces SOME artifacts, but at the same time it cleans the image up much better in other scenarios; it's a trade-off:
That's what I've been saying though. It's a tradeoff based on specs and the developer is choosing.
 
DLSS RR was introduced to get rid of noise because DLSS did not handle it very well especially at lower res. RR trades it for other artifacts though.

How would you know this if you can't set specific settings on console? The developer is choosing settings for specific modes with PSSR/DLSS in mind. So if a noisy image is creating issues with DLSS, you get the shimmering/flickering artifacts, or they change the denoiser ("enable RR") and you end up with some other visible artifact. You get less flicker/shimmering but more ghosting instead, with the added benefit of using less RAM. You end up blaming "PSSR" because you don't distinguish between these things when it comes to console upscaling, because you can't change and trade your artifacts to your preference.

That's what I've been saying though. It's a tradeoff based on specs and the developer is choosing.

Ray Reconstruction wasn't invented mostly for "noise" but for low-resolution reflections. Reflections will always be rendered at the internal resolution; RR makes them look much better.

DLSS itself doesn't add more noise compared to TAA, unlike PSSR vs. FSR2/3 or TSR in many games. That's the biggest difference here. PSSR can also add noise to games without RT; even simple SSAO is enough to show issues (like in Dragon's Dogma 2).

All reconstruction techniques work with the denoisers already used in games; only PSSR introduces new issues not present even with FSR3:



0.51s.
 
Ray Reconstruction wasn't invented mostly for "noise" but for low-resolution reflections. Reflections will always be rendered at the internal resolution; RR makes them look much better.

DLSS itself doesn't add more noise compared to TAA, unlike PSSR vs. FSR2/3 or TSR in many games. That's the biggest difference here. PSSR can also add noise to games without RT; even simple SSAO is enough to show issues (like in Dragon's Dogma 2).

All reconstruction techniques work with the denoisers already used in games; only PSSR introduces new issues not present even with FSR3:



0.51s.

PSSR doesn't add more noise lol, it's the lower resolution of the post-processing effects that causes the noise; the other upscalers just hide such issues better, or run those effects at a higher resolution. There's nothing new in these issues 😄 Jesus, it's unbelievable the absurdity you'll come out with just to discredit PSSR.
 

It's you who said PSSR adds new issues that never existed lol. The fuck are you even trying to argue with such absurdity. Which isn't even that true anyway: Star Wars and Avatar are better in stills, but when you're moving there are similar if not worse artifacts with FSR.
 

You yourself confirmed that you see the difference between PSSR and (in this case) TSR?



We don't have any analysis of Hell is Us for some reason, but the cost of PSSR is not that much different from TSR's. The resolution is probably lower with PSSR, but only a bit; does that explain the massive difference in the stability of the image?






And YouTube compression kills most of it. On a TV the difference is quite big. There are many, many games with similar issues, and now you deny it?
 
You yourself confirmed that you see the difference between PSSR and (in this case) TSR?



We don't have any analysis of Hell is Us for some reason, but the cost of PSSR is not that much different from TSR's. The resolution is probably lower with PSSR, but only a bit; does that explain the massive difference in the stability of the image?






And YouTube compression kills most of it. On a TV the difference is quite big. There are many, many games with similar issues, and now you deny it?

But PSSR isn't the cause; it has to do with the lower-resolution buffer of the post-processing effects applied to the foliage (indirect lighting, contact shadows, etc.). PSSR simply doesn't hide or fix it. Quite a difference from what you are trying to argue.
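For what it's worth, the "lower buffer" idea is easy to picture; a small sketch (numpy, synthetic data) of an effect computed at half resolution per axis and then upsampled, which is where the blockiness that an upscaler may or may not mask comes from:

```python
import numpy as np

def upsample_nearest(buffer: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsample, as a stand-in for a cheap effect resolve."""
    return np.repeat(np.repeat(buffer, factor, axis=0), factor, axis=1)

# Pretend this is an ambient-occlusion / contact-shadow term computed at
# half resolution per axis of a 1080p internal frame.
half_res_ao = np.random.rand(540, 960).astype(np.float32)
full_res_ao = upsample_nearest(half_res_ao, 2)

print(full_res_ao.shape)  # (1080, 1920): each AO value now covers a 2x2 block
# Whether that blockiness reads as "noise" in the final image depends on how
# well the reconstruction pass (TSR, PSSR, DLSS, ...) smooths or amplifies it.
```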
 
Ray Reconstruction wasn't invented mostly for "noise" but for low-resolution reflections. Reflections will always be rendered at the internal resolution; RR makes them look much better.
Look, we're getting nowhere with this and I really don't want to reignite a lengthy back-and-forth about what we've already discussed, but that is exactly why it was invented. It was invented to produce clearer images from low-sample (i.e. noisy) raytracing, and not just reflections either, but global illumination too. In fact, the idea that DLSS and RR are completely separate is untrue: you couldn't even enable DLSS Ray Reconstruction without enabling DLSS SR. You turned on RR and plain DLAA wasn't an option anymore. You had to use untested mods to do that, and I don't even know what the results of those were.

When you do per-pixel ray sampling pre-DLSS, you end up with bad flickering and shimmering; enable RR and you end up with that other artifact you showed, and ghosting, instead, but your reflections look better.
All reconstruction techniques work with the denoisers already used in games; only PSSR introduces new issues not present even with FSR3:



0.51s.

Again, not sure what you're showing here. PSSR modes on console aren't just a specific thing called "PSSR" that gets enabled; settings are tweaked by the devs for each mode.

With DLSS and DLSS RR you also get issues (i.e. artifacts) not present in FSR3, but with both PSSR and DLSS you get better image quality and/or a lower burden on the hardware for higher settings/fps. With DLSS you can fine-tune the features on PC if you hate a specific artifact and want to trade it for others, but you don't have that on Switch 2 and PS5 Pro, so you just get the implementations the devs have chosen for specific modes, or their one implementation. If they enabled PSSR and used some RT, you get whatever "DLSS settings" equivalents they chose, with the associated artifacts, which could include whatever "PSSR RR" is called, or their denoiser.

With DLSS, people go as far as changing the normally hidden presets because of artifacts they don't like, but they trade them for other artifacts like this:



On Switch, people are now calling the poor-looking implementations "tiny DLSS" instead of just poor implementations of DLSS. Why do you think different implementations of PSSR don't exist, and that it's just "PSSR caused this because it must suck" for anything on another machine that also doesn't let you trade off artifacts?
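All of which boils down to a handful of knobs the developer locks in per mode; a purely hypothetical sketch (these field names are made up, not from any SDK) of the choices a console player never gets to touch:

```python
# Hypothetical model of per-mode choices -- field names invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReconstructionConfig:
    upscaler: str         # "full-fat", "tiny", or "none"
    internal_height: int  # fixed, or the floor of a dynamic range
    denoiser: str         # "in-game" or "ML" (RR-style)
    preset: str           # stability-vs-sharpness bias of the model

# Two plausible console modes, locked in by the developer:
performance_mode = ReconstructionConfig("tiny", 540, "in-game", "stability")
quality_mode     = ReconstructionConfig("full-fat", 720, "ML", "sharpness")

# On PC the user can rebuild this struct; on console they only pick which
# pre-built one to run -- along with whichever artifacts come with it.
```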
 