Digital Foundry: Nintendo Switch 2 DLSS Image Quality Analysis: "Tiny" DLSS/Full-Fat DLSS Confirmed

Again you clearly haven't a clue how AI training works or how programming works. 🙄
Oh please, as if you're an expert. With the recent leak of the INT8 version of FSR 4, it's extremely obvious what the end goal is. It's always possible they are training an entirely new model distilled from both PSSR and FSR4, but a) what would be the point if FSR4 is already superior, and b) they are doing an awful lot of work on this INT8 conversion.

I notice you also completely avoided the DLSS question. Have you compared them yourself? If so, on which games?
 
I'm not an expert. But I studied programming years ago (to be fair, I trained as a programmer but never practiced), and the logic inside is an evolution of the past code. The file format means nothing for this kind of thing.
 
INT8 vs FP8 isn't just a matter of file format; it requires extensive training of a new model to create something like that. Basically joint distillation. Which seems rather pointless considering the INT8 version of FSR4 is almost done.

So I assume at this point you haven't done a comparison yourself. Because if you had, the differences are pretty damn obvious in a hell of a lot of games.
 
What do INT8 and FP8 have to do with the logic inside the algorithms? I don't get it. I've already said many times that DLSS4 is better at AA and denoising, but in motion PSSR seems sharper. Now, if you want to decide which one is superior, go ahead; I don't care about dick-waving competitions.
 
It's not upgraded to FSR4. It adds FSR4 logic to the existing AI algorithm.
Cerny said their implementation of FSR 4 would "take the same inputs and produce essentially the same outputs". So there should not be any meaningful difference in image quality between FSR 4 on the PC and Pro.
 
FP8 vs INT8 are two different ways of doing math at the most basic level: floating point (FP) vs integer (INT). Basically, while INT8 is faster, it is also less accurate. There are other differences, such as dynamic range and scaling, but they boil down to the same conclusion: even the INT8 version of FSR4 is not quite as good as the FP8 version (yet). Why this all matters is that the Pro has no acceleration for FP8, only INT8, while both FSR4 and DLSS4 are accelerated via FP8, because it is simply more accurate. This could also easily explain why PSSR has the issues it does: it is limited to INT8. And that is the case because the Pro uses a custom setup done by Sony, likely a consequence of the Pro being taped out in 2023. RDNA4 was far from ready, and the only thing AMD had was RDNA3, which is even worse: its INT8 is slower still, and it has no FP8 support at all. None of this will be an issue for the PS6, as that will obviously have FP8 support.
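To make the accuracy point concrete, here's a toy numpy sketch (my own illustration, not how either format is implemented in hardware; the FP8 part is a crude E4M3-style simulation that ignores exponent-range limits). INT8 quantization uses one fixed step size for the whole tensor, so small values get hit hard, while FP8's error scales with each value's magnitude:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "network tensor": mostly small values plus a few large outliers,
# which is typical for neural nets and exactly where INT8 struggles.
x = np.concatenate([rng.normal(0.0, 0.05, 10000), rng.normal(0.0, 1.0, 100)])

# INT8 (symmetric, per-tensor): one uniform step size set by the largest
# value, so the absolute error is roughly the same everywhere.
scale = np.abs(x).max() / 127.0
x_int8 = np.clip(np.round(x / scale), -127, 127) * scale

# FP8-ish (E4M3 keeps ~4 significant bits): the error is *relative* to
# each value's magnitude instead of uniform.
def fp8_round(v, mantissa_bits=3):
    m, e = np.frexp(v)                  # v = m * 2**e with 0.5 <= |m| < 1
    step = 2.0 ** -(mantissa_bits + 1)  # leading 1 + 3 explicit mantissa bits
    return np.ldexp(np.round(m / step) * step, e)

print("INT8 mean abs error:", np.abs(x - x_int8).mean())
print("FP8  mean abs error:", np.abs(x - fp8_round(x)).mean())
# The many small values dominate the tensor: INT8's fixed step clobbers
# them, while FP8 keeps them within a few percent relative error.
```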
 

Look at this part of the video, at 1:28:24: [embedded video]

So far you talk a lot but provide no evidence. The people who compared PSSR to DLSS themselves are all wrong, DF is wrong, other YouTubers are wrong, and YOU are right?
 
But it's not efficient. It's using Samsung's 8nm node.
One that even in 2020 was well behind TSMC's N7.
The Switch 2 SoC should have been made with something more recent. Such as TSMC's N5 or maybe even N3.
That would have allowed for lower power usage, better temperatures and the chip could have a more powerful GPU.
It's much more efficient than the competition by a significant margin. Could it have been better with a more advanced node and a newer architecture? Absolutely. But it says a lot about AMD that an old-ass arch running on 8nm is embarrassing their significantly more expensive, larger mobile APUs with higher TDPs. AMD's mobile APUs are woefully designed: they keep pushing compute for negligible gains while introducing almost no bandwidth-efficiency gains or innovations, meaning they're horribly bandwidth-starved. So in the actual handheld market, crazy as it is, the tech in the Switch is still best in class in terms of balance and efficiency, and honestly even results.

It doesn't help that AMD has no ML-based reconstruction in 2025 on its mobile APUs, in a market where the tech is a requirement more than a feature, meaning the aged tech in the Switch ends up humiliating AMD's $800+ APUs in actual results. Strix Halo is their only really decent mobile offering, but it's still RDNA 3.5, meaning no ML reconstruction tech. It's also designed for AI-heavy CPU workloads and is an absolute monster on the CPU side, with good gaming performance as a bonus rather than its main focus, and it needs at least 45W to deliver a good output-to-TDP ratio.
 

It probably is 8nm because it was super cheap to make and there was no competition for the production lines.
But how crazy is it that that outdated Nvidia chip is still vastly more efficient than anything AMD can produce :pie_invert:

It's too bad Microsoft and Sony already committed to AMD for next gen, before that whole Nvidia + Intel deal happened. Imagine a next-gen console with an Nvidia GPU 🤤
 
Even better: imagine a Switch 2 Pro using 4N or the equivalent Intel process. It could legitimately be a monster; it almost makes me think Nintendo is planning for it.
 
They are all TAAU-based, from what I know.

Yes... and no?

That's like saying all upscalers are a hack of supersampling when it comes down to it, while ignoring how they resolve the problem. The whole reason all these upscaler variants exist is how they resolve those problems.

All modern upscalers start with super-resolution upscaling.

They all use motion vectors, buffer sub-pixels, and correct themselves with frame N+1. That's where the similarities end.

Because supersampling is too expensive performance-wise, they do a temporal average of it instead, guided by motion vectors; they all share that baseline. With motion vectors you'll have a bunch of gaps filled naturally by the next frame, almost as completely as checkerboard interlacing, but fine geometry and complex lighting/denoising still have problems with this shortcut.

They blend information from multiple frames. But it's not perfect. Below is a mask of the detected patterns where the image failed to reconstruct from one sample per pixel:

[Image: mask showing where reconstruction failed]


Fixing what's within that mask is why you have TAA/TSR/FSR/DLSS/XeSS/etc. The game gives you subpixel information and motion vectors; from there you can let your imagination run wild in how to resolve the problems.

Now, how the upscalers approach refining these errors is where the algorithmic ones like TAA / FSR 2/3 and the ML upscalers like DLSS and FSR4 diverge completely.

TAA is closer to checkerboarding. It has an algorithm that reconstructs the gap from a pixel center using surrounding information, including sub-samples from the new frame and a few fancy rotating jitter patterns to negate the jitter, but it's an algorithm that approaches the problem the same way no matter the gap. It has no understanding of what it's looking at; it does not change its approach for a mast or thin wires. Checkerboarding was more simplistic, interlacing frame pixels. Still, both of them are considered spatial-temporal supersampling algorithms.
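To illustrate what that fixed-rule resolve looks like, here's a minimal single-channel numpy sketch (my own toy, with nearest-pixel reprojection and a 3x3 clamp; real TAA does sub-pixel sampling and fancier history rejection). The point is that the exact same rule runs on every pixel, wire or wall:

```python
import numpy as np

def taa_resolve(curr, history, motion, alpha=0.1):
    """One TAA step: reproject history along motion vectors, clamp it to
    the current frame's local colour range (rejects stale samples after
    disocclusion), then blend. Assumes float images of shape (h, w) and
    motion vectors of shape (h, w, 2) in pixels."""
    h, w = curr.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # 1. Reproject: fetch where each pixel came from in the previous frame.
    py = np.clip(ys - motion[..., 1].round().astype(int), 0, h - 1)
    px = np.clip(xs - motion[..., 0].round().astype(int), 0, w - 1)
    prev = history[py, px]
    # 2. Neighbourhood clamp: constrain history to the 3x3 min/max of the
    #    current frame so it can't drag in colours that no longer exist.
    #    (np.roll wraps at the image edges - fine for a toy.)
    lo, hi = curr.copy(), curr.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            n = np.roll(np.roll(curr, dy, axis=0), dx, axis=1)
            lo, hi = np.minimum(lo, n), np.maximum(hi, n)
    prev = np.clip(prev, lo, hi)
    # 3. Exponential blend: the "temporal average" of many jittered frames.
    return alpha * curr + (1.0 - alpha) * prev
```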

ML upscalers, and DLSS in particular, are literally the concept in this video, which started when Nvidia used ML to fix JPEGs: [embedded video]
ML is trained on the particular suspects that create problems, such as thin wires and so on. Unlike TAA, which applies a "dumb" algorithm, an ML upscaler recognizes what it is trying to fix and how it should look; it does not apply the same logic or pattern blindly to whatever it sees in the game. The more it is trained, the better it recognizes what it is trying to fix and what the result should look like.
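For contrast, here's what the ML variant boils down to, as a toy PyTorch sketch (nothing like the size, inputs, or training data of the real DLSS/FSR4 models): the network gets the same current-frame + reprojected-history inputs as the fixed-rule resolve above, but it's trained against supersampled ground truth, so it learns where plain blending fails, like those thin wires:

```python
import torch
import torch.nn as nn

class TinyUpscalerNet(nn.Module):
    """Toy stand-in for an ML resolve: learns the blend instead of fixing it."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, curr, reprojected_history):
        # Same inputs the fixed-rule resolve had, stacked as channels.
        return self.net(torch.cat([curr, reprojected_history], dim=1))

model = TinyUpscalerNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(curr, hist, supersampled_gt):
    # The label is a supersampled render, so the network is rewarded for
    # recognising the cases (wires, disocclusions) where blending fails.
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(curr, hist), supersampled_gt)
    loss.backward()
    opt.step()
    return loss.item()
```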
 
I remember a certain someone telling people that the upscaler on Switch 2 was going to be identical to $2000 PCs somehow because "it's DLSS".

Oh look, the optics doctor of « not small sweet spot » PSVR 2 fame 🤡, following James' lead on the forum.

Is DLSS on Switch 2 in its final form after less than 6 months? Do tell?

Even the fat DLSS CNN preset E used on Switch 2 is a better upscaler than PSSR.

It's a matter of time before a dev uses the transformer model. Why would they not? A laptop 2050 can use it, even downclocked. There are devs mentioning new DLSS models by year's end. Shush; your posts will age like milk, just like James'.
 
Shhh, you'll wake Karim

[GIF: Jurassic Park water]
 
Sure, I'll give you an example: [embedded video]

The lack of antialiasing on some edges is also something you had complained about regarding PSSR, but this is far worse on Switch 2.


You mean the simpler model showing aliasing on camera cuts? Switch 2 is a tiny console with minuscule power draw, and it can still run the full CNN model of DLSS (as long as it reconstructs to ~1080p).

PS5 Pro is a ~230W big-ass console with more ML power than the biggest AMD GPUs from 2020 and 2022. And yet you can run FSR4 on those and it looks better than PSSR...
PSSR is in an unacceptable state 11 months after launch.
 
Nobody said anything about power draw, just that there is worse artifacting on Switch. I don't know what you're ranting about now that I gave you the example you asked for. In case you missed it in the vid, it is not just camera cuts either.
 

Yeah, but this is a mobile device.

I thought we were talking about ML reconstruction on big machines - a more holistic view. And in that case, PSSR so far shows the worst results.
 
Well no shit it's a mobile device. But this is what you called "not true":

The Switch 2 isn't using DLSS 4's new transformer model and so largely has similar or even worse artifacts.
That's not true at all. Show examples.

So what was not true exactly?
When I show you examples of bad artifacts, you say it's a mobile device and rant about PSSR again.

It also shows that these artifacts are largely similar and present themselves when developers use lower resolutions and make choices players don't like in terms of image quality. An example is the poor artifacting and IQ people complained about in Fast Fusion, which led the developer to add a mode that turns DLSS off.
 

I thought you were talking about PC DLSS. This topic is about the Switch, but much of the discussion is more general, about ML upscalers.

Yeah - DLSS on Switch has issues not present in the PC version, because one of the versions developers use is a super-lite edition that isn't even available on PC. And DLSS3 beats PSSR any day; nothing has changed since November 2024 (other than DLSS4's appearance in February this year).
 
Again, I know I might sound repetitive, but the logic inside the algorithm is adapted to the hardware's math, sure; that doesn't mean you can't apply the same logic or extend it with a different type of calculation and a new algorithm. I don't know how to explain it to you any better. I find it surprising that some of you consider such things impossible. And from what I recall of the last leak, Sony said PSSR with FSR4 is not FSR4, but the "same" PSSR upgraded with the FSR4 algorithm.
https://www.tweaktown.com/news/1077...rade-in-2026-not-amd-fsr-4-support/index.html
 
You mean the simpler model showing aliasing on camera cuts? Switch 2 is a tiny console with minuscule power draw, and it can still run the full CNN model of DLSS (as long as it reconstructs to ~1080p).

PS5 Pro is a ~230W big-ass console with more ML power than the biggest AMD GPUs from 2020 and 2022. And yet you can run FSR4 on those and it looks better than PSSR...
PSSR is in an unacceptable state 11 months after launch.
Well, it's your opinion.
 
It's kinda sad that these days people seem to care less about the actual graphics than about the method used to upscale them to a certain resolution.
 
PSSR is completely incompatible with some games/engines unlike other ML reconstruction techniques.
All ML upscaling solutions are the same in terms of engine compatibility. The only thing required by all of them is the implementation of subpixel jitter, which is also a requirement for any TAA. If a game engine doesn't have that, none of the TAAU/ML solutions - or even plain TAA - would be compatible. PSSR isn't any different from the others in that.
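For anyone wondering what that one engine-side requirement actually involves, here's a minimal sketch of the usual approach (a Halton (2,3) sequence; the function names and the projection-matrix convention are illustrative, not any particular engine's API):

```python
def halton(index, base):
    """Low-discrepancy Halton sequence in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offset(frame_index, width, height, phase=8):
    """Per-frame sub-pixel offset, cycling so samples cover the pixel."""
    i = (frame_index % phase) + 1
    jx = halton(i, 2) - 0.5   # +/- half a pixel, x
    jy = halton(i, 3) - 0.5   # +/- half a pixel, y
    # Convert pixel units to clip-space units for the projection matrix.
    return 2.0 * jx / width, 2.0 * jy / height

# Each frame, the engine nudges the projection matrix by this offset and
# hands the same values to the upscaler so it can un-jitter the samples.
```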

There is also the leaked FSR4 INT8 model, which is close in quality to the FP8 model and runs on shader cores
INT8 runs on anything which can run INT8; the same goes for FP8 or any other math format. INT8 would run on the tensor cores on Nvidia h/w, for example, as their main SIMDs do not support that precision.

It's not upgraded to FSR4. It adds FSR4 logic to the existing AI algorithm.
This is not how it works. You can't "add logic" to a different neural network; you need to retrain it with the same inputs, at the same scale, and with the same model, at which point you're just recreating something which already exists (i.e. FSR4).
If I had to guess, I would expect the "upgraded" PSSR to basically be the FSR4 INT8 version, and the time it's taking to appear is essentially the time AMD is spending right now on creating it.
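A toy PyTorch sketch of what that retraining actually looks like: distillation, where a frozen "teacher" (a stand-in for an FP8 model) labels the data and a student is trained to match it. Both networks here are made-up placeholders, not the real FSR4/PSSR models:

```python
import torch
import torch.nn as nn

# Placeholder networks: a bigger frozen "teacher" and a smaller "student".
teacher = nn.Sequential(nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 3, 3, padding=1)).eval()
student = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(batch):  # batch: (N, 4, H, W) upscaler inputs
    with torch.no_grad():
        target = teacher(batch)          # the teacher's output is the label
    loss = nn.functional.l1_loss(student(batch), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# There is no step where you "paste logic" across: the student's weights
# are relearned from scratch, which is why it amounts to recreating the
# existing model.
```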

Carry on.
 
You think the updated PSSR won't use any of the past datasets, and that they'll completely throw them away rather than let the AI upscaler use them to make predictions or decisions on new data? Seems a waste to me; I thought that was the point of AI.
I asked ChatGPT:

1. Developers can intentionally retain parts of the previous logic when updating an algorithm. This is common when:
  • Backward compatibility is important.
  • Previous logic worked well in some scenarios.
  • You want to avoid regression (i.e., new logic breaking old functionality).
 

The input data pipeline will remain. But the upscaling algorithm itself is like a black box; it's not like they can cut and paste parts of it.
 
Well, it's your opinion.

Not only my opinion: me, Feel Like I'm On 42, several other posters who were able to compare it with other techniques, Digital Foundry, and even Mark fucking Cerny himself. Why are they making PSSR2 if the first version was so good?

I don't get why you or Three are so defensive about it at this point. WE KNOW they will upgrade it next year, and there is a chance it will finally catch up with the big boys (the ML power is there in the Pro). Why defend something that only works well in Sony games (1 or 2 a year?) and a few third-party games? Many other third-party games have problems - the most popular engine (UE5) has problems with PSSR (or PSSR has problems with UE5). How many third-party games are released per year vs. Sony games?

I think Sony fans should be thankful that PSSR is criticized, because otherwise we would be stuck with that broken version until the PS6 (which will support FP8).
 