Digital Foundry: Nintendo Switch 2 DLSS Image Quality Analysis: "Tiny" DLSS/Full-Fat DLSS Confirmed

Did I miss something, or did he never mention the Nintendo patent for a lightweight DLSS on Switch 2? That is the obvious news tie-in to this story, and I'm not sure how he didn't bring it up throughout this whole video.

Anyway, very solid video by DF. It confirms some things we suspected coming into this console, and also confirms that full CNN-based DLSS is there even in some of the heaviest games. The lite model is probably much better than Nintendo falling back on FSR1 spatial upscaling, which they seem to do too much. But it's definitely a far cry from the full model. Hopefully they tweak that lite model to be better over time.
 
These reconstruction techniques are the worst; everything looks like absolute shit. Jaggies, noise, pixel salad everywhere. Simply disgusting.
 
In many games it's hard to tell the difference between DLSS4 and native with TAA. And you get 50%+ more performance.

Using native resolutions when we have techniques as good as DLSS or FSR4 is not very smart.

While the techniques are improving, I don't agree. I understand why these methods exist, as ambition has pushed far beyond what hardware is capable of, but I'd rather address the ambition "problem".
 
While the techniques are improving, I don't agree. I understand why these methods exist, as ambition has pushed far beyond what hardware is capable of, but I'd rather address the ambition "problem".

Rendering 8,294,400 pixels (native 4K) was always a waste, and now with really good reconstruction techniques it's not worth it at all.
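For scale (my own arithmetic, not from the video), that's native 4K, and a typical DLSS performance-mode input is a quarter of it:

```python
# Pixel counts: native 4K vs. a typical DLSS "performance" input (1080p).
native_4k = 3840 * 2160        # 8,294,400 pixels per frame
perf_input = 1920 * 1080       # 2,073,600 pixels per frame

print(native_4k)               # 8294400
print(native_4k / perf_input)  # 4.0 -> the GPU shades a quarter of the pixels
```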



V5SZefxbDUh1ZQ7t.jpg
8g2iIslrAIhaOGfu.jpg
 
These reconstruction techniques are the worst; everything looks like absolute shit. Jaggies, noise, pixel salad everywhere. Simply disgusting.
Nah. The tech itself is fine and can look really good, way better than plain AA.

What sucks is modern developers' over-reliance on it. Optimizing the game? Nah, just turn on DLSS and leave us alone.
 
While the techniques are improving, I don't agree. I understand why these methods exist, as ambition has pushed far beyond what hardware is capable of, but I'd rather address the ambition "problem".
TAA usually looks like shite; that's why DLSS4 looks better even with a lower internal resolution.
 
Did their stance finally change?
Is the Switch 2 still a "PS4 level experience" for them?
Anticipation Popcorn GIF

So Many Crows eating for HATERS
They are haters because they dare to say the Switch 2 has raw power comparable to the PS4? Some of you really are like children. I have the Switch 2, and in terms of poly counts it appears closer to the PS4 than to the PS5. Not bad at all for a portable. Even if you look at the Star Wars Outlaws port, the environment poly count is definitely cut back; hard to notice in undocked mode, of course, but very apparent on a big screen.
 
Did their stance finally change?
Is the Switch 2 still a "PS4 level experience" for them?
Anticipation Popcorn GIF

So Many Crows eating for HATERS

It still indeed has raw raster and CPU power similar to a PS4. The hardware hasn't changed, you know...
 
Rendering 8,294,400 pixels (native 4K) was always a waste, and now with really good reconstruction techniques it's not worth it at all.
Enjoy DLSS as much as you want for your fake 4K. I don't need 4K to begin with.

Native 1080p@60fps is largely sufficient as far as I'm concerned, and we don't get all the inherent problems of "best guessing the missing pixels". The games shown here on Switch 2 that use DLSS have pixel salad everywhere, incredibly distracting on top of making no sense from a visual standpoint. If people don't see it, then great, but I see it and can't stand it.

Capture-d-cran-2025-10-05-193731.png


🤮
 
Enjoy DLSS as much as you want for your fake 4K. I don't need 4K to begin with.

Native 1080p@60fps is largely sufficient as far as I'm concerned, and we don't get all the inherent problems of "best guessing the missing pixels". The games shown here on Switch 2 that use DLSS have pixel salad everywhere, incredibly distracting on top of making no sense from a visual standpoint. If people don't see it, then great, but I see it and can't stand it.

Capture-d-cran-2025-10-05-193731.png


🤮

Those issues are not caused by DLSS.
The dithering effect on the right side is an LOD transition. And it happens so close to the camera because DF had to use low settings, including draw distance, to match the Switch 2 settings.
If the Switch 2 were more powerful and could use better LODs, that transition would be farther away and barely noticeable.
And the shimmering effect on the Switch 2 is due to the low base resolution. Without DLSS, it would be even more noticeable. If the Switch 2 had a better GPU, it could render at a slightly higher resolution and give DLSS more pixels to work with.
And of course, if the Switch were capable of using DLSS with the transformer model, even better.
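To illustrate the LOD point (a hypothetical sketch, not CDPR's actual system): transitions are picked by distance thresholds, and cutting the draw-distance budget pulls every transition closer to the camera, right where the dithered cross-fade becomes visible:

```python
# Hypothetical sketch of distance-based LOD selection. The "lod_bias" knob
# stands in for a reduced draw-distance setting: a lower value scales the
# thresholds down, so every transition happens nearer to the camera.
def select_lod(distance_m, thresholds=(20.0, 60.0, 150.0), lod_bias=1.0):
    """Return the LOD index for an object at the given distance (0 = best)."""
    for lod, t in enumerate(thresholds):
        if distance_m < t * lod_bias:
            return lod
    return len(thresholds)  # lowest-detail mesh

# Full settings: an object 40 m away still uses LOD 1.
print(select_lod(40.0))                # 1
# Halved draw-distance budget: the same object already dropped to LOD 2.
print(select_lod(40.0, lod_bias=0.5))  # 2
```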
 
And the shimmering effect on the Switch 2 is due to the low base resolution. Without DLSS, it would be even more noticeable.
It would run at a higher res without DLSS. Thing is, it looks extremely distracting as is, and if it would still look like shit without DLSS, then maybe port something else? This is going to be a Doom 2016 festival all over again this gen.
 
It would run at a higher res without DLSS. Thing is, it looks extremely distracting as is, and if it would still look like shit without DLSS, then maybe port something else? This is going to be a Doom 2016 festival all over again this gen.

How would it run at a higher resolution without DLSS?
 
How would it run at a higher resolution without DLSS?
Because you're not wasting resources on DLSS anymore. So it would run higher than the base resolution used for DLSS.

When you have to reduce the quality of your output to reserve resources for a process that will, somehow, try to improve the quality of your output, common sense has been defeated.
 
But DLSS doesn't run on shaders. So it's not wasting resources.
If you disable DLSS, the same compute capability is used to render the game.
Wasted hardware. Put the money into a better CPU/GPU/memory rather than stuff like this that will lead to poor picture quality anyway.
 
Wasted hardware. Put the money into a better CPU/GPU/memory rather than stuff like this that will lead to poor picture quality anyway.

The Tensor Cores in Ampere account for around 10% of the GPU. On the Switch 2 SoC it's less, because there is also the CPU portion.
So maybe we could have a few more shaders. And instead of running at 540p to 1080p, it would run at 594p to 1188p.
Not a big difference. But then it would have to use TAA, with much lower temporal stability, more shimmering, more ghosting, and more artifacts.
Not to mention that without upscaling, the output image would be at an even lower resolution.
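Quick sanity check on that (my own back-of-envelope): whether you spend the extra ~10% of compute on 10% more lines or on 10% more total pixels, the bump is small either way, assuming render cost scales with pixel count:

```python
import math

# Two ways to spend ~10% more shader throughput on resolution:
base = 540  # docked low end, in lines

axis_scaled = base * 1.10              # +10% on each axis (~21% more pixels)
pixel_scaled = base * math.sqrt(1.10)  # +10% total pixels

print(round(axis_scaled))   # 594
print(round(pixel_scaled))  # 566
```

Either way it stays a sub-600p image, just without the reconstruction.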
 
But DLSS doesn't run on shaders. So it's not wasting resources.
If you disable DLSS, the same compute capability is used to render the game.

I mean, it sorta does run on the shaders, at least since DLSS 2.0. (DLSS 1 actually relied almost fully on the tensor cores creating a high-res image, which is why it was so shit.)

DLSS, at the end of the day, works exactly like TAAU or FSR2/3. The only step that doesn't run on the shaders is the algorithm that determines how the jittered, multi-frame accumulated image data gets combined into the final reconstructed image.

TAAU, TSR and FSR2/3 do this through a hand-programmed algorithm, while DLSS does it with a machine learning model.
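The shared accumulation step, reduced to a toy one-pixel sketch (not any vendor's real code): each frame's jittered sample gets blended into a running history, and the part DLSS swaps out is only how the blend weight gets chosen:

```python
# Toy one-pixel sketch of temporal accumulation: blend each new jittered
# sample into a history value with weight alpha. TAAU picks alpha with a
# hand-tuned heuristic; DLSS picks it (per pixel) with an ML model.
def accumulate(history, new_sample, alpha=0.1):
    return (1.0 - alpha) * history + alpha * new_sample

# Noisy per-frame samples; the running history smooths frame-to-frame noise.
samples = [1.3, 0.8, 1.1, 0.9, 1.05, 0.95]
history = samples[0]
for s in samples[1:]:
    history = accumulate(history, s)
print(round(history, 3))
```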

So the initial multi-frame reconstruction is done on the shaders and will have a similar cost to TAAU.
And if you actually follow Nvidia's recommendations on how to implement DLSS for the best possible output quality, there are several other things that add to the workload of the shaders, like rendering the post-processing at full res and using the same mipmap/LOD bias the target resolution would have.

It seems even games that use the "full fat" DLSS on Switch 2 don't do that last part, specifically to save on render time. Cyberpunk clearly doesn't render post-processing at target resolution, for example.


Edit: oh, forgot another thing: your denoiser for SSR and RT needs to be better when using DLSS, because unlike TAA, DLSS doesn't smear the image as much. Many denoisers do less work because the devs expect the TAA to smooth over the remaining noise and jitter, so your denoiser has to do more work to look good with DLSS. That adds to the shaders' workload too.
If you don't do that, you get the same issue PSSR faces in many games, where shadows flicker and things like that.
 
The Tensor Cores in Ampere account for around 10% of the GPU. On the Switch 2 SoC it's less, because there is also the CPU portion.
So maybe we could have a few more shaders. And instead of running at 540p to 1080p, it would run at 594p to 1188p.
Not a big difference. But then it would have to use TAA, with much lower temporal stability, more shimmering, more ghosting, and more artifacts.
Not to mention that without upscaling, the output image would be at an even lower resolution.
As stated, wrong choice of game. If your console has a 1080p screen, you should output at 1080p. If that's too much for Cyberpunk, then release something else.

This will always look much better than anything lower-res that's upscaled, for whatever reason. It's as if people have forgotten how clean and crisp a native-res picture looks. Then again, people have forgotten how impeccable motion clarity was on CRTs, so it wouldn't be a first.
 
Enjoy DLSS as much as you want for your fake 4K. I don't need 4K to begin with.

Native 1080p@60fps is largely sufficient as far as I'm concerned, and we don't get all the inherent problems of "best guessing the missing pixels". The games shown here on Switch 2 that use DLSS have pixel salad everywhere, incredibly distracting on top of making no sense from a visual standpoint. If people don't see it, then great, but I see it and can't stand it.

Capture-d-cran-2025-10-05-193731.png


🤮

1080p you say... Ok.

Example, RDR2 - 1080p, native TAA vs. 540p reconstructed to 1080p using DLSS4:

KeJoSDY.png
eL0coaS.png


Guess which one is which.

4K versions:

cU2YqHy.jpeg
3KzV86n.jpeg
 
1080p you say... Ok.

Example, RDR2 - 1080p, native TAA vs. 540p reconstructed to 1080p using DLSS4:

KeJoSDY.png
eL0coaS.png


Guess which one is which.
You know quite well that all these issues (jaggies, artifacts, pixel salad) become extremely apparent when there is movement. Comparing screenshots is pointless.

Based on the screenshots alone, I see pixel salad in the bottom one, especially at the top of all the poles. The trees and grass don't look super convincing either. The puddle is also pixel salad. Pretty sure this looks quite bad in motion.
The top screenshot feels way too soft, which I dislike as well.

I don't care whether it's TAA or DLSS or whatever. If it leads to distracting visual artifacts, they're all just as bad as far as I'm concerned.

Didn't look at the 4K screenshots. 4K is pointless anyway.
 
As stated, wrong choice of game. If your console has a 1080p screen, you should output at 1080p. If that's too much for Cyberpunk, then release something else.

This will always look much better than anything lower-res that's upscaled, for whatever reason. It's as if people have forgotten how clean and crisp a native-res picture looks. Then again, people have forgotten how impeccable motion clarity was on CRTs, so it wouldn't be a first.

In docked mode, it runs at 540-1080p, then upscales it.
In portable mode, it runs at 360-720p, then upscales it.
Yes, the console is underpowered for a game like CP2077.
But DLSS helps it, with one of the best upscalers on the market.
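Those ranges are a dynamic resolution window. A minimal sketch of how such a controller behaves (hypothetical numbers, not CDPR's actual code): scale the internal height each frame to chase the frame-time budget, clamped to the reported docked window:

```python
# Minimal dynamic-resolution controller sketch (hypothetical numbers):
# scale the internal height toward the frame-time budget, clamped to the
# 540p-1080p window the docked mode reportedly uses.
def next_height(height, frame_ms, budget_ms=33.3, lo=540, hi=1080):
    # Cost ~ pixels ~ height^2, so scale height by sqrt(budget / actual).
    scale = (budget_ms / frame_ms) ** 0.5
    return max(lo, min(hi, round(height * scale)))

print(next_height(1080, 45.0))  # over budget -> resolution drops
print(next_height(700, 25.0))   # headroom -> resolution rises
```

In practice real controllers also damp the changes over several frames so the resolution doesn't visibly oscillate.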
 
In docked mode, it runs at 540-1080p, then upscales it.
In portable mode, it runs at 360-720p, then upscales it.
Yes, the console is underpowered for a game like CP2077.
But DLSS helps it, with one of the best upscalers on the market.
If people are happy with this, then good.
 
You know quite well that all these issues (jaggies, artifacts, pixel salad) become extremely apparent when there is movement. Comparing screenshots is pointless.

Based on the screenshots alone, I see pixel salad in the bottom one, especially at the top of all the poles. The trees and grass don't look super convincing either. The puddle is also pixel salad. Pretty sure this looks quite bad in motion.
The top screenshot feels way too soft, which I dislike as well.

I don't care whether it's TAA or DLSS or whatever. If it leads to distracting visual artifacts, they're all just as bad as far as I'm concerned.

Didn't look at the 4K screenshots. 4K is pointless anyway.

Have you used DLSS or FSR4 in games? DLSS4 produces a sharper image than native 1080p TAA in most games.

I have a video of it:



1080p native vs. 1080p from 540p using DLSS4.
 