Digital Foundry: Nintendo Switch 2 DLSS Image Quality Analysis: "Tiny" DLSS/Full-Fat DLSS Confirmed

Did I miss something, or did he never mention the Nintendo patent for a lightweight DLSS on Switch 2? That is the obvious news tie-in to this story, and I'm not sure how he didn't bring it up at any point in the video.

Anyway, very solid video by DF. It confirms some things we thought coming into this console, and also confirms that full CNN-based DLSS is there even in some of the heaviest games. The lite model is probably much better than Nintendo falling back on FSR1 spatial upscaling, like they seem to do too often. But it's definitely a far cry from the full model. Hopefully they improve that lite model over time.
 
These reconstruction techniques are the worst; everything looks like absolute shit. Jaggies, noise, pixel salad everywhere. Simply disgusting.
 
I like you. #Native4Life

In many games it's hard to tell the difference between DLSS4 and native with TAA. And you get 50%+ more performance.

Using native resolution when we have techniques as good as DLSS or FSR4 is not very smart.
 
In many games it's hard to tell the difference between DLSS4 and native with TAA. And you get 50%+ more performance.

Using native resolution when we have techniques as good as DLSS or FSR4 is not very smart.

While the techniques are improving, I don't agree. I understand why these methods exist, as ambition has pushed far beyond what hardware is capable of, but I'd rather address the ambition "problem".
 
While the techniques are improving, I don't agree. I understand why these methods exist, as ambition has pushed far beyond what hardware is capable of, but I'd rather address the ambition "problem".

Rendering 8,294,400 pixels (native 4K) was always a waste, and now, with really good reconstruction techniques, it's not worth it at all.
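
For scale, 8,294,400 is just 3840 × 2160. Quick sanity check of the raw numbers (plain arithmetic on my part, nothing from the video):

```python
# Rough pixel-count arithmetic: native 4K vs. native 1080p.
pixels_4k = 3840 * 2160      # 8,294,400 pixels
pixels_1080p = 1920 * 1080   # 2,073,600 pixels

print(f"4K:    {pixels_4k:,} pixels")
print(f"1080p: {pixels_1080p:,} pixels")
print(f"Native 4K shades {pixels_4k / pixels_1080p:.0f}x as many pixels per frame")
```

That 4x jump in shaded pixels per frame is exactly the budget these reconstruction techniques try to claw back.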



 
These reconstruction techniques are the worst; everything looks like absolute shit. Jaggies, noise, pixel salad everywhere. Simply disgusting.
Nah. The tech itself is fine and can look real good and way better than AA.

What sucks is modern developers' over-reliance on it. Optimizing the game? Nah, just turn on DLSS and leave us alone.
 
While the techniques are improving, I don't agree. I understand why these methods exist, as ambition has pushed far beyond what hardware is capable of, but I'd rather address the ambition "problem".
TAA usually looks like shite; that's why DLSS4 looks better even at a lower internal resolution.
 
Did their stance finally change?
Is the Switch 2 still a "PS4 level experience" for them?
Anticipation Popcorn GIF

So many crows for the HATERS to eat
They are haters because they dare to say the Switch 2 has raw power comparable to the PS4? Some of you really act like children. I have the Switch 2, and in terms of poly counts it looks closer to the PS4 than to the PS5. Not bad at all for a portable. Even if you look at the Star Wars Outlaws port, the environment poly count is definitely cut back; hard to notice in undocked mode, of course, but very apparent on a big screen.
 
Did their stance finally change?
Is the Switch 2 still a "PS4 level experience" for them?
Anticipation Popcorn GIF

So many crows for the HATERS to eat

It does still have raw raster and CPU power similar to a PS4's. The hardware hasn't changed, you know...
 
Rendering 8,294,400 pixels (native 4K) was always a waste, and now, with really good reconstruction techniques, it's not worth it at all.
Enjoy DLSS as much as you want for your fake 4K. I don't need 4K to begin with.

Native 1080p@60fps is more than enough as far as I'm concerned, and we don't get all the inherent problems of "best-guessing the missing pixels". The games shown here on Switch 2 that use DLSS have pixel salad everywhere; it's incredibly distracting, on top of making no sense from a visual standpoint. If other people don't see it, great, but I see it and can't stand it.

[screenshot]


🤮
 
Enjoy DLSS as much as you want for your fake 4K. I don't need 4K to begin with.

Native 1080p@60fps is more than enough as far as I'm concerned, and we don't get all the inherent problems of "best-guessing the missing pixels". The games shown here on Switch 2 that use DLSS have pixel salad everywhere; it's incredibly distracting, on top of making no sense from a visual standpoint. If other people don't see it, great, but I see it and can't stand it.

[screenshot]


🤮

Those issues are not caused by DLSS.
The dithering effect on the right side is an LOD transition, and it happens so close to the camera because DF had to use low settings, including draw distance, to match the Switch 2 settings.
If the Switch 2 were more powerful and could use better LODs, that transition would be farther away and barely noticeable.
And the shimmering effect on the Switch 2 is due to the low base resolution. Without DLSS, it would be even more noticeable. If the Switch 2 had a better GPU, it could render at a slightly higher resolution and give DLSS more input pixels to work with.
And of course, if the Switch 2 were capable of using DLSS with the transformer model, even better.
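
For context on how much a higher base resolution would actually buy, here's a quick sketch using the commonly published per-axis scale factors of the desktop DLSS presets (an assumption for illustration; the video doesn't confirm which factors Switch 2 games use):

```python
# Commonly published per-axis scale factors for the desktop DLSS presets
# (assumed here for illustration; not confirmed for any Switch 2 title).
presets = {
    "Quality":           2 / 3,
    "Balanced":          0.58,
    "Performance":       1 / 2,
    "Ultra Performance": 1 / 3,
}

for output_h in (1080, 2160):
    for name, scale in presets.items():
        in_h = round(output_h * scale)
        in_w = round(in_h * 16 / 9)
        print(f"{output_h}p output, {name}: ~{in_w}x{in_h} input")
```

A 540p-to-1080p setup like the one discussed later in the thread sits at the Performance-style 50% factor; a beefier GPU mostly just lets you climb one rung up that ladder.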
 
And the shimmering effect on the Switch 2 is due to the low base resolution. Without DLSS, it would be even more noticeable.
It would run at a higher res without DLSS. Thing is, it looks extremely distracting as is, and if it would still look like shit without DLSS, then maybe port something else? This is going to be a Doom 2016 festival all over again this gen.
 
It would run at a higher res without DLSS. Thing is, it looks extremely distracting as is, and if it would still look like shit without DLSS, then maybe port something else? This is going to be a Doom 2016 festival all over again this gen.

How would it run at a higher resolution without DLSS?
 
How would it run at a higher resolution without DLSS?
Because you are not wasting your resources on DLSS anymore. So it would run higher than the base resolution used for DLSS.

When you have to reduce the quality of your output to reserve resources for a process that will, somehow, try to improve the quality of that output, common sense has been defeated.
 
But DLSS doesn't run on the shaders, so it's not wasting resources.
If you disable DLSS, you still have the same compute available to render the game.
Wasted hardware. Put the money into a better CPU/GPU/memory rather than into stuff like this, which leads to poor picture quality anyway.
 
Wasted hardware. Put the money into a better CPU/GPU/memory rather than into stuff like this, which leads to poor picture quality anyway.

The Tensor Cores in Ampere account for around 10% of the GPU. On the Switch 2 SoC it's even less of the die, because there is also the CPU portion.
So maybe we could have a few more shaders. But ~10% more shader throughput only buys about 5% more resolution per axis, so instead of running at 540p to 1080p, it would run at something like 566p to 1133p.
Not a big difference. But then it would have to use TAA, with much lower temporal stability, more shimmering, more ghosting, and more artifacts.
Not to mention that without upscaling, the final output would be at that much lower resolution.
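
Quick back-of-the-envelope check of that trade-off, taking the ~10% area figure at face value and assuming pixel throughput scales linearly with shader count (both are simplifications):

```python
import math

# Reclaim ~10% of the GPU (the Tensor Cores) for extra shaders, and assume
# pixel throughput scales linearly with shader count (a simplification).
extra_throughput = 1.10                      # ~10% more shading power
per_axis_gain = math.sqrt(extra_throughput)  # per-axis resolution scales with sqrt(pixel count)

for base_h in (540, 1080):
    print(f"{base_h}p -> ~{round(base_h * per_axis_gain)}p "
          f"(+{(per_axis_gain - 1) * 100:.1f}% per axis)")
```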
 
But DLSS doesn't run on the shaders, so it's not wasting resources.
If you disable DLSS, you still have the same compute available to render the game.

I mean, it sorta does run on the shaders, at least since DLSS 2.0. (DLSS 1 actually relied almost entirely on the tensor cores to create a high-res image, which is why it was so shit.)

DLSS, at the end of the day, works essentially like TAAU or FSR2/3. The only step that doesn't run on the shaders is the algorithm that determines how the jittered, multi-frame-accumulated image data gets combined into the final reconstructed image.

TAAU, TSR and FSR2/3 do this through a hand-programmed algorithm, while DLSS does it with a machine learning model.
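
To make that concrete, here's a heavily simplified sketch of the accumulation step all of these share. This is not Nvidia's (or anyone's) actual code: the nearest-neighbour sampling and the fixed blend factor are stand-ins for the part each technique does differently.

```python
import numpy as np

def accumulate(history, low_res_frame, motion, jitter, scale, alpha=0.1):
    """One generic temporal-upscaling accumulation step.

    history:       (H, W) previous full-res output
    low_res_frame: (h, w) current jittered low-res render
    motion:        (H, W, 2) per-pixel motion in full-res pixels (prev -> current)
    jitter:        (2,) sub-pixel jitter offset applied this frame, in low-res pixels
    scale:         upscale factor, e.g. 2.0 for 540p -> 1080p
    """
    H, W = history.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)

    # 1) Reproject: fetch where each output pixel was last frame (nearest sample).
    prev_y = np.clip(np.rint(ys - motion[..., 1]), 0, H - 1).astype(int)
    prev_x = np.clip(np.rint(xs - motion[..., 0]), 0, W - 1).astype(int)
    reprojected = history[prev_y, prev_x]

    # 2) Sample the current jittered low-res frame at each output pixel.
    h, w = low_res_frame.shape
    cur_y = np.clip(np.rint(ys / scale - jitter[1]), 0, h - 1).astype(int)
    cur_x = np.clip(np.rint(xs / scale - jitter[0]), 0, w - 1).astype(int)
    current = low_res_frame[cur_y, cur_x]

    # 3) Combine: the step that actually differs. TAAU/TSR/FSR2 use hand-tuned
    #    heuristics (history rectification, clamping); DLSS has a learned model
    #    pick the per-pixel weights. A constant blend stands in for both here.
    return (1 - alpha) * reprojected + alpha * current
```

Steps 1 and 2 run on the shaders no matter which technique you pick; only step 3 is the part DLSS moves onto the tensor cores.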

So the initial multi-frame reconstruction is done on the shaders and has a similar cost to TAAU.
And if you actually follow Nvidia's recommendations on how to implement DLSS for the best possible output quality, there are several additional things that add to the shader workload, like rendering post-processing at full res and using the same mipmap/LOD bias as the target resolution would have.
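
That mip/LOD bias recommendation boils down to a one-liner. This is the commonly documented formula for upscalers in general, written out here as a sketch rather than quoted from any particular SDK:

```python
import math

# Texture mip/LOD bias so textures are sampled as sharply as they would be at
# the output resolution (negative bias = sharper mips, at some shading cost).
def mip_bias(render_width: int, display_width: int) -> float:
    return math.log2(render_width / display_width)

print(mip_bias(960, 1920))    # 540p -> 1080p: -1.0
print(mip_bias(1280, 3840))   # 720p -> 4K:   ~-1.58
```

Skip the bias (or the full-res post-processing) and you save shader time, which is presumably exactly the trade-off being made here.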

It seems even games that use the "full-fat" DLSS on Switch 2 don't do all of that, specifically to save on render time. Cyberpunk clearly doesn't render post-processing at the target resolution, for example.


Edit: oh, forgot another thing: your denoiser for SSR and RT needs to be better when using DLSS, because unlike TAA, DLSS doesn't smear the image as much. Many denoisers do less work because the devs expect the TAA to smooth over the remaining noise and jitter, so your denoiser has to do more work to look good with DLSS. That adds to the shader workload too.
If you don't do that, you get the same issue PSSR faces in many games, where shadows flicker and stuff like that.
 