Machine Learning-powered upscalers, are they better than "native" or not at the same output resolution?

Do you find machine learning-powered upscalers to be better than native at the same output res?

  • DLSS is better than native

    Votes: 47 74.6%
  • Native is better than DLSS

    Votes: 15 23.8%
  • FSR4 is better than native

    Votes: 9 14.3%
  • Native is better than FSR4

    Votes: 16 25.4%
  • PSSR is better than native

    Votes: 3 4.8%
  • Native is better than PSSR

    Votes: 18 28.6%
  • XeSS is better than native

    Votes: 3 4.8%
  • Native is better than XeSS

    Votes: 16 25.4%

  • Total voters
    63

Gaiff

SBI’s Resident Gaslighter
I was watching Hardware Unboxed's Q&A this morning and Tim, who I consider to be one of the more knowledgeable people when it comes to image and display technologies, stated that techniques such as DLSS and FSR4 are "generally better than native".

Of course, this is a bit of a loaded statement as "native" has its own AA such as TAA, TAAU, or other post-processing techniques to clean up or enhance the final resolve. There's also the base resolution to consider, so while I would broadly agree that 4K DLSS4 Quality is better than 4K+TAA, the same cannot necessarily be said for 1080p DLSS4 Quality vs 1080p+TAA.
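For reference, a quick back-of-the-envelope of what each preset actually renders at, using the commonly cited scale factors (approximate defaults; games can and do override them):

```python
# Back-of-the-envelope: approximate internal render resolution per DLSS preset.
# Scale factors are the commonly cited defaults (Quality ~0.667, Balanced ~0.58,
# Performance ~0.50, Ultra Performance ~0.333); individual games can override them.

PRESETS = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

def internal_resolution(out_w, out_h):
    """Return the approximate internal render resolution for each preset."""
    return {name: (round(out_w * f), round(out_h * f)) for name, f in PRESETS.items()}

for out_w, out_h in [(3840, 2160), (2560, 1440), (1920, 1080)]:
    print(f"Output {out_w}x{out_h}:")
    for preset, (w, h) in internal_resolution(out_w, out_h).items():
        print(f"  {preset:<17} -> ~{w}x{h}")
```

So 4K Quality lands around 1440p internally while 1080p Quality lands around 720p, which is why the two aren't really comparable starting points.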

So GAF, in your experience (and this includes PSSR), do you generally find the IQ of ML-accelerated upscalers to be better than native at the same output resolution? Obviously, DLAA does not count.

Note: The poll considers YOUR use case, not a specific resolution.
 
I always see pattern artifacting in DLSS. I really don't use upscalers unless in extreme cases.

edit:
I'd also like to add that the only time DLSS can be better than native is at sub-4K resolutions. At 4K, native wins hands down 99% of the time; only when games have awful post-process effects can DLSS clean things up a bit.
 
Artifacts are the big issue.
Machine learning IS getting better though, just not quite there yet.
 
DLSS Quality mode at a target resolution of 1440p or above is often better than native due to TAA being bad in those games.

Every id Tech game, for example. They all look objectively better with DLSS Quality mode compared to native.
They are sharper and have fewer artifacts in motion.
 
DLSS over native on the Switch almost any time (except for Fast Fusion).

Native over FSR4 anytime (it just isn't the same).

I've never used XeSS so I can't tell if it's good or not.
 
I prefer DLSS quality to native with TAA.

Can't speak about the others since I haven't used them.
 
Over time they will surpass native while costing much less performance, maybe half, simply because of the nature of AI.

Doesn't matter if it's FSR or DLSS.
 
AI upscaling is regression, and regression will always have misses in its predictions, so artifacts are inevitable.
MSAA gives the best result but is very costly.
 
None of the options. Native is better at sharpness but can have anti-aliasing issues that only TAA can deal with. DLSS is better at anti-aliasing overall but is a little blurry + has some artifacts, afaik. But I still use DLSS Quality because of the frame rate gains. 🤷‍♂️
 
AI upscaling is regression, and regression will always have misses in its predictions, so artifacts are inevitable.
MSAA gives the best result but is very costly.
MSAA was the best result for forward renderers. It completely breaks apart in deferred renderers, which are the norm, and the cost is prohibitive. MSAA also doesn't play nice with the much more complex image of modern games. In 2010, I would have agreed with you, but in 2025, no.
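To put rough numbers on the "prohibitive" part, here's an illustrative sketch of the G-buffer memory a deferred renderer would need if it stored every MSAA sample (the layout and byte sizes below are assumptions for the arithmetic, not any engine's real numbers):

```python
# Illustrative only: G-buffer memory for a deferred renderer when every MSAA
# sample is stored. The layout (four 8-byte render targets + 4-byte depth) is an
# assumption for the sake of the arithmetic, not any specific engine's layout.

BYTES_PER_SAMPLE = 4 * 8 + 4  # ~36 bytes per stored sample

def gbuffer_mib(width, height, msaa_samples):
    """Approximate G-buffer size in MiB when all samples are kept per pixel."""
    return width * height * msaa_samples * BYTES_PER_SAMPLE / (1024 ** 2)

for samples in (1, 2, 4, 8):
    print(f"4K G-buffer @ {samples}x MSAA: ~{gbuffer_mib(3840, 2160, samples):,.0f} MiB")
```

And that's memory alone; every stored sample also has to be written and later resolved, so the bandwidth cost stacks on top of it.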
 
I've been using a 32" QHD (2560 x 1440) 10-bit VA monitor running at native res and 165hz for the last few years. I had a 3080 and recently moved to a 9070 XT. I play single player games and generally aim for frame rates between 60 and 120fps, but I cap at 120 because I don't see any benefit above that for me personally.

What I saw with DLSS was that once we got to 3.0, I would run games in "Quality" mode, which usually meant it was rendering at 1440, then upscaling to 4K. The resulting image in most games was crisp, with either no detectable "jaggies" or almost none. If I ran DLSS in "Balanced" mode, that usually renders at 1080 and upscales to 1440. That would look fine, but I would notice a little more staircasing in some spots, and also a very faint checkerboarding effect on some textures - I noticed that most when playing Fortnite and running over grassy hills.

Once DLSS got to 3.5, those few visual artifacts just went away, although I typically played in "Quality" mode, so it was more like downsampling from a higher resolution. I haven't had the chance to use FSR4 in any games yet, and I have no interest in FSR3, so I've been playing games at native with my 9070 XT. It doesn't feel any different to me.

Overall, I would say the additional performance headroom we're getting from the best of these AI upscalers is a HUGE benefit for gaming. With what we've seen on DLSS4 and FSR4, I wouldn't complain a bit if these are just default in every game going forward. Having a rock solid 60fps or above is really important to me these days, so anything that helps get the frame rate higher with no perceptible drawbacks in quality or added artifacts is all good.
 
I always use DLSS Quality where games use the CNN model, or Performance where games use the transformer model (as it looks great to me), simply because I'd rather have more frames and I rarely see any artifacting that annoys me enough to turn it off.

I'm really happy with DLSS but it should never be used as a crutch for shit optimisation, which I kind of feel it is now.
 
It may be too soon to say better than native, but I run DLSS almost always even if my RTX 4070 doesn't need it to keep up the FPS.
Native can have jaggies and artifacts of its own.
Nowadays the artifacts almost always come from frame gen rather than upscaling.
 
I don't overthink it and use DLSS Performance with the transformer model whenever possible on PC. With the CNN model I go with Quality.
Native for me means no AA whatsoever. In that case - yes, it's better than native.
 
Everyone above already said it: when they say better than native they mean the overall image, i.e. aliasing + detail and sharpness.

They consider the tradeoff worth it: native's full detail but the game's subpar AA, versus a bit of detail loss with the inherent AA qualities of DLSS et al.

ML applied to the native res is always going to be better than everything else, but obviously that's going to cost even more than native res + the regular AA.

I still choose to play a lot of Pro games in the 30fps mode because the image quality difference is noticeable to me, but that's not DLSS, and the input res for the 60fps mode is usually much, much lower than 4K, so it's not really a fair comparison.
 
I don't even bother trying to run native on any game that has DLSS.
Straight Quality or Balanced with Override set to Latest.
 
DLSS4 seems better than native most of the time.

PSSR is better than native only in very rare scenarios.

FSR4 looks likely to be better than native more often than PSSR, but I haven't seen enough of it.

Know nothing about XeSS
 
I'd say both FSR4/DLSS4 are generally better at 1440p/Quality or higher. Not better in every metric, but enough that I'd choose it over the native image. XeSS and PSSR are a bit behind both of them at the moment.
 
FSR4 and DLSS4 are better than native, if native is using TAA.
TAA might eliminate shimmering, but it causes a ton of image problems, such as ghosting, disocclusion artifacts, and blurriness.
In stills, TAA might look decent enough, but in movement, it looks really bad.
 
It depends on what your native resolution is. If it's 1440p, the answer is "usually not", especially DLSS3 which is the default for most games these days. Specific aspects of the image could be, but not in its entirety. Don't get me wrong, the trade-off is still worth it (at least for most games) but it's certainly not universally better.
 
It depends on what your native resolution is. If it's 1440p, the answer is "usually not", especially DLSS3 which is the default for most games these days. Specific aspects of the image could be, but not in its entirety. Don't get me wrong, the trade-off is still worth it (at least for most games) but it's certainly not universally better.
Yeah, that's why I mention your use case in the OP. 99% of people will either use native or use an upscaler with their monitor's output res.
 
Depends on how well implemented the AA is for a game; an upscaler will generally be preferable to dodgy TAA. Most times I'm defaulting to DLSS these days anyway, but it's not really by choice. Having the PC hooked up to a 4K TV, the ol' 3090 can't quite keep up at that resolution these days but still does well at the 1440p of DLSS Quality. I'll take the odd artefact for a fauxK image.
 
DLAA at native is always going to be better than DLSS.

It's a trade off though. Can I run the game at 120+fps with DLAA? If not then I'll probably go with DLSS.
 
'Native res' is barely a thing these days. But DLSS4 often looks better at 1440p than some TAA solutions do at 4k.

It still doesn't match the sharpness of old-school rendering with MSAA/SSAA.
 
MSAA was the best result for forward renderers. It completely breaks apart in deferred renderers, which are the norm, and the cost is prohibitive. MSAA also doesn't play nice with the much more complex image of modern games. In 2010, I would have agreed with you, but in 2025, no.
MSAA as a specific technique - yes.
MSAA as an approach - no. You can use super resolution to render at a higher resolution and then downscale, essentially doing MSAA.
 
Just to give you guys an idea of how crazy magical the DLSS4 transformer model has gotten, check this out: a 5-year-old GPU (the 3090 launched September 2020, so in 2 months it will be 5 years old; it's frickin' older than the base PS5), and the resolution is 4K DLSS Ultra Performance, so only 720p native upscaled to 4K, yet look at that IQ. It's actually comparable to 1440p native, far above 1080p:

Of course, it's a combo of a specific game (CP2077) and the DLSS4 transformer model; it doesn't work anywhere near as well in many other games as of yet, but the potential for extreme greatness is there.
 
Native does not really exist; nowadays there is always a temporal component applied to the final output.

DLSS is way, way better than whatever is provided as "native".
 
MSAA as a specific technique - yes.
MSAA as an approach - no. You can use super resolution to render at a higher resolution and then downscale, essentially doing MSAA.
This is basically SSAA, which MSAA is a derivative of. SSAA is not viable because the cost is insane. MSAA is not viable because sampling parts of the image in deferred renderers is a different ballgame. They tried it; MSAA falls apart in complex scenes with a lot of subpixel details, specular highlights, etc. It's still available in GTA V I think, but looks like crap at anything below 8x.
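Rough, purely illustrative numbers on why the supersample-and-downscale route costs so much more than MSAA ever did in a forward renderer (the shading split in the sketch is an assumption, not a measurement):

```python
# Purely illustrative: relative frame cost of SSAA vs MSAA in a forward renderer.
# SSAA shades every sample, so everything scales with the sample count. MSAA
# shades roughly once per pixel and only multiplies depth/coverage/resolve work.
# The 80/20 shading split is an assumed ratio, not a measurement.

def ssaa_relative_cost(samples):
    return float(samples)  # shading, raster and memory all scale with samples

def msaa_relative_cost(samples, shade_fraction=0.8):
    # shading done once per pixel; the remaining ~20% of the frame scales with samples
    return shade_fraction + (1.0 - shade_fraction) * samples

for s in (2, 4, 8):
    print(f"{s}x: SSAA ~{ssaa_relative_cost(s):.1f}x frame cost, "
          f"MSAA ~{msaa_relative_cost(s):.1f}x")
```

Supersampling multiplies the whole frame cost, whereas MSAA only multiplied the coverage/depth portion, which is why MSAA was viable back then and SSAA mostly isn't.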
 
I have a 5090 and there isn't a game I play that has DLSS where I don't use it. That was even when I had a 1440p monitor. Now on 2160p, it's a permanent addition to everything.
 
Everything has a trade off in life. These are no different. 9070xt here for reference on a 4k LG C1 65" TV.

Personally, I'll generally drop to 1440p over 4K with FSR3 unless it's something like HFW, where it's really well implemented. XeSS has been a great alternative in games like CP2077, but ultimately the crispness of native is hard to replace with these upscalers.

FSR4 has been a different story. I have played the below with it active:
- COD BL6/WZ
- Remnant 2
- GTA 5
- Horizon FW

And yeah… it's been amazing. Anything at the Quality or Balanced setting has not shown any noticeable differences compared to native 4K in image quality. So I'd say it at least matches 4K, which in itself is a huge accomplishment.
 
How can anything be better than native in terms of iq?
Because it improves on it?

Like when you straight up see better details or clearer writing on a distant sign.

Now we can debate whether the dev's vision was to have that sign with a blurry line on it, but IQ/clarity-wise, it's a straight-up improvement over blurry TAA, and I bet half the time it's not really the artist's vision to have details hidden by imperfect IQ.

They still have flaws like ghosting, and frame gen is often broken before fixes, but it's hard to give up better clarity with better performance in MOST cases.

Even better when you use DLAA, which is native res + AI improvement.
 
Native on a high-DPI screen, with all post-processing and AA disabled.

gotta straight up raw dog those polygons, that's the way I roll
So an awful aliased image that shimmers like hell and is unstable.

How can anything be better than native in terms of iq?
Because native+AA has its fair share of issues depending on the AA method and post-processing. The absolute best is obviously native+DLAA.
 
The poll isn't clear enough, since native can refer to DLAA or FSR at 100% res too; you should replace Native with TAA there and mention presets for the upscalers.

I do think every single ML upscaler (except old ones like FSR1+2, DLSS1 and FSR3 in motion) is better than TAA at the same res.

From how I use them: at 4K I'd put DLSS4 at a 50% resolution scale above TAA at 100% resolution scale; at 1440p I'd put DLSS3 CNN Balanced a tiny bit ahead of TAA at 100% res, and with the transformer model I find even Performance to still be better.
 
The poll isn't clear enough, since native can refer to DLAA or FSR at 100% res too; you should replace Native with TAA there and mention presets for the upscalers.

I do think every single ML upscaler (except old ones like FSR1+2, DLSS1 and FSR3 in motion) is better than TAA at the same res.

From how I use them: at 4K I'd put DLSS4 at a 50% resolution scale above TAA at 100% resolution scale; at 1440p I'd put DLSS3 CNN Balanced a tiny bit ahead of TAA at 100% res, and with the transformer model I find even Performance to still be better.
I said DLAA does not count and I also specified your use case.
 