
NVIDIA DLSS 4.5 to feature 2nd Gen Transformer model and Dynamic 6x Frame Generation

So, what's the overall conclusion? Seems 4.5 is a bit of a flop. Not only is it even more demanding than 4, but it's not even universally superior and is a regression in several ways?

It doesn't sound like it should have come out of beta so soon.
 
So, what's the overall conclusion? Seems 4.5 is a bit of a flop. Not only is it even more demanding than 4, but it's not even universally superior and is a regression in several ways?

It doesn't sound like it should have come out of beta so soon.
Been using "recommended" at 1440p. I set it to Performance mode (M) initially, but if I see any fuckery I switch to Balanced (K) - that way I don't need to quit out of the game to change it.
 
I still use DLAA; preset M creates a lot of weird shit in the geometry. If you have a 4070 or a 5070 it will be worth it for future games, but right now? Not really.
 
So, what's the overall conclusion? Seems 4.5 is a bit of a flop. Not only is it even more demanding than 4, but it's not even universally superior and is a regression in several ways?

It doesn't sound like it should have come out of beta so soon.
I had some image quality issues with M, but they are gone when using L. I don't measure performance since I use frame gen anyway, but I've just switched to using L globally at 67-75% resolution, and overall it feels like an improvement across the board in every game I play, except the games that have ray reconstruction - that is definitely not worth disabling, as it does a lot more stuff.
 
Tried 4.5 profile M Quality mode in 4 games and none of them is even close to being oversharpened.

Do you people keep the sharpness slider on your monitor/TV at the max or something?

If anything, the difference between Performance and Quality is now super low, so there is almost no reason to play Quality - but not because of the sharpness.
There is no "oversharpening" in the models; their output does not have any of the artifacts which you would expect from oversharpening.
They are outputting a sharper resolve while simultaneously producing an image which LOOKS post-sharpened due to how it is resolved.
Generally any sharpening should be at 0 with M/L, and if a game doesn't provide such an option then you may get an "oversharpened" look due to sharpening running on top of an image which already has "sharpened" characteristics.
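To illustrate the distinction with a toy example (this is just a sketch of generic unsharp masking, not anything DLSS does internally): a post-process sharpener overshoots around edges and leaves halos/ringing, which is exactly the artifact the M/L output reportedly lacks.

```python
import numpy as np

# A 1D step edge (dark-to-bright), standing in for an object silhouette.
edge = np.array([0.2] * 8 + [0.8] * 8)

# 3-tap box blur as the low-frequency estimate, then an unsharp mask:
# sharpened = original + amount * (original - blurred)
blurred = np.convolve(edge, np.ones(3) / 3, mode="same")
sharpened = edge + 1.0 * (edge - blurred)

print(edge.round(2))
print(sharpened.round(2))
# Ignoring the zero-padded ends of the array, the sharpened signal dips below
# 0.2 and overshoots above 0.8 around the step - those overshoots are the
# halo/ringing artifacts of real oversharpening. A model that simply resolves
# the edge more accurately never leaves the 0.2-0.8 range.
```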

So, what's the overall conclusion? Seems 4.5 is a bit of a flop. Not only is it even more demanding than 4, but it's not even universally superior and is a regression in several ways?

It doesn't sound like it should have come out of beta so soon.
I'd argue that it is universally better, but it is also more demanding of the pre-upscaled image in terms of how well it is denoised and how well the pixel-grid jitter is implemented. Hence the per-game results, where sometimes it shines and sometimes the more forgiving K can be better.
 
So after that DF video and some discussion from the PSSR thread, I wanted to check whether DLSS 4.5 is a regression when it comes to stability in UE5/RT games. And yeah, that's the case, at least partially...

SHf: in-game DLSS3, DLSS4, DLSS 4.5 M, DLSS 4.5 L:

SH2: in-game DLSS3, DLSS4 changed to DLSS 4.5 M (mid-video), DLSS 4.5 L:

So indeed, preset M can show similar artifacts to what was seen in some games with PSSR - a big regression compared to DLSS4 (which is super stable) in this aspect. But preset L (which is omitted from most comparisons so far) is far more stable than M; maybe not on par with K, but better than DLSS3.
 
I used DLSS Swapper to replace Ghostwire Tokyo's DLL; it was still on the old 2.5 version. So far everything's working fine - it's using the latest 310.5.3 DLL and the K preset.

I thought it might break something since the game was running such an old version, but no issues so far. Hopefully it stays that way.
 
So after that DF video and some discussion from the PSSR thread, I wanted to check whether DLSS 4.5 is a regression when it comes to stability in UE5/RT games. And yeah, that's the case, at least partially...

SHf: in-game DLSS3, DLSS4, DLSS 4.5 M, DLSS 4.5 L:

SH2: in-game DLSS3, DLSS4 changed to DLSS 4.5 M (mid-video), DLSS 4.5 L:

So indeed, preset M can show similar artifacts to what was seen in some games with PSSR - a big regression compared to DLSS4 (which is super stable) in this aspect. But preset L (which is omitted from most comparisons so far) is far more stable than M; maybe not on par with K, but better than DLSS3.

Looks like K is the safe bet for UE5 games for now. K is already pretty good and miles ahead of every other upscaler so nothing wrong with that
 
So indeed, preset M can show similar artifacts to what was seen in some games with PSSR - a big regression compared to DLSS4 (which is super stable) in this aspect.
This is not a DLSS issue, it's an issue of the games which are being upscaled.
When a game is badly denoised/filtered, then any upscaler which is skewed towards current frame data (PSSR 1.0 is a prime example) will "reconstruct" the noise which is present in the original lower resolution.
A model which is doing multiframe history blending will "denoise" such cases, but it will also produce ghosting and detail loss on camera movement and object disocclusion.
So this is a no-win scenario - either option will produce issues in such games, and the problem must be fixed in the original renderer, not in the upscaler.

But preset L (which is omitted from most comparisons so far) is far more stable than M; maybe not on par with K, but better than DLSS3.
Preset L is less aggressive in discarding frame history and is thus better at handling temporal instabilities of the original frame. I've done a number of tests, and in almost all of them the flickering/noise issues of M were reduced with L.
It is still not as "accumulative" as K though, so the issues will remain, they'll just be less visible. Sometimes it also produces a somewhat softer image which still has "sharpening"-like elements in it. So not really a solution, just a bit of a band-aid for bad original renderers.

DLSS4 was a slam dunk over 3.7. 4.5 came out half-baked.
DLSS4 (model J) was not a slam dunk over model E.
Model J is also skewed towards current frame data, and it also produced flickering, which Nvidia had to solve by releasing the more "accumulative" model K, which in turn produced ghosting in areas where model E didn't.

Anyone tested 310.5.3 yet?
There are no changes stated for SR in .3, only for RR, and it's not about models.
 
DLSS4 was a slam dunk over 3.7. 4.5 came out half-baked.

Hilarious to see some reactions now, when 90% of the thread up until these last 2 pages was praising 4.5 as the second coming of Jesus, as if the differences were super huge, except people are fucking blind. I said it first and I'll say it again: if you're at 1440p or lower, which the majority of people are, 4.5 is pointless. That being said, to be fair, UE5's Lumen and probably other engines' ray-tracing effects rely heavily on internal resolution, so no matter how good DLSS is, the lower you drop the resolution the more flicker you'll get, and 4.5 on Quality is pretty much close to native in performance. No point using it. I think 4 (K) is the sweet spot for everyone in terms of +/-.

M/L are a bit flickery, I get that, but it's also 91 FPS vs. 175 FPS - that's a 92% gain in FPS.

That's because he's comparing Quality vs Ultra Performance/Performance; the internal resolution is abysmal, of course you'll get more frames. You can lower DLSS 4 to Performance as well and you'll gain the same amount of performance or even more.
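For reference, the arithmetic behind both posts above (the DLSS scale factors used here are the commonly cited per-axis ratios and are an assumption on my part, not something stated in the thread):

```python
# FPS gain quoted above: 91 -> 175 FPS.
print(f"{(175 - 91) / 91:.0%}")  # ~92% more frames

# Internal pixel-count gap behind it, assuming the usual per-axis scale
# factors: Quality ~2/3, Performance 1/2, Ultra Performance ~1/3.
quality, performance, ultra = (2 / 3) ** 2, (1 / 2) ** 2, (1 / 3) ** 2
print(f"Quality renders ~{quality / performance:.1f}x the pixels of Performance")
print(f"Quality renders ~{quality / ultra:.1f}x the pixels of Ultra Performance")
# ~1.8x and ~4.0x the shaded pixels per frame, which is where most of the
# frame-rate gap between those two comparisons comes from.
```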
 
That's because he's comparing Quality vs Ultra Performance/Performance; the internal resolution is abysmal, of course you'll get more frames. You can lower DLSS 4 to Performance as well and you'll gain the same amount of performance or even more.
Then I don't really understand the direct comparison of image quality.
If you want more frames, you have to make compromises somewhere.
And if you want less flickering, you inevitably have to give up 'a few' frames.
Nothing new
 
Then I don't really understand the direct comparison of image quality.
If you want more frames, you have to make compromises somewhere.
And if you want less flickering, you inevitably have to give up 'a few' frames.
Nothing new

In the bigger picture there is no difference between the same levels of 4 vs 4.5, especially when you play normally and don't stand still to stare at certain stuff. The difference is that 4.5 makes certain effects such as speculars slightly brighter (but again, not in all cases), fixes some shimmering and ghosting for certain effects, and has a more aggressive sharpness, at the cost of around 15% of your frames in some cases. If you did not care about said things or you never noticed them, and you play at resolutions at or lower than 1440p, I see no benefit to 4.5, especially since most games will be using UE5 with Lumen, and Lumen scales horribly with internal resolution. If you're at 4K, you can certainly use the new model. Other than that, it's not a generational leap or a massive difference, or anything hyperbolic as some users called it on this thread a few pages back. You have to remember, just as there are console fanboys, so are there Nvidia/AMD ones, who will praise the living shit out of their features. tl;dr: stick with K at 1440p or lower (Quality/Balanced), and M for 4K (Balanced/Performance) or L (Ultra Performance) at 4K only if you really need a boost of frames.
 
In the bigger picture there is no difference between the same levels of 4 vs 4.5, especially when you play normally and don't stand still to stare at certain stuff. The difference is that 4.5 makes certain effects such as speculars slightly brighter (but again, not in all cases), fixes some shimmering and ghosting for certain effects, and has a more aggressive sharpness, at the cost of around 15% of your frames in some cases. If you did not care about said things or you never noticed them, and you play at resolutions at or lower than 1440p, I see no benefit to 4.5, especially since most games will be using UE5 with Lumen, and Lumen scales horribly with internal resolution. If you're at 4K, you can certainly use the new model. Other than that, it's not a generational leap or a massive difference, or anything hyperbolic as some users called it on this thread a few pages back. You have to remember, just as there are console fanboys, so are there Nvidia/AMD ones, who will praise the living shit out of their features. tl;dr: stick with K at 1440p or lower (Quality/Balanced), and M for 4K (Balanced/Performance) or L (Ultra Performance) at 4K only if you really need a boost of frames.
Have you ever considered that the people praising it and calling it a massive difference are using 4K screens? Like, why would I care what 4.5 looks like on a 1080p screen or a 1440p screen? At 4K it's very impressive.
 
This is not a DLSS issue, it's an issue of the games which are being upscaled.
When a game is badly denoised/filtered, then any upscaler which is skewed towards current frame data (PSSR 1.0 is a prime example) will "reconstruct" the noise which is present in the original lower resolution.
A model which is doing multiframe history blending will "denoise" such cases, but it will also produce ghosting and detail loss on camera movement and object disocclusion.
So this is a no-win scenario - either option will produce issues in such games, and the problem must be fixed in the original renderer, not in the upscaler.


Preset L is less aggressive in discarding frame history and is thus better at handling temporal instabilities of the original frame. I've done a number of tests, and in almost all of them the flickering/noise issues of M were reduced with L.
It is still not as "accumulative" as K though, so the issues will remain, they'll just be less visible. Sometimes it also produces a somewhat softer image which still has "sharpening"-like elements in it. So not really a solution, just a bit of a band-aid for bad original renderers.


DLSS4 (model J) was not a slam dunk over model E.
Model J is also skewed towards current frame data, and it also produced flickering, which Nvidia had to solve by releasing the more "accumulative" model K, which in turn produced ghosting in areas where model E didn't.


There are no changes stated for SR in .3, only for RR, and it's not about models.

Interesting info, thanks.
Makes you wonder why they didn't create a PSSR version with more accumulation for UE5 and games with noisy RTGI.

TSR is still the best solution for Lumen artifacts - it's even more stable than DLSS4, but looks like shit overall, with a lot of grain, ghosting and a lack of sharpness...
 
calling it a massive difference are using 4k screens?

There's not a "massive" difference at 4K either. The Lumen issues apply there as well, on a minor scale, but still obvious when compared to 4.

I'm on 1440p, so can I ignore all of this stuff? Gives me headaches.
You can always not bother and just leave it at the default, which will use what the devs implemented for each game. The majority of gamers will do that anyway. Most never even noticed the ghosting/shimmering. The only thing you should keep your eyes on is ray reconstruction, whenever they decide to update that. That feature is in fact an actual massive difference, unlike 4 vs 4.5.
 
I'm on 1440p, so can I ignore all of this stuff? Gives me headaches.

Of course, or try it and see if you like it.

There's not a "massive" difference at 4K either. The Lumen issues apply there as well, on a minor scale, but still obvious when compared to 4.


You can always not bother and just leave it at the default, which will use what the devs implemented for each game. The majority of gamers will do that anyway. Most never even noticed the ghosting/shimmering. The only thing you should keep your eyes on is ray reconstruction, whenever they decide to update that. That feature is in fact an actual massive difference, unlike 4 vs 4.5.

For most games 4.5 produces better results. I tested many games with it and only saw bigger problems in SH2/SHf and GOW2018 for some reason (camera stuttering, image quality is excellent).
 
Of course, or try it and see if you like it.



For most games 4.5 produces better results. I tested many games with it and only saw bigger problems in SH2/SHf and GOW2018 for some reason (camera stuttering, image quality is excellent).

I've tested 3 games, CP2077, Avatar, and Outer Worlds 2, and aside from less ghosting I saw no benefits. The sharpness can be increased with K as well; no idea why some would call M's sharpness a "clearer picture", they're exactly the same when their sharpness levels are equal.

You'd have to squint/zoom really hard to maybe notice something even at performance levels.

 
I've tested 3 games, CP2077, Avatar, and Outer Worlds 2, and aside from less ghosting I saw no benefits. The sharpness can be increased with K as well; no idea why some would call M's sharpness a "clearer picture", they're exactly the same when their sharpness levels are equal.

You'd have to squint really hard to maybe notice something.



Unless games force DLSS sharpening and don't allow users to change it (there are some games like that), DLSS 4.5 doesn't add any artificial sharpness to the image. It just reduces TAA blur even more than DLSS4. 4.5 looks like a beta version of the upscaler; they introduced some regressions in some scenarios vs. 4, but overall it mostly produces better results.
 
DLSS 4.5 doesn't add any artificial sharpness to the image

Yes it does, even without me changing the DLSS sharpness levels. Avatar, for example, looked extremely oversharpened when I switched to M, and I had absolutely nothing else changed. Perhaps each engine handles sharpness differently. I honestly don't know, but it looked uglier in that scenario. Of course there are going to be examples where 4.5 looks better and vice versa. It's why Nvidia did not recommend sticking with 4.5 permanently, but picking whichever one you prefer.
 
Model M is more skewed towards current frame data and is considerably more aggressive at discarding frame history.
The result is a sharper resolve that is more coherent in motion, but noisier and less temporally stable.
It is the same at all quality levels; the reason why Nvidia "recommends" using K with Balanced/Quality/DLAA is performance.
This is not a DLSS issue, it's an issue of the games which are being upscaled.
When a game is badly denoised/filtered, then any upscaler which is skewed towards current frame data (PSSR 1.0 is a prime example) will "reconstruct" the noise which is present in the original lower resolution.
A model which is doing multiframe history blending will "denoise" such cases, but it will also produce ghosting and detail loss on camera movement and object disocclusion.
So this is a no-win scenario - either option will produce issues in such games, and the problem must be fixed in the original renderer, not in the upscaler.

Preset L is less aggressive in discarding frame history and is thus better at handling temporal instabilities of the original frame. I've done a number of tests, and in almost all of them the flickering/noise issues of M were reduced with L.
It is still not as "accumulative" as K though, so the issues will remain, they'll just be less visible. Sometimes it also produces a somewhat softer image which still has "sharpening"-like elements in it. So not really a solution, just a bit of a band-aid for bad original renderers.
Do you have a source for the claim that DLSS4.5 presets M & L are focused on current frame info? Seems to completely contradict the entire point of the transformer architecture used in DLSS4 and above, which accounts for temporal dependencies across multiple frames for resolving detail. The old convolutional architecture (CNN) used in DLSS3 and below was only focused on current frame detail for upscaling.

Maybe I'm misunderstanding what you're saying, and instead you're arguing that there's more weight/priority given to the current frame in DLSS4.5 presets M & L and less weight is given to temporal dependencies than what we saw with DLSS4 preset K. I'm just not quite sure why there would be a bigger hit to GPUs if this was the case. The temporal aspect of transformer models is what makes them so much heavier to run than CNN models.
 
So indeed, preset M can show similar artifacts to what was seen in some games with PSSR - a big regression compared to DLSS4 (which is super stable) in this aspect. But preset L (which is omitted from most comparisons so far) is far more stable than M; maybe not on par with K, but better than DLSS3.

Don't worry. Once Nvidia buys enough RAM for AI, DLSS 4.5 will be saved. 😁
 
Makes you wonder why they didn't create a PSSR version with more accumulation for UE5 and games with noisy RTGI.
Because it's not a solution. You're exchanging one set of artifacts for another set of artifacts. Ideally you'd want a separate model to be trained specifically for each game, but this isn't financially feasible or realistic - although console platforms may attempt it at some point.

Do you have a source for the claim that DLSS4.5 presets M & L are focused on current frame info?
The source is the results you're getting. Less ghosting and disocclusion means a higher weight for the latest rendered frame in the final resolve.

Seems to completely contradict the entire point of the transformer architecture used in DLSS4 and above, which accounts for temporal dependencies across multiple frames for resolving detail. The old convolutional architecture (CNN) used in DLSS3 and below was only focused on current frame detail for upscaling.
Both are TAAUs where NNs are used to choose which pixel, from which frame in the history set, should be used in the final reconstructed frame. The difference between CNN and TNN is in the NN's architecture; the latter allows for more complex "analysis" and can thus "guess" better which pixel should have what color in the final image.

Maybe I'm misunderstanding what you're saying, and instead you're arguing that there's more weight/priority given to the current frame in DLSS4.5 presets M & L and less weight is given to temporal dependencies than what we saw with DLSS4 preset K.
Yeah, exactly. The basics are the same in every TAAU; the difference is in how the pixels from the low-res frames in history are selected for the final full-resolution frame. And this difference determines whether a NN model (or an algorithm, in the case of FSR2/3) is trying to maintain detail in motion and avoid ghosting and blurring, or trying to hide the noise and use the history to create a more detailed static image, etc.
NNs are great for this because they can mix and match based on what's happening on screen, applying one logic to one set of pixels and a different logic to another set in the same frame. But overall they still tend to settle into some predominant behavior, as is apparent with PSSR and the differences between DLSS models.
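As a rough mental model of that tradeoff (a toy sketch only - real DLSS lets the network pick per-pixel weights rather than a single fixed blend factor): a temporal upscaler keeps a history buffer and blends it with the reprojected current frame, and the weight given to the newest frame is what trades sharpness and responsiveness against noise and flicker.

```python
import numpy as np

def blend_step(history, current, current_weight):
    """One step of a toy temporal accumulator (exponential moving average).

    A high current_weight behaves like the "M-like" description above:
    sharper and quicker to react, but noise/flicker in the source survives.
    A low current_weight behaves "K-like": noise averages out over frames,
    at the cost of ghosting and slower reaction to change.
    """
    return current_weight * current + (1.0 - current_weight) * history

rng = np.random.default_rng(0)
truth = np.full(16, 0.5)              # stand-in for an undersampled RT effect
hist_m = truth.copy()
hist_k = truth.copy()
for _ in range(30):
    noisy = truth + rng.normal(0.0, 0.1, truth.shape)       # per-frame render noise
    hist_m = blend_step(hist_m, noisy, current_weight=0.5)   # "M-like"
    hist_k = blend_step(hist_k, noisy, current_weight=0.1)   # "K-like"

print(np.std(hist_m - truth))  # larger residual noise -> shimmer/flicker
print(np.std(hist_k - truth))  # much smaller residual noise -> stable resolve
```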
 
DLSS4 was a slam dunk over 3.7. 4.5 came out half-baked.
I have an issue with DLSS 4 in third-person games. It has that weird trail/ghosting around the character. It happens if you stand against a grass/water background.


At 7:56, look around the characters on the left side. If you spin the camera around it's even worse. Granted, this is Ultra Performance mode, but I had it in Quality mode too.

It's great in first-person games though.
 
Highguard has very interesting DLSS menu:

 


NVIDIA DLSS SDK 310.5.3 is now available for all developers:

- Added CUDA application support to DLSS Ray Reconstruction

- Bug Fixes & Stability Improvements


Is it safe to use for non-developers?
 


NVIDIA DLSS SDK 310.5.3 is now available for all developers:

- Added CUDA application support to DLSS Ray Reconstruction

- Bug Fixes & Stability Improvements


Is it safe to use for non-developers?

Why not? You're not getting much vs 310.5.2 though and these are cut down to support only the latest transformer models again, in comparison to what's available through NvApp overrides.
 
So, what's the overall conclusion? Seems 4.5 is a bit of a flop. Not only is it even more demanding than 4, but it's not even universally superior and is a regression in several ways?

It doesn't sound like it should have come out of beta so soon.
It's not a flop, but it definitely didn't move the needle that much.

People praise Nvidia for allowing 4.5 on all RTX generations, but you really can't use them on 2000/3000 series cards, as the performance drop vs 4000 and 5000 is pretty significant.

I personally don't consider the performance cost worth it. I will likely stick to the K preset (4.0) and be happy.

Where DLSS 4.5 REALLY shines is how good it can make ultra performance mode (720p to 4K)
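For context on what "ultra performance mode (720p to 4K)" means in raw pixels - a quick sketch using the commonly cited per-axis scale factors, which are an assumption here rather than anything stated in the thread:

```python
# Internal render resolution for each DLSS quality mode at two common outputs,
# assuming the usual per-axis scale factors.
modes = {"Quality": 2 / 3, "Balanced": 0.58,
         "Performance": 1 / 2, "Ultra Performance": 1 / 3}

for out_w, out_h in [(3840, 2160), (2560, 1440)]:
    print(f"Output {out_w}x{out_h}:")
    for name, scale in modes.items():
        print(f"  {name:17} -> {round(out_w * scale)}x{round(out_h * scale)}")
# At a 4K output, Ultra Performance reconstructs from roughly 1280x720; at a
# 1440p output the same mode would start from ~853x480, which is why it's
# really only attractive when targeting 4K.
```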
 
It's great if you have a 4060 Ti / 5060 Ti 16GB and try to play at 4K, or a 4060/5060 at 1440p works too I guess.

If you already have great performance then DLSS4 Quality mode is still superior in most cases. Or 200 FPS goes brrrr.
 
It's great if you have a 4060 Ti / 5060 Ti 16GB and try to play at 4K, or a 4060/5060 at 1440p works too I guess.

If you already have great performance then DLSS4 Quality mode is still superior in most cases. Or 200 FPS goes brrrr.

Can attest to 4060ti 16gb 1440p being great too with 4.5. Other than when RR is in play, I'm always using Preset M Performance DLSS now, and the results and performance are fantastic. I'll even dabble with Preset L Ultra Performance if I'm chasing an extra 15fps or so, and usually that's acceptable too if the frames are really worth the chasing.

With RR I'll use Preset K Balanced DLSS, such as Cyberpunk 2077 with path tracing and 2x frame generation, and that's perfect for controller play in super heavy path tracing games where the RR would be used anyway.

As a general rule I'll use frame gen on any game that supports controller play, or on any mouse & keyboard game where my base FPS is around 80 or higher.
 
I just messed around a little bit with Preset M vs. K (DLAA vs. Quality vs. Performance) in Stellar Blade. From what I have read, NVIDIA is recommending sticking to Preset K when using DLAA or Quality mode.

In the case of Stellar Blade, however, I see a significant leap in image quality between M vs K. Especially with Eve's hair, I always see artefacts in her ponytail under Preset K. This happens with DLAA, Quality and, of course, Performance Preset. Preset M fixes this almost completely. Overall, the image appears much clearer (but not oversharpened). In some parts of the image K DLAA might have a small advantage, but overall M is just better if you ask me.

Preset K DLAA:

Preset M DLAA:
 
I just messed around a little bit with Preset M vs. K (DLAA vs. Quality vs. Performance) in Stellar Blade. From what I have read, NVIDIA is recommending sticking to Preset K when using DLAA or Quality mode.

In the case of Stellar Blade, however, I see a significant leap in image quality between M vs K. Especially with Eve's hair, I always see artefacts in her ponytail under Preset K. This happens with DLAA, Quality and, of course, Performance Preset. Preset M fixes this almost completely. Overall, the image appears much clearer (but not oversharpened). In some parts of the image K DLAA might have a small advantage, but overall M is just better if you ask me.

Preset K DLAA:


Preset M DLAA:

Yes, Nvidia recommends 4.0 for Quality/DLAA because there's a significant performance hit with 4.5 at those presets unless you're running a 5090. Quality wise it's a significant improvement though.
 
Yes, Nvidia recommends 4.0 for Quality/DLAA because there's a significant performance hit with 4.5 at those presets unless you're running a 5090.

Hmmm, interesting. I am running a 5080. So far I have not noticed a significant decrease in performance with Preset M. The FPS were at a comparable level in my case. I guess I will take another closer look at that in a moment.

Edit: Just ran some quick and dirty tests. And indeed, the general performance takes a hit with Preset M. However, in my case it seems like the 1% lows were pretty similar. With Preset K in Quality mode I saw my FPS jump around between 100 - 160 FPS (often being >130 FPS) in the Wasteland area in Stellar Blade. With Preset M it seemed like the max FPS I saw was somewhere around 140 - 150 FPS, while most of the time it was >120 FPS. Overall an FPS hit I can live with.
 


At 3:11: here's why presets K and E in Quality mode are still recommended over presets L/M in Performance.

I don't get this. Why does no video show preset M in Quality mode to actually compare M and K?

Why is M always in Performance and K in Quality?

Or is preset M for Performance only?

Honestly, I was doing fine understanding how this works until M and L came along. Now I am confused as fuck, and I am a PC gamer.

Can someone explain this to me? If I want the best image quality, which one is it?

I also hate shimmering and anti-aliasing lines. With preset K I don't see any of that shit, but in every video I am looking at, M looks like a downgrade.
 
If I want the best image quality, which one is it?
The short answer is "it depends on the game". Some will do better with M, some with K. L is a midpoint between M and K, but it is closer to M than K.

I also hate shimmering and anti-aliasing lines. With preset K I don't see any of that shit, but in every video I am looking at, M looks like a downgrade.
M could be a downgrade if the game has shimmering and aliasing w/o DLSS or TAA.
 
There is a performance hit regardless. How significant it is depends on the architecture: 20 and 30 series get -10 to -20%, 40 and 50 series are in the -5 to -10% range.

There's a performance hit, but it's significantly less impactful on the 5090 vs even the 5080, to the point where it makes it a no-brainer to implement it if you have a 5090.
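To put those percentages in perspective (a small illustrative calculation only - the specific numbers plugged in are just examples from the ranges quoted above, not benchmarks): the same relative FPS drop can also be read as a fixed millisecond cost added to each frame.

```python
def extra_frame_time_ms(fps_before: float, fps_drop_pct: float) -> float:
    """Extra milliseconds per frame implied by a given percentage FPS drop."""
    fps_after = fps_before * (1.0 - fps_drop_pct / 100.0)
    return 1000.0 / fps_after - 1000.0 / fps_before

# Example drops taken from the ranges mentioned above, evaluated at 120 FPS.
for label, drop in [("20/30 series, -15%", 15.0),
                    ("40/50 series, -7%", 7.0),
                    ("5090, -3%", 3.0)]:
    print(f"{label:20} -> +{extra_frame_time_ms(120.0, drop):.2f} ms per frame")
```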
 
I don't get this. Why does no video show preset M in Quality mode to actually compare M and K?

Why is M always in Performance and K in Quality?

Or is preset M for Performance only?

Honestly, I was doing fine understanding how this works until M and L came along. Now I am confused as fuck, and I am a PC gamer.

Can someone explain this to me? If I want the best image quality, which one is it?

I also hate shimmering and anti-aliasing lines. With preset K I don't see any of that shit, but in every video I am looking at, M looks like a downgrade.

Preset M can look worse than K, depends on the game.

They compare Quality to Performance probably because those are the presets Nvidia recommends. Nothing stopping you from using the M Quality profile...

The performance hit of M Performance is very similar to K Balanced, and depending on the game it can look better or worse.
 
Preset M can look worse than K, depends on the game.

They compare Quality to Performance probably because those are the presets Nvidia recommends. Nothing stopping you from using the M Quality profile...

The performance hit of M Performance is very similar to K Balanced, and depending on the game it can look better or worse.
Is there a video showing M on Quality vs K on Quality? I don't do Performance unless I am playing COD so I can hit 4K 240.

And is M Performance better than K Performance?
 
Is there a video showing M on Quality vs K on Quality? I don't do Performance unless I am playing COD so I can hit 4K 240.

And is M Performance better than K Performance?

Like above, it's game dependent. In some games K Performance will produce a more stable but definitely less sharp image than M Performance.

HU compared the same resolutions of M and K:

 
There is a performance hit regardless. How significant it is depends on the architecture: 20 and 30 series get -10 to -20%, 40 and 50 series are in the -5 to -10% range.
On a 5090 the hit is generally negligible, 2% or 4%. Sometimes there is even a positive impact on 50-series cards, so that's not strictly always true.

Anyway, it's funny to see the issues now being accepted by some as unrelated noise caused by the renderer, after I got pushback for saying that the upscaler makes underlying issues worse. I even mentioned that you get a tradeoff between that and ghosting all the way back at, I think, the DD2 release.
 