
NVIDIA DLSS 4.5 to feature 2nd Gen Transformer model and Dynamic 6x Frame Generation

Might need to do a new playthrough. This game with DLSS 4.5 is quite something.
peGJEjpJfM45Ucul.png
 
Tested in Cyberpunk at max settings (PT) at 1440p with RR turned off, in perf mode: huge IQ boost. Looks almost like native res to my eyes in motion. If you have Ray Reconstruction turned on, it replaces Super Resolution and any override you have with its own presets. RR hasn't had a 2nd gen update yet, so preset E is the latest RR preset, which is first gen based.
So do you think DLSS 4.5 is worth giving up RR during path tracing in Cyberpunk in performance mode? I'm going to check it out soon, but I'm interested in hearing others' experiences.
 
So do you think DLSS 4.5 is worth giving up RR during path tracing in Cyberpunk in performance mode? I'm going to check it out soon, but I'm interested in hearing others' experiences.
personally ray reconstruction makes the lighting look incredibly immersive and cohesive
i experimented a bit but had to go back to ray reconstruction
 
personally ray reconstruction makes the lighting look incredibly immersive and cohesive
i experimented a bit but had to go back to ray reconstruction
Thanks. I've been using the RR and the DLSS transformer model that's built into the Cyberpunk menu. Do you (or anyone else) know if overriding preset K and the latest RR will provide further benefits or does Cyberpunk already have the latest models built in?
 
Thanks. I've been using the RR and the DLSS transformer model that's built into the Cyberpunk menu. Do you (or anyone else) know if overriding preset K and the latest RR will provide further benefits or does Cyberpunk already have the latest models built in?
overriding the ray reconstruction in nvidia app to latest makes the game use something called preset D for ray reconstruction
i didn't see any difference in image quality with that though
 
Anybody else getting weird DLSS frame gen artifacts in Cyberpunk? Seems it's on all presets on my end. I tried FSR frame gen 3.1 and they disappear. It looks like you can see the frames generating, almost like screen tearing. I notice it in the sunny desert while on the roads on foot.
 
So do you think DLSS 4.5 is worth giving up RR during path tracing in Cyberpunk in performance mode? I'm going to check it out soon, but I'm interested in hearing others' experiences.
Nah I'd rather wait for them to release a 2nd gen version for RR, path tracing looks weird without RR.
 
How low can we go with L and ultra performance and still have a decent image? I saw some 540p which looked great. Can we do 480p? 360p?!
preset L looks mostly decent at 480p with 1440p output in terms of image clarity. but there are certain graphical issues. volumetric effects at long distances can appear blocky. there are probably more issues that would come up if I were to play with it more
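For anyone wondering where numbers like "480p at 1440p output" come from: DLSS modes render internally at a fraction of the output resolution. A minimal sketch below, assuming the commonly cited default DLSS scale factors (individual games can override these, and the exact Balanced factor varies slightly by source):

```python
# Rough sketch: DLSS internal render resolution per quality mode.
# Scale factors are the commonly cited DLSS defaults, not taken from this thread;
# games can and do override them.
SCALE = {
    "quality": 1 / 1.5,            # ~0.667 per axis
    "balanced": 1 / 1.724,         # ~0.580 per axis
    "performance": 1 / 2.0,        # 0.500 per axis
    "ultra_performance": 1 / 3.0,  # ~0.333 per axis
}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Return the approximate internal render resolution for a DLSS mode."""
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

# 1440p output in Ultra Performance renders internally at roughly 853x480,
# which is why the post above describes it as "480p".
print(render_resolution(2560, 1440, "ultra_performance"))  # (853, 480)
```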



ryzen 5600 is not happy though
y8Trm2i.jpeg

fwlbI1P.jpeg

WbiOJvt.jpeg
 
DLSS 4.5 is awesome for Star Citizen in VR. I don't have any screenshots to share here because it's VR, but it is noticeably sharper in the headset, with fewer artifacts and better performance with DLAA.


I also notice a difference with games like CP2077, but it's not as noticeable except with a side-by-side comparison. It's easier to notice the difference in VR, but that's because the VR headset is next to your eyeballs lol.
 
So do you think DLSS 4.5 is worth giving up RR during path tracing in Cyberpunk in performance mode? I'm going to check it out soon, but I'm interested in hearing others' experiences.
I don't think RR disables the latest DLSS 4.5, even though the DLSS overlay says the game uses an old DLSS preset (D) with RR activated. There's still a noticeable quality improvement compared to the standard D preset. The only downside is the additional performance cost, but I think the trade-off is worth it because, before RR, the image looked like oil paint and lacked fine details.
 
Tried 4K DLSS 4.5 ultra performance L preset
Looks amazing and crisp
getting 40-50 fps at a mix of Experimental, ultra and high

but I am still baffled at why the DLSS 4.5 "M" preset at 1440p performance mode was giving me fewer frames than DLSS 4.0 1440p balanced
 
Instead of the Discord link, click where it says "Snapshot". And then once you have the file, just follow the installation steps at the top of the page.
Thanks! I installed this HDR fix for MGS3 Delta, and as you said, the HDR is perfect now. MGS3 Delta with RenoDX HDR is easily one of the best-looking games I've played. With that HDR contrast and detail, the jungle in this game looks real. Even the biggest UE5 critic would be amazed by how good this game looks in HDR on a 4K OLED. I'm also happy that this game runs at 4K DLSS with the latest FG at 120 fps. I saw some ghosting previously, but that's been fixed.
Reno is fucking great, I used it in E33, C2077 and SHf and it fixes all the issues (E33 doesn't have HDR at all).
I tested Cyberpunk today. RenoDX fixed the black level, so now I can see pure black in dimly lit areas. However, in most locations the difference isn't that noticeable because this game has strong highlights, and my eyes adjust to the strong contrast, so unless I'm looking at totally black space I perceive a good black level. I think I even prefer the default color grading despite the raised blacks because it's easier to navigate dark locations. The transition between extremely dark and bright areas is also smoother, which makes it more cinematic (filmmakers often use similar color grading). However, that's just my preference, and I understand why people may like this fix.
 
I guess the idea is just to use P mode at 4K for everything going forward. Need to make sure the tensor cores themselves don't become the bottleneck to rendering though, with increasing load from DLSS and then frame gen.
 
Yeah, 4.5 looks pristine in RDR2.
Do you have to play in ultra performance mode to use 4.5?

And in your opinion, is balanced transformer 4.0 every pixel as good as CNN quality, or better? Or does CNN quality only get beat by transformer quality?
Like I want to save some frames going with balanced, but not if the IQ suffers, so I end up always choosing quality no matter what.
 
This is 2nd gen Transformer tech from Nvidia, while AMD is (still) using CNNs. By the time PS6/Magnus launch, Nvidia will be several generations ahead on the Transformer tech. It seems there will be a huge performance delta between modern Nvidia GPUs and next gen consoles. The tech the consoles offer should last a minimum of 6 years, but I am not sure how this will pan out this time considering the obsolete tech.
 
This is 2nd gen Transformer tech from Nvidia, while AMD is (still) using CNNs. By the time PS6/Magnus launch, Nvidia will be several generations ahead on the Transformer tech. It seems there will be a huge performance delta between modern Nvidia GPUs and next gen consoles. The tech the consoles offer should last a minimum of 6 years, but I am not sure how this will pan out this time considering the obsolete tech.

That is incorrect. FSR4 uses a mix of CNN and Transformer models.
 
This is 2nd gen Transformer tech from Nvidia, while AMD is (still) using CNNs. By the time PS6/Magnus launch, Nvidia will be several generations ahead on the Transformer tech. It seems there will be a huge performance delta between modern Nvidia GPUs and next gen consoles. The tech the consoles offer should last a minimum of 6 years, but I am not sure how this will pan out this time considering the obsolete tech.
Transformer models are not inherently better than hybrid models. Most current image projects are moving to hybrid models because they offer better data efficiency and performance than pure transformer or CNN approaches. If we are talking purely about tech, Nvidia is using the older approach versus AMD's newer hybrid one. DLSS4 being better is probably down to better training, experience and more computational power. The transformer is not the reason.
 
Transformer models are not inherently better than hybrid models. Most current image projects are moving to hybrid models because they offer better data efficiency and performance than pure transformer or CNN approaches. If we are talking purely about tech, Nvidia is using the older approach versus AMD's newer hybrid one. DLSS4 being better is probably down to better training, experience and more computational power. The transformer is not the reason.
That's simplifying a partial truth into something it's not necessarily.

Calling transformer "old tech" is like calling a jet engine "old" because it uses the same physics as a propeller.

The claim that "hybrids are newer/better" is true for static image generation (Stable Diffusion) where the goal is data efficiency. But for realtime graphics, Nvidia's use of a pure transformer in DLSS 4 is about solving a temporal stability ceiling that CNN hybrids struggle to break through.

Also, the hybrid "isn't necessary" for Nvidia because of the simple fact of tensor cores. FSR is platform agnostic and is dependent on compatibility and speed "without" tensor cores (AMD's AI acceleration, from what I understand, is baked into the compute units, thus not working as independently), and that's where a CNN or hybrid solution is a necessity.


Disclaimer: Not saying FSR is bad. That's not my point at all. And for that matter, in the future when specialized hardware is no longer needed to gain the upper hand, DLSS and FSR will combine into one shared standard.
 
how does DLSS 4.5 4K ultra performance compare to DLSS 4.0 performance?
It does look much worse. The image no longer resembles true 4K, and it blurs during motion. There's also noise in games that use real-time GI (UE5 games). However, even Ultra Performance offers better image quality than UE5 games on the PS5, so I think some people may actually use DLSS UP. From a normal viewing distance, Ultra Performance will look good.
 


Poorly done test. It assumes the M preset is for use in Performance mode, and even shows Nvidia's documentation.
He decides to test in Quality mode, for no apparent reason, thinking it would scale the same way in Performance mode.
And then he adds a pinned comment saying "oops, the performance scaling is actually different in Performance mode".
 
Poorly done test. It assumes the M preset is for use in Performance mode, and even shows Nvidia's documentation.
He decides to test in Quality mode, for no apparent reason, thinking it would scale the same way in Performance mode.
And then he adds a pinned comment saying "oops, the performance scaling is actually different in Performance mode".
Lol wow, they had one job.
 
Poorly done test. It assumes the M preset is for use in Performance mode, and even shows Nvidia's documentation.
He decides to test in Quality mode, for no apparent reason, thinking it would scale the same way in Performance mode.
And then he adds a pinned comment saying "oops, the performance scaling is actually different in Performance mode".
What, even I wouldn't make that type of mistake. And this is his job...
 
How hard is it to use?

Download ReShade with add-on support

Get the RenoDX mod for the specific game: E33, for example

After installing ReShade, drop the mod files into the main game folder (where the .exe is; ReShade is installed there as well)

p0Jevxo.jpeg


In game you basically don't have to do anything, you can change mod settings (home button on keyboard) but default is already good:

g4euBVx.jpeg


Do you have to play in ultra performance mode to use 4.5?

And in your opinion, is balanced transformer 4.0 every pixel as good as CNN quality, or better? Or does CNN quality only get beat by transformer quality?
Like I want to save some frames going with balanced, but not if the IQ suffers, so I end up always choosing quality no matter what.

Suggestions are (you can use whatever you want):

Quality, balanced, DLAA - preset K (4.0)
Performance - preset M (4.5)
Ultra Performance - preset L (4.5)

L is the heaviest, so using it with higher pixel counts doesn't make sense.

Personally I would use preset M Performance for everything; it looks fucking great. But compared to old DLSS3, even preset K performance is better than DLSS3 quality in many aspects, and now preset M performance should be 100% better than the DLSS3 quality setting.

Of course it all depends on your GPU. I wouldn't suggest using 4.5 on Ampere and Turing; on those GPUs stick to preset J/K (DLSS4) - best quality/performance ratio.
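The suggestions above boil down to a simple lookup. A purely illustrative sketch (the mode and architecture names are my own labels, and this just encodes the poster's personal recommendations, not anything official from Nvidia):

```python
# Illustrative encoding of the preset suggestions above (one poster's picks,
# not an official Nvidia mapping).
PRESET_FOR_MODE = {
    "dlaa": "K",               # 4.0
    "quality": "K",            # 4.0
    "balanced": "K",           # 4.0
    "performance": "M",        # 4.5
    "ultra_performance": "L",  # 4.5, heaviest model
}

def suggested_preset(mode: str, gpu_arch: str = "ada") -> str:
    # The post suggests sticking to J/K (DLSS4) on Ampere and Turing,
    # where 4.5's heavier models cost too much performance.
    if gpu_arch in ("ampere", "turing"):
        return "K"
    return PRESET_FOR_MODE[mode]
```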

Preset M:

cmoe7Xo.jpeg
BH7h3Vp.jpeg
 
how does DLSS 4.5 4K ultra performance compare to DLSS 4.0 performance?
It does look much worse. The image no longer resembles true 4K, and it blurs during motion. There's also noise in games that use real-time GI (UE5 games). However, even Ultra Performance offers better image quality than UE5 games on the PS5, so I think some people may actually use DLSS UP. From a normal viewing distance, Ultra Performance will look good.

Preset K Performance mode is still better than L Ultra Performance, but the differences are much smaller than with the previous DLSS version. UP is now absolutely usable at 4K output (I wouldn't use it below that).
 
Of course it all depends on your GPU. I wouldn't suggest using 4.5 on Ampere and Turing; on those GPUs stick to preset J/K (DLSS4) - best quality/performance ratio.
1440p dlss performance and preset M would make sense for ampere GPUs above 3060 most likely



it looks somewhat better and you get the other improvements related to ghosting and such. if you were able to play with 1440p dlss quality and preset K, chances are you would still end up with higher performance with 1440p dlss performance and preset M

i can see differences when I zoom in and carefully look but it still looks decent
 
I haven't tried it for myself, but it would seem like I am better off just sticking with preset K for the time being on my 4090. I care most about getting framerates closest to my 240 Hz OLED refresh rate, so I already run many games at balanced and performance settings.

But then again, it's not like it costs money to try it out.
 
1440p dlss performance and preset M would make sense for ampere GPUs above 3060 most likely



it looks somewhat better and you get the other improvements related to ghosting and such. if you were able to play with 1440p dlss quality and preset K, chances are you would still end up with higher performance with 1440p dlss performance and preset M


Yes, in the end people need to experiment and choose what suits them the best.

In my opinion, more options are always good - but I can see why all this alphabet + numbers combo from Nvidia may be confusing to many people, lol.

I like that they allow using 4.5 on GPUs that don't natively support FP8. Unlike AMD, which keeps the int8 version of FSR4 hidden (officially)...
 
1440p dlss performance and preset M would make sense for ampere GPUs above 3060 most likely



it looks somewhat better and you get the other improvements related to ghosting and such. if you were able to play with 1440p dlss quality and preset K, chances are you would still end up with higher performance with 1440p dlss performance and preset M

i can see differences when I zoom in and carefully look but it still looks decent

On 3000 series and below, perf drops like crazy on preset M/L. DLSS 4.5 performance's FPS is actually equivalent to preset K quality mode.
Some games look fine, but in games that scale effects like fog, Lumen, lighting and reflections with resolution, you will see the drop in quality in performance mode. Shimmering n shit.
 
K Quality vs. M Performance in 4k and 1440p:




M Performance holds up pretty nicely. It's actually sharper than K Quality:

kf4zQsrVs4UH7o3a.jpg
pV9pJFbYucgHqJqx.jpg
 
The claim that "hybrids are newer/better" is true for static image generation (Stable Diffusion) where the goal is data efficiency. But for realtime graphics, Nvidia's use of a pure transformer in DLSS 4 is about solving a temporal stability ceiling that CNN hybrids struggle to break through.
Hybrids are newer indeed. Also, I didn't say they're better; even CNNs have advantages over other techs. But hybrids are the future until someone comes up with a different tech. For realtime graphics specifically, hybrid models have the advantage mostly because they are computationally lighter and you don't need the huge dataset a transformer model requires in training. So nope, the future strengths of hybrid models are bigger than those of a full pure transformer model, especially for small and light models such as DLSS, FSR, PSSR, etc. that need super low inference times of 1 to 3 ms, despite what the Nvidia marketing wants you to believe.
 
Download ReShade with add-on support

Get the RenoDX mod for the specific game: E33, for example

After installing ReShade, drop the mod files into the main game folder (where the .exe is; ReShade is installed there as well)

p0Jevxo.jpeg


In game you basically don't have to do anything, you can change mod settings (home button on keyboard) but default is already good:

g4euBVx.jpeg




Suggestions are (you can use whatever you want):

Quality, balanced, DLAA - preset K (4.0)
Performance - preset M (4.5)
Ultra Performance - preset L (4.5)

L is the heaviest, so using it with higher pixel counts doesn't make sense.

Personally I would use preset M Performance for everything; it looks fucking great. But compared to old DLSS3, even preset K performance is better than DLSS3 quality in many aspects, and now preset M performance should be 100% better than the DLSS3 quality setting.

Of course it all depends on your GPU. I wouldn't suggest using 4.5 on Ampere and Turing; on those GPUs stick to preset J/K (DLSS4) - best quality/performance ratio.

Preset M:

cmoe7Xo.jpeg
BH7h3Vp.jpeg
I have a 4080 and only care about IQ; if 4.5 quality/DLAA is the best IQ you can have by a noticeable margin, that's where I'm gonna land.
I can make an IQ sacrifice if the improvement is like 5% while losing 20-30 frames; that's why I asked if balanced in the new model is as good as quality in the older models (at this point I consider 4.5 the new model and 3-4 the older models).

No game is gonna tank my GPU under 60 fps; I usually put shadows on medium and disable or lower RTX a lot, so that gives me tons of frames to max out everything else.
 
Hybrids are newer indeed. Also, I didn't say they're better; even CNNs have advantages over other techs. But hybrids are the future until someone comes up with a different tech. For realtime graphics specifically, hybrid models have the advantage mostly because they are computationally lighter and you don't need the huge dataset a transformer model requires in training. So nope, the future strengths of hybrid models are bigger than those of a full pure transformer model, especially for small and light models such as DLSS, FSR, PSSR, etc. that need super low inference times of 1 to 3 ms, despite what the Nvidia marketing wants you to believe.
On NV hardware with dense tensor core throughput, a hybrid architecture just isn't logical when transformers deliver far superior temporal stability and motion-aware reconstruction. DLSS doesn't use a hybrid because NV's hardware and long-term strategy are built around transformers; a hybrid would simply be slower, less stable, and less future-proof. FSR's design is reasonable for AMD's current constraints, but it shouldn't be framed as an "old tech vs new tech" thing.

And this is never gonna change, since Nvidia started training their AI much sooner than anyone else and they will always have this advantage.
That's part of it, but not the main reason why NV chose to "afford" the transformer. It's what I mentioned above, and not only that: the execution and latency also play an important part. In a hybrid you're essentially forcing the GPU to juggle math workloads between CNN and transformer. It eats up the ms for no good reason (in NV's scenario).
 
I honestly can't answer that with 100% certainty, but when 4.0 was released, there were reports saying that performance mode was just as good as last gen's quality mode. I'm not sure if it will translate to 4.5, but I have seen some videos and screenshot comparisons that seem to point to the new version of performance / ultra performance being as good, if not better, picture-wise, but costing a bit more performance / frames.

Hopefully come release on the 13th (I think) the driver will be polished and the results will be more obvious.
Oh nice, I thought this was it in terms of performance. Still, even at 1080p, DLSS4 Quality looked excellent, even though I played on a 27" 4K screen.
 
So if I have a 4K TV, I should set the game's internal resolution to 1440p and just let it upscale to 4K output, right?
 
So if I have a 4K TV, I should set the game's internal resolution to 1440p and just let it upscale to 4K output, right?

4K output and DLSS to whatever value you want (to achieve good performance).

Games rarely have internal and output resolution options to select.
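In other words: keep the output at native 4K and let the DLSS mode pick the internal render resolution for you. A small sketch of what each mode works out to, assuming the commonly cited default DLSS scale factors (these aren't from this thread, and games can override them):

```python
# Keep the game's output at the TV's native 4K; the DLSS mode determines
# the internal render resolution. Scale factors are the commonly cited
# DLSS defaults (per axis); games can override them.
OUT_W, OUT_H = 3840, 2160

MODES = {
    "quality": 1 / 1.5,
    "balanced": 1 / 1.724,
    "performance": 1 / 2.0,
    "ultra_performance": 1 / 3.0,
}

for mode, s in MODES.items():
    print(f"{mode}: renders at {round(OUT_W * s)}x{round(OUT_H * s)}, outputs {OUT_W}x{OUT_H}")
```

So "1440p internal at 4K output" is simply 4K output with DLSS Quality; there is no need to lower the game's resolution setting itself.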
 
On NV hardware with dense tensor core throughput, a hybrid architecture just isn't logical when transformers deliver far superior temporal stability and motion-aware reconstruction. DLSS doesn't use a hybrid because NV's hardware and long-term strategy are built around transformers; a hybrid would simply be slower, less stable, and less future-proof. FSR's design is reasonable for AMD's current constraints, but it shouldn't be framed as an "old tech vs new tech" thing.


That's part of it, but not the main reason why NV chose to "afford" the transformer. It's what I mentioned above, and not only that: the execution and latency also play an important part. In a hybrid you're essentially forcing the GPU to juggle math workloads between CNN and transformer. It eats up the ms for no good reason (in NV's scenario).
I erased that part because I thought I was coming off too confrontational :messenger_pensive:
 