
DLSS 5 - Yes or No?

Do you think DLSS 5 is the future?

  • Yes and I like it

  • Yes but I don't like it

  • No, it's ugly and we'll forget about it

  • No opinion/other

  • No, we need less AI not more


Results are only viewable after voting.
Yes, but it's an expensive GPU that can do this. I'm already satisfied with good image quality and frame rate on console, and if there is RT, I'm thankful for it.
 
Looks amazing! Can't wait to use it on my 5090.

Consoles are going to look even worse in comparison now. I'm interested to see AMD's version. Probably won't be until PS6 though lol.

I hope Cyberpunk gets DLSS5. CDPR usually implements the latest tech. The game already looks great, but with DLSS5 it will be even better.
 
> Looks amazing! Can't wait to use it on my 5090.
>
> Consoles are going to look even worse in comparison now. I'm interested to see AMD's version. Probably won't be until PS6 though lol.
>
> I hope Cyberpunk gets DLSS5. CDPR usually implements the latest tech. The game already looks great, but with DLSS5 it will be even better.
Agreed. I really hope we can get an insider program or something to try it early. I really don't want to wait till fall :(
 
 
Yes, but I don't like it yet. I see where it can go, but the general filter they apply over the visuals makes the shading too soft, blows out highlights, and adds too many facial details that age people.

When the effect can be trained on the art direction the game is going for, and the devs can control how pronounced it is, I think it will be useful.

Also, their first iterations aren't usually good with this stuff. It took until DLSS 3 for the upscaling to look good, ray reconstruction wasn't that great until this last iteration, and just this month they're making frame gen more useful with the dynamic mode.
 
It's DLSS 1.0 all over again.

1. Barely any games supported it.
2. Its quality was extremely questionable and most players reverted to older technologies.
3. It improved massively over time, to the point it became standardized.

History will repeat itself.
 
I wonder if there are assets other than human faces AI could re-render, maybe VFX like fire or water. It has to be something detailed and relatively rare to get the best bang for the buck.

But yes, the Starfield example looks great, while the AC: Shadows/Oblivion examples mess up the lighting. Some faces will look bad; too-generic AI faces.
 
Environmental changes look fantastic. Faces, yeah, need some work, but give it time and it'll be brilliant. How long that takes remains to be seen.
 
I'm a pro-DLSS5'er.

Nvidia basically jumped 1-2 generations in fidelity using "AI slop". Like their hair guy said, instead of the bottom-up approach of computing every single polygon and shader, we can gift-wrap it from the top using ground-truth inference.

My only concern is that even a 5090 will have trouble running this in its truer form.

I can see a 6090 with double or triple the tensor AI cores being needed.
 
This tech is basically the early steps into neural rendering and will lead to vastly improved existing games as well as real-time remasters of games.

I hear a lot of complaining about the art, but if the tools allow for enough dev input, which once matured they should, many of the art issues people are bitching about will be less pronounced. Or what the hell, let players select what they want and alter the style of the game in real time; for example, I want Skyrim Anime Style or whatever. This tech is going to be nuts once it matures.

Last thought, and excuse my ignorance, but does this tech, or could something like it, work on 2D and sprite-based games? I wanna see a real-time remaster of, say, something like Metroid or SOTN.
 
I will say yes, but with a caveat.

Nvidia should tweak the algorithm so DLSS 5 doesn't change character faces so much. Just add the lighting improvements without touching the face as much.

I think there is tons of potential in this technology if they manage to fix that problem.
 
45% like it, the other 55% will come around. This is just the beginning. It will get better and I'm sure devs will be able to tweak it to their liking.
 
> I will say yes, but with a caveat.
>
> Nvidia should tweak the algorithm so DLSS 5 doesn't change character faces so much. Just add the lighting improvements without touching the face as much.
>
> I think there is tons of potential in this technology if they manage to fix that problem.

The fine tuning is in the hands of the devs, ultimately.
 
I swear some people must be fucking blind, stupid, or some combination thereof.

aWMtcDj.png

Look at Grace's skin in this comparison. She goes from being oversmoothed and clay-like to the light actually interacting with the materials of her face.

aF2wYFV.jpeg

And here's Leon. Light is interacting more accurately with skin textures: better specular highlights on the skin and eyes. Leon's waterline around his eye is even able to produce a specular highlight in the DLSS5 image. His chin hairs illuminate realistically rather than just being stuck to his face. I guaran-fucking-tee you that if the differences here were labeled as path tracing off vs. on, people would be falling all over themselves to praise Capcom's engine for what it has achieved.

EA FC had some good shots, too.
ozbVuUt.jpeg

bjwX8xK.jpeg


You could tell me this was a comparison between console graphics vs. ultra PC settings and I'd believe you. Face materials and geometry are actually interacting with light.

Even some of the Hogwarts Legacy shots, which I think go a bit too far, show some of the positives.
sdQBdZL.jpeg


Similar story to the RE9 shots here. Better interaction between light and characters. More specular highlights and detail coming from the underlying materials.

They're in the OP, but these Starfield shots also look really good. Starfield has terrible lighting and character rendering. DLSS5 adds a sense of realism and better grounds everything in the scene. It looks like it goes from having basically no global illumination or bounce lighting to RTGI with multiple bounces. Color changes are likely due to the lighting in the scene now taking the color of the environment into account instead of being unchanging balls of illumination. I also wouldn't be surprised if there is some tweaking to the color correction here, as SF always had really shit colors that you had to correct later with Reshade/RenoDX, and there are rumors circulating of an update coming. That could be one of the changes being made. That's just me speculating, however.
1-scaled.jpg

11-scaled.jpg

I mean, just look at those fucking eyes. They go from dead orbs of shiny glass to actually looking like they're in someone's head. Frankly, the character rendering looks entirely broken in the shot without DLSS5. It reminds me of those mods people would make to run Oblivion/Skyrim on hardware well below the minimum requirements.
 
> I don't even know why it's called DLSS. If they had marketed it as something else, there wouldn't have been so much backlash.
?

So the backlash is because… it isn't actually Super Sampling?

I agree with your first sentence, but the second makes no sense. If they'd called it DLLF (Deep Learning Lighting Filter), there would still be the same backlash.
 
It's the beginning of

> DEVS: "Hold up, why should we even try our best if AI will fix it?"

From now on, making games will be equal to "prompting". Games as we know them will become raw, bare-bones drafts to feed to DLSS 5.
By the time DLSS 5 transitions to DLSS 6, you will no longer see completed, finalized "video games" underneath it. That's the real issue here.

It's a new era of "How much can we half-ass our way through development so that it would STILL look good enough with DLSS?"
 
The faces are catching flak for being too tikkytokky (rightly so), but the tech is still very impressive overall. The improvement to lighting, shadowing, and material quality is massive. Plus, it surprisingly looks better in motion than in stills, faces included.

Many questions remain on performance cost and image permanence (will the DLSS5 faces remain the same on each run?). I have fewer questions about "artistic integrity". This is a tool, and it looks like a very flexible one at that. So it's up to the devs to tune it to their specifications.
 
> ?
>
> So the backlash is because… it isn't actually Super Sampling?
>
> I agree with your first sentence, but the second makes no sense. If they'd called it DLLF (Deep Learning Lighting Filter), there would still be the same backlash.
No, I do not believe it would have been the same backlash. The video is titled DLSS5, and ML upscaling has grown to be one of the most beloved features on recent GPUs. We all anticipated the next evolution of DLSS, especially after the disappointing results of 4.5, not this.

RTX Remix also leverages AI and can dramatically alter the art direction of games as well, yet it was welcomed as a useful tool because they never tried to pass it off as something else. Setting expectations and naming your features appropriately is important in marketing.
 
> It's the beginning of
>
> > DEVS: "Hold up, why should we even try our best if AI will fix it?"
>
> From now on, making games will be equal to "prompting". Games as we know them will become raw, bare-bones drafts to feed to DLSS 5.
> By the time DLSS 5 transitions to DLSS 6, you will no longer see completed, finalized "video games" underneath it. That's the real issue here.
>
> It's a new era of "How much can we half-ass our way through development so that it would STILL look good enough with DLSS?"
None of this is happening, though. We shouldn't get carried away by people drooling over it as a first impression, or by execs thinking this will cure cancer.

For the very reasons you stated, the industry will push back. And then it will change into something that is actually useful and productive over a few years.

Hot take: there will not be one game from a respectable studio built to rely on this approach as it exists now. As long as Nvidia trains and owns the model with whatever data they deem fit, no art department worth their salt will use it to define their game's look. And lighting is an integral part of that look. They might bolt it on for sponsorships and "strategic partnerships", but that harms no one.

> No, I do not believe it would have been the same backlash. The video is titled DLSS5, and ML upscaling has grown to be one of the most beloved features on recent GPUs. We all anticipated the next evolution of DLSS, especially after the disappointing results of 4.5, not this.
>
> RTX Remix also leverages AI and can dramatically alter the art direction of games as well, yet it was welcomed as a useful tool because they never tried to pass it off as something else. Setting expectations and naming your features appropriately is important in marketing.
RTX Remix was never presented as the future of game rendering. It was a modder community tool from the get-go. Its name is inconsequential imo; it's how it is being positioned in the industry. If they called it Deep Learning Reshade Filter and presented it as a game mod that gamers can mess with, people would love it. But when they present it as how studios will officially make games in the future, and people like Todd Howard bless it, that's a much bigger issue than its naming. That's foreshadowing a paradigm shift in how games are made, for better or for worse.
 
> None of this is happening, though. We shouldn't get carried away by people drooling over it as a first impression, or by execs thinking this will cure cancer.
>
> For the very reasons you stated, the industry will push back. And then it will change into something that is actually useful and productive over a few years.
>
> Hot take: there will not be one game from a respectable studio built to rely on this approach as it exists now. As long as Nvidia trains and owns the model with whatever data they deem fit, no art department worth their salt will use it to define their game's look. And lighting is an integral part of that look. They might bolt it on for sponsorships and "strategic partnerships", but that harms no one.
But it already happened with ray tracing, so why wouldn't it happen with this 'neural rendering' thing or whatever it's called? I mean, some games look like shit without RT, lacking features they would normally have.
 
> But it already happened with ray tracing, so why wouldn't it happen with this 'neural rendering' thing or whatever it's called? I mean, some games look like shit without RT, lacking features they would normally have.
Adding RT doesn't get rid of jobs. Overhauling the entire scene's lighting with knobs does. Unless the keys are handed to the dev to completely own and control the model (prompt it, upload samples, make hand-drawn edits, point out issues and fix them, etc.), and it works cross-platform so there is only one version of the game's art style, this won't fly. It needs to become a whole development environment integrated into the game engine. By complete control, I mean to the extent that a look can be unique enough to be trademarked or copyrighted. That's not the approach here. Sliders ain't enough. No matter how many variations you add, it will simply not be enough to prevent games from looking the same.

Neural rendering still holds huge promise and IS the future. But this is not the way…
 
> I'll post what I said in the graphics thread:
>
> They really picked the worst way to showcase this. You do not take Requiem, a game with perhaps the best character models out there, and turn it into AI sloppa. I do not blame anyone for thinking that looks abysmal, as I do too.
>
> But Nvidia also clarified that developers have a large amount of control over the artistic side of the presentation. They can use masking on any part of the visuals which they don't want DLSS5 to touch. The ideal would be if developers offered the ability to disable it on faces via the graphics options but left everything else to DLSS5. Look beyond the faces in those Starfield comparisons and it's working some real magic in the environment. Used conservatively, this might be able to clean up some long-standing issues in rendering.
The issue with that would be that the characters would look even more out of place in the scene.
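For what it's worth, the masking control described above is conceptually just a per-pixel blend between the raw render and the model's output. Here's a toy NumPy sketch of that idea; every name here is hypothetical and this is not any real Nvidia API, just an illustration of how a mask could suppress the effect on faces while keeping it on the environment:

```python
import numpy as np

def apply_with_mask(raw_frame, dlss_frame, mask):
    """Blend a neural-rendered frame over the raw render using a mask.

    raw_frame, dlss_frame: HxWx3 float arrays in [0, 1].
    mask: HxW float array, 1.0 where the neural output should be kept
          (e.g. environment), 0.0 where it should be suppressed (e.g. faces).
    """
    m = mask[..., None]  # add a channel axis so the mask broadcasts over RGB
    return m * dlss_frame + (1.0 - m) * raw_frame

# Toy 2x2 frame: keep the neural output on the left column only.
raw = np.zeros((2, 2, 3))      # stand-in for the untouched render
neural = np.ones((2, 2, 3))    # stand-in for the model's output
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])
out = apply_with_mask(raw, neural, mask)
```

A soft (non-binary) mask would feather the transition at face boundaries, which is presumably what it would take to avoid characters looking pasted into a differently lit scene.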
 


I agree with this. To me it's not just about looking good or not (even if, so far, I really don't like the look of the faces); it's more about how generative AI in general makes everything less... valuable.
You can make, see, and get everything without effort, making everything less special.
It also doesn't help that a lot of stuff looks very similar, because it has this typical "AI" look that our brains can recognize instantly, since we're used to seeing it every day now.

If the next Pixar movie were completely AI-generated, without real art direction, artist thought behind it, or effort, would you not mind? To me it changes a lot. The process of how something was created does change its value.

Yes, with DLSS5, we will have to see if it can just be some kind of setting similar to ray tracing, without changing faces and such, and with total control for the developers, becoming another lighting tool. But clearly, with the showcase Nvidia decided to give here, that wasn't their first intention. What we saw was pretty scary for art direction in general.
 
> I mean obviously some of these results are weird and uncanny valley.
>
> But tailored for a thousand different use cases? Kinda mind-blowing if we're being honest.
You can see, when the image moves, that the lighting leaves some trails at the edges. This is cool, but for me it's more screenshot-cool than motion-cool (though, like UE5 recently, that's sometimes difficult to see in pre-recorded video unless it's deliberately shown, and it makes for great screenshots and people lap it up… more temporal artifacts coming up…).
 
Seeing the result, I realize that I never really wanted video games to be this realistic, let alone have this layer of artificial intelligence. It's a reflection of today's AI-generated art, where we're drifting toward a generic aesthetic defined by beauty filters, harsh contrasts, and oversaturated colors. Honestly, it's unpleasant. I'd rather stick with the current generation's style for several more years, because this trajectory creates a visceral disconnect that I can't quite articulate, but that is undoubtedly there.
 
This is certainly not a huge selling point like DLSS4 was.

AMD is likely going to catch up to DLSS 4, so Nvidia is going this route. Not sure if investing this many resources into it is going to be as much of a benefit.

I would gladly choose AMD hardware if it didn't do this but was path-tracing capable and had ample VRAM.
 
It's obviously the future, I can't see a scenario in which people go back to before after having access to this shortcut. I also liked everything I saw. Still somewhat worried for what it implies for artists and artistic expression.
 
I am pretty stoked to mess around with it, actually. Not sure how much the game needs to support it, or can it be forced onto any image?

Basically I want to play Splinter Cell Chaos theory but photo real.
 
> You can see, when the image moves, that the lighting leaves some trails at the edges. This is cool, but for me it's more screenshot-cool than motion-cool (though, like UE5 recently, that's sometimes difficult to see in pre-recorded video unless it's deliberately shown, and it makes for great screenshots and people lap it up… more temporal artifacts coming up…).
Pretty ballsy of them to have all those open world games supporting it at launch. It's going to be a bloodbath! lol. They will probably split the community too… as both sides are going to think the other is ruining gaming and "in the way" of its future.
 
> Most depressing Gaf thread I've seen. Another big step towards a soulless generative AI hellscape, and half the people are spreading their asscheeks for it.
If you want a hivemind, there are other forums you can go to. I'd rather have a spicy debate than a downloaded opinion.
 
> Most depressing Gaf thread I've seen. Another big step towards a soulless generative AI hellscape, and half the people are spreading their asscheeks for it.
There are other places you can go if you want mindless confirmation of everything you believe. I hear ResetEra and Bluesky are open.
 
The second cancer after Unreal Engine, where everything will start looking even more same-y and slop-y. And since it requires a lot of resources, it's primarily for the data centers, so have fun paying your $50+ GeForce Now subscription.
 
> This is certainly not a huge selling point like DLSS4 was.
>
> AMD is likely going to catch up to DLSS 4, so Nvidia is going this route. Not sure if investing this many resources into it is going to be as much of a benefit.
>
> I would gladly choose AMD hardware if it didn't do this but was path-tracing capable and had ample VRAM.
Don't worry. I'm sure you'll be able to buy an AMD graphics card in a few years that has a much worse implementation that only works in a few games.
 
I think what was shown off is a mixed bag. Works really well on the environment, but most of the faces are bad. So no in its current form, but yes to more work being done on the technology to improve it.
 