
DLSS 5 - Yes or No?

Do you think DLSS 5 is the future?

  • Yes and I like it

  • Yes but I don't like it

  • No, it's ugly and we'll forget about it

  • No opinion/other

  • No, we need less AI not more


Results are only viewable after voting.
Meta needs to add DLSS 5 to smart glasses.
 
Because for 40 years I have dreamed of photorealism and now it's here. I don't give a shit how it's achieved, and I certainly don't care if scores of purple hairs get shit-canned and replaced. Technology moves forward and this is awesome.

I meant why anyone would say no … so yes, it's awesome.
 
Being against this just for being AI seems pretty silly/illogical considering we all play games powered by crazy powerful tech now. The games you play are literally enhanced in like 1,000 ways by the tech and tools used. What a strange place to draw a line in the sand.

IMO, the real conversation shouldn't be do you like it 'yes' or 'no', it should be about what version of this future gives artists more control. Rather than pushing back on what it currently is, more attention should be given to how this tech will evolve and how it can better serve the artist's original vision.

A lot of the current criticism focuses on things like the 'modeling shoot overly make-upped' version of Grace or that glossy, 'hero lighting' Instagram look. But those are really just outputs based on how the AI has been trained, essentially default filters. The more interesting question is what happens when the models are better trained and developers get deeper control over that layer.

Imagine a DLSS 4.x toolset that gives artists real high level authority over the final image. Presets, sliders, movable scene lighting, and even the ability to train the model on a game's original art direction. Instead of a one-size-fits-all aesthetic like they demoed, it becomes a flexible system that can be tuned to match tone and intent, just like lighting, color grading, or post processing are used today.

What NVIDIA has shown so far feels limited of course, as should be expected, and them leaning into hyper-real, glamorized "hero lighting" styles might not have been the best choice. What we all really need to see now is the tool in developers' hands showing some level of artistic authorship.

If NVIDIA is listening, their next presentation should be some behind-the-scenes or interview-style segments at studios like Capcom, showing them iterating on a character like Grace across multiple stylistic directions, really highlighting developer input with this tool. The same way artists already adjust lighting to define mood, show off a comparable set of controls for this. There's no reason this couldn't be guided by devs toward something grittier, flatter, or more horror-focused simply by feeding it the right references and constraints. The real thing we all want to see is AI tools like this being under the control of the artists, not just writing over the artist's work with a bunch of generic, samey art.

NVIDIA say it's there, now they need to SHOW US they can overcome these limitations.
 
As long as it's controlled by the devs' own art direction, I guess I am okay with it.

I think Grace is the worst example, as it feels like the games equivalent of a Chinese beauty filter, uncanny-valley effect included. But with or without DLSS, the graphics trend has been heading toward hyper-realism anyway; this just makes the job easier(?) for devs, I assume.

I however do like the environment enhancements. That I am pretty happy with.
 
Understandable; there is a lot more to it on the technical side, a lot of nuances. However, the Resident Evil girl's model doesn't change.

Look at it for yourself. EDIT: if it has nicer red lips, it's because Capcom wanted it that way, or maybe the previous in-game lighting was very flat. I don't subscribe to the tone of the tweet though, but I do like busting balls with fellow gaffers.


The model is trained to improve lighting, and while it does so, it achieves the result through emphasizing all those wrinkles and crevices. It goes balls deep into the task, lacking subtlety in what it does to the human faces. The worst example would be that old woman turning into a wrinkled hag.

You see an improvement in graphics, I see it too - represented by an evil doppelganger. No one denies the advancement in tech. It's just this advancement is being sold at the cost of having 'lizard brain' triggered in some way.
I see "it" staring back at me. It replaces the characters. It looks similar, it may even look better (if not messed up with wrinkles) but it's an evil twin that would eat me alive if I turn away.

 


That's impressive, how much closer it is to the actual model. So my vote goes to yes. The tech is incredible and I'm sure we'll see some devs make great use of it.
I don't know how you look at that image you posted and think the DLSS 5 version looks closer to the original human. It looks like a different, AI-generated human that kind of looks like Amber Heard. The original actress and the original model look the same.
 
Like it or not, I see this as being the future direction of interactive rendering. Maybe even as far as the game under the bonnet being little more than an interactive grey-box world (inc. gameplay) and then letting AI "fill in the blanks" based on reference material, art style, etc.
 
In a controlled environment it could be like a LA Noire thing, which is fine.

It can't just do its own thing on top of previously created characters, it has to be baked into the character design to avoid the hoe/himbo effects.

Why oh why doesn't it use renders from a pool of images from an extreme/path-traced version of the game and apply that to environments in a performant way. I am very OK with lighting via tricks, more so than frames, but it has to be grounded and true to a single digital world, not drawing from every image in human history.

I know this would require individual games be patched, but if the patch was automated (run at high, apply filter to low) we would get plenty of games on the cutting edge while old games we could brute force via locally running lighting techniques.
 
I don't really understand the backlash. I think it improves the visuals in most examples. Is it because it's AI?

Don't get it.

I think it's almost all because it turned Grace into something a bit like you'd see in AI-generated porn. They don't stop and consider that Capcom modeled her like that and it's just an unfortunate association, all they see is face filters and "AI slop" and just spaz out.

Also, it's just generally what you get when you combine idiocy with an urgent need for attention via sensational takes, which sums up a lot of internet discourse these days.
 
The 'devs will have control over it' argument is not well thought through. Developers are not going to spend time creating two different versions of each character.

That means using this makes you beholden to how Nvidia interpret the art direction in whatever game you're playing. No thanks.

It's also so uncanny and looks terrible in motion. Look at the Resident Evil clip again, the woman's hand in the background is all fucked up. This is jumping the shark.
 
The 'devs will have control over it' argument is not well thought through. Developers are not going to spend time creating two different versions of each character.

That means using this makes you beholden to how Nvidia interpret the art direction in whatever game you're playing. No thanks.

It's also so uncanny and looks terrible in motion. Look at the Resident Evil clip again, the woman's hand in the background is all fucked up. This is jumping the shark.

They mean devs have control over how DLSS 5 applies its lighting changes. No need for additional assets.
 
Are you fucking blind?
No, that's why I can see that dlss5 alters the faces.

As I posted above, Oliver from Digital Foundry who kicked off this shitstorm because he loved dlss5 noted explicitly it alters the faces to the point of looking like a different model at times.

Because he's not blind either.
 
No, that's why I can see that dlss5 alters the faces.

As I posted above, Oliver from Digital Foundry who kicked off this shitstorm because he loved dlss5 noted explicitly it alters the faces to the point of looking like a different model at times.

Because he's not blind either.

I've said repeatedly that it looks like a different model. What I don't say is that the old one with just (just!) path tracing, which looks like some kind of discount sex doll, actually looks more human than the one that looks pretty damn human. Because that's just retarded. And Oliver would agree (albeit in a nice way :))
 
I've said repeatedly that it looks like a different model. What I don't say is that the old one with just (just!) path tracing, which looks like some kind of discount sex doll, actually looks more human than the one that looks pretty damn human. Because that's just retarded. And Oliver would agree (albeit in a nice way :))
If you watch the further hands on, they do mention it doesn't know where the light sources are at all, which explains why it just blows everything out and messes up the direction of the lighting, the atmosphere, the reflections, and gives everything hero lighting.

It's just not good. It looks more vivid and superficially realistic, but it looks bad in every way. You lose shadowing, you lose mood, you lose things like the warm, bright red lights in the dark, rainy atmosphere that look tremendous on an HDR screen.

It looks bad.

It ends up looking like a poorly shot Marvel film, at best, where there are no shadows and mood, and sometimes characters look like they're just floating blue screened into a scene. It removes all intentionality to create atmosphere and variations in a single scene because the stupid AI doesn't know where the light sources are and doesn't know what to do with it.
 
The developer having artistic control means nothing if there isn't enough artistic effort in the end. We were complaining about laziness, homogenization, too much focus on photorealism and not enough on art design, and rising hardware prices... but this is OK now.

The thing being optional... yeah, and the horse armor DLC was just a horse armor DLC. Everything that makes money fast and safe by doing less is GOLD for these corporations, and this won't fix ANY of triple-A's problems; it will be the reverse.

The solution to any crap from this industry isn't being "replaced" by another; it's making things right. So cases like Starfield, the "wokism," and whatever else we cite aren't excuses for this.
 
If you watch the further hands on, they do mention it doesn't know where the light sources are at all, which explains why it just blows everything out and messes up the direction of the lighting, the atmosphere, the reflections, and gives everything hero lighting.

It's just not good. It looks more vivid and superficially realistic, but it looks bad in every way. You lose shadowing, you lose mood, you lose things like the warm, bright red lights in the dark, rainy atmosphere that look tremendous on an HDR screen.

It looks bad.

It ends up looking like a poorly shot Marvel film, at best, where there are no shadows and mood, and sometimes characters look like they're just floating blue screened into a scene. It removes all intentionality to create atmosphere and variations in a single scene because the stupid AI doesn't know where the light sources are and doesn't know what to do with it.

I've watched their video a couple of times and don't remember them saying anything like that. Their response was overwhelmingly positive, for a start. You're probably just taking one small line and misinterpreting it or at least extrapolating wildly away from its actual meaning.
 
I really believe when it comes to this tech, people are just focusing on the worst-case use scenario of the thing. Not that I blame them, it's what was shown and how it was shown.

But how about this...

In your typical RT-based 60fps game, each rendered frame takes 16.7ms. Lighting (RT) is the single most expensive process in that frame, taking up around 50% of the budget: around 8ms. Everything else, including DLSS, FSR4, PSSR2, logic, geometry, shading, textures, UI, etc., fits into the rest of that budget.

RT is expensive. Typically, you would need to shoot over 10k rays per pixel if what you are after is movie-style, accurate ground-truth RT. No hardware on the market can do this in real time. So we are currently using "tricks". Instead of shooting thousands of rays, we shoot maybe 1-4 per pixel, then use the denoiser to fill in for all the missing rays. And even then, we have to be aggressive in what we actually try to resolve, or even calculate rays at a lower res than the rendered res, to keep everything nicely in that 8ms budget.
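To make the 1-4 rays-per-pixel point concrete, here's a toy Monte Carlo sketch. The radiance distribution is a made-up stand-in, not a real renderer: the point is just that averaging a handful of random samples gives a noisy estimate, while thousands converge on the true value, and that gap is exactly what the denoiser has to fill in.

```python
import random

def shade_pixel(spp, seed=0):
    """Estimate a pixel's lighting by averaging `spp` random ray samples.
    The 'scene' is a toy stand-in: each ray returns a radiance drawn from
    a noisy distribution whose true mean is 0.5."""
    rng = random.Random(seed)
    samples = [max(0.0, rng.gauss(0.5, 0.3)) for _ in range(spp)]
    return sum(samples) / spp

# A real-time budget of 1-4 samples per pixel gives a noisy estimate;
# an offline budget of thousands converges near ground truth.
realtime = shade_pixel(4)
offline = shade_pixel(10_000)
```

With 10,000 samples the estimate sits very close to the true mean; with 4 it can land almost anywhere, which is why real-time RT leans so hard on denoising.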

What tech like this means is that the DLSS5/FSR4/PSSR2 part of the frame, which currently costs anywhere from 0.7ms to 1.2ms, would instead cost around 4ms. But more importantly, the lighting part would drop from 8ms to as little as 3-4ms. Because now we can shoot significantly fewer rays... and the AI model doesn't just fill in the gaps, it infers lighting data well beyond what the hardware could have computed, simulating the effect of having resolved a ground-truth frame.
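The budget shift can be sanity-checked with simple arithmetic, using the rough numbers above (all of them illustrative assumptions, not measurements):

```python
# Frame-budget arithmetic for a 60 fps target, using the rough numbers
# from the post above -- illustrative assumptions, not measured data.
FRAME_BUDGET_MS = 1000 / 60            # ~16.7 ms per frame

# Today: upscaler ~1.0 ms, RT lighting ~8.0 ms (about half the budget)
today = {"upscaler": 1.0, "lighting": 8.0}
# Hypothetical neural path: heavier model (~4 ms) but far cheaper lighting
neural = {"upscaler": 4.0, "lighting": 3.5}

# Net budget handed back to geometry, shading, logic, etc.
freed_ms = sum(today.values()) - sum(neural.values())
```

Even with the model itself costing roughly 3ms more per frame, the cheaper lighting pass nets budget back for the rest of the frame under these assumptions.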

There are parts of the frame that wouldn't need rays at all to provide accurate lighting. And parts of the frame that a dev could flag so only real rays are used (that's the artist's-intent part).

Can this tech be abused and enable AI slop? Absolutely, but how anyone can look at this and say it's a bad thing is beyond me, when it can do so much for how games are made and lit.
 
I really believe when it comes to this tech, people are just focusing on the worst-case use scenario of the thing. Not that I blame them, it's what was shown and how it was shown.

But how about this...

In your typical RT-based 60fps game, each rendered frame takes 16.7ms. Lighting (RT) is the single most expensive process in that frame, taking up around 50% of the budget: around 8ms. Everything else, including DLSS, FSR4, PSSR2, logic, geometry, shading, textures, UI, etc., fits into the rest of that budget.

RT is expensive. Typically, you would need to shoot over 10k rays per pixel if what you are after is movie-style, accurate ground-truth RT. No hardware on the market can do this in real time. So we are currently using "tricks". Instead of shooting thousands of rays, we shoot maybe 1-4 per pixel, then use the denoiser to fill in for all the missing rays. And even then, we have to be aggressive in what we actually try to resolve, or even calculate rays at a lower res than the rendered res, to keep everything nicely in that 8ms budget.

What tech like this means is that the DLSS5/FSR4/PSSR2 part of the frame, which currently costs anywhere from 0.7ms to 1.2ms, would instead cost around 4ms. But more importantly, the lighting part would drop from 8ms to as little as 3-4ms. Because now we can shoot significantly fewer rays... and the AI model doesn't just fill in the gaps, it infers lighting data well beyond what the hardware could have computed, simulating the effect of having resolved a ground-truth frame.

There are parts of the frame that wouldn't need rays at all to provide accurate lighting. And parts of the frame that a dev could flag so only real rays are used (that's the artist's-intent part).

Can this tech be abused and enable AI slop? Absolutely, but how anyone can look at this and say it's a bad thing is beyond me, when it can do so much for how games are made and lit.

Welcome to modern internet discourse. I've noticed it across topics more and more the past couple of years. I don't really understand what's going on tbh, because there hasn't really been some big shift like social media or smart phones, as in previous shifts. If I didn't know better I'd say it was AI, because that fits the timeline. Although what the mechanism would be I'm not sure.

I just know that society seems to be getting much, much, stupider.
 
People just look old and sweaty with it.
That's not the tech's fault.

It's how it's used. You can task two 3D artists to make a model of, say, your face. Both artists use the very same version of Blender. With one, the model looks like you; with the other, it looks like you, but your eye spacing and jawline are off.

That's not Blender's fault.

It's no different here, and this is what most people don't seem to get.
Welcome to modern internet discourse. I've noticed it across topics more and more the past couple of years. I don't really understand what's going on tbh, because there hasn't really been some big shift like social media or smart phones, as in previous shifts. If I didn't know better I'd say it was AI, because that fits the timeline. Although what the mechanism would be I'm not sure.

I just know that society seems to be getting much, much, stupider.
My take on this is that we now live in a world where it's easier than ever for everyone to put their opinion out there and have a say on things, as quickly as possible. Recognition, acceptance, and gratification take precedence over facts or reason. So what we end up with is the hottest takes out there, and unfortunately most hot takes aren't backed by any kind of relevant fact or knowledge. But what's crazier is that to fit the mould, or stand out even more, you've got to come in with an even hotter take.

People these days are more concerned with winning, or being seen as right, than with being honest, fair, accurate... or even just spending a minute to give something a thought.
 
My take on this is that we now live in a world where it's easier than ever for everyone to put their opinion out there and have a say on things, as quickly as possible. Recognition, acceptance, and gratification take precedence over facts or reason. So what we end up with is the hottest takes out there, and unfortunately most hot takes aren't backed by any kind of relevant fact or knowledge. But what's crazier is that to fit the mould, or stand out even more, you've got to come in with an even hotter take.

People these days are more concerned with winning, or being seen as right, than with being honest, fair, accurate... or even just spending a minute to give something a thought.

Yeah I think it's largely the absolute saturation of content. Everyone has accounts, lots of people have channels, many of them post "AI slop" (which is a correct use of that cliche btw, folks) - it's just total and utter content overload.

And I guess a fight for attention is one of the obvious consequences. I'm sure there are other factors too - I think partisanship has been growing for other reasons as well - but yeah that's probably a huge factor.
 
Based on the current showcase of the tech, close to 300 people have no taste or discerning abilities whatsoever. :messenger_weary:

Like, it's literally WORSE in many aspects compared to the original images and that's not even taking into account the horrific changes to some of the facial features.
 
So I'm curious: are we finally achieving actual photorealism by 2030?

We've all seen those videos run through Runway to make them look like photorealistic gameplay; it's only a matter of getting them to run in real time, which this DLSS seems to be the first step toward.
 