
DLSS 5 - Yes or No?

Do you think DLSS 5 is the future?

  • Yes and I like it

  • Yes but I don't like it

  • No, it's ugly and we'll forget about it

  • No opinion/other

  • No, we need less AI not more


Results are only viewable after voting.
Meta needs to add DLSS 5 to smart glasses.
 
Because for 40 years I have dreamed of photorealism and now it's here. I don't give a shit how it's achieved, and I certainly don't care if scores of purple hairs get shit-canned and replaced. Technology moves forward and this is awesome.

I meant why anyone would say no … so yes, it's awesome.
 
People being against this just because it's AI seems pretty silly/illogical, considering we all play games powered by crazy powerful tech now. The games you play are literally enhanced in like 1,000 ways by the tech and tools used. What a strange place to draw a line in the sand.

IMO, the real conversation shouldn't be a simple 'yes' or 'no' on whether you like it; it should be about what version of this future gives artists more control. Rather than pushing back on what it currently is, more attention should be given to how this tech will evolve and how it can better serve the artist's original vision.

A lot of the current criticism focuses on things like the 'modeling shoot overly make-upped' version of Grace or that glossy, 'hero lighting' Instagram look. But those are really just outputs based on how the AI has been trained, essentially default filters. The more interesting question is what happens when the models are better trained and developers get deeper control over that layer.

Imagine a DLSS 4.x toolset that gives artists real high level authority over the final image. Presets, sliders, movable scene lighting, and even the ability to train the model on a game's original art direction. Instead of a one-size-fits-all aesthetic like they demoed, it becomes a flexible system that can be tuned to match tone and intent, just like lighting, color grading, or post processing are used today.
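To make the idea concrete, here's a purely hypothetical sketch of what that kind of developer-facing control surface could look like. None of these names correspond to any real NVIDIA API; the presets, sliders, and per-game reference images are all invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical developer-facing controls for a neural "final image" pass.
# Every name here is invented; this only sketches the presets/sliders/
# trainable-reference idea described above.

@dataclass
class NeuralRenderProfile:
    preset: str = "neutral"          # e.g. "neutral", "hero", "horror"
    skin_detail: float = 0.5         # 0 = smoothed out, 1 = every pore and wrinkle
    key_light_bias: float = 0.0      # -1 pushes flat/moody, +1 pushes glamorized
    reference_images: list = field(default_factory=list)  # the game's own art direction

    def clamp(self):
        """Keep the sliders inside their valid ranges."""
        self.skin_detail = min(1.0, max(0.0, self.skin_detail))
        self.key_light_bias = min(1.0, max(-1.0, self.key_light_bias))
        return self

# A horror-focused tuning that feeds the model the game's own concept art
# instead of a one-size-fits-all default:
profile = NeuralRenderProfile(
    preset="horror",
    skin_detail=0.8,
    key_light_bias=-0.7,
    reference_images=["concept/grace_v2.png"],
).clamp()
```

The point of the sketch is just that tone becomes a tunable parameter, the same way lighting and color grading are today, rather than a fixed default baked into the model.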

What NVIDIA has shown so far feels limited of course, as should be expected, and them leaning into hyper-real, glamorized "hero lighting" styles might not have been the best choice. What we all really need to see now is the tool in developers' hands showing some level of artistic authorship.

If NVIDIA is listening, their next presentation should be some behind-the-scenes or interview-style segments at studios like Capcom, showing them iterating on a character like Grace across multiple stylistic directions and really highlighting developer input with this tool. The same way artists already adjust lighting to define mood, show off a comparable set of controls for this. There's no reason this couldn't be guided by devs toward something grittier, flatter, or more horror-focused simply by feeding it the right references and constraints. The real thing we all want to see is AI tools like this being under the control of the artists, not just writing over the artist's work with a bunch of generic, samey art.

NVIDIA says it's there; now they need to SHOW US they can overcome these limitations.
 
As long as it's controlled by the devs' own art direction, I guess I am okay with it.

I think Grace is the worst example, as it feels like the games equivalent of a Chinese beauty filter, uncanny valley effect included. But with or without DLSS, the graphics trend has been heading toward hyper-realism anyway; this just made the job easier(?) for devs, I assume.

I however do like the environment enhancements. That I am pretty happy with.
 
Understandable, there is a lot more to it on the technical side, a lot of nuances. However, the Resident Evil girl's model doesn't change.

Look at it for yourself. EDIT: if it has nicer red lips, it's because Capcom wanted it that way, or maybe the previous in-game light was very flat. I don't subscribe to the tone of the tweet though, but I do like busting balls with other fellow gaffers.


The model is trained to improve lighting, and while it does so, it achieves the result through emphasizing all those wrinkles and crevices. It goes balls deep into the task, lacking subtlety in what it does to the human faces. The worst example would be that old woman turning into a wrinkled hag.

You see an improvement in graphics, I see it too - represented by an evil doppelganger. No one denies the advancement in tech. It's just this advancement is being sold at the cost of having 'lizard brain' triggered in some way.
I see "it" staring back at me. It replaces the characters. It looks similar, it may even look better (if not messed up with wrinkles) but it's an evil twin that would eat me alive if I turn away.

cXzruMkgOnZLuflR.jpg
 
7zKHwS21m1KMwqkc.jpg


That's impressive, how much closer it is to the actual model. So my vote goes to yes. The tech is incredible and I'm sure we'll see some devs making great use of it.
I don't know how you look at that image you posted and think the DLSS 5 version looks closer to the original human. It looks like a different, AI-generated human that kind of looks like Amber Heard. The original actress and the original model look the same.
 
Like it or not, I see this as being the future direction of interactive rendering. Maybe even as far as the game under the bonnet being little more than an interactive grey-box world (inc. gameplay), with AI "filling in the blanks" based on reference material, art style, etc.
 
In a controlled environment it could be like an L.A. Noire thing, which is fine.

It can't just do its own thing on top of previously created characters, it has to be baked into the character design to avoid the hoe/himbo effects.

Why oh why doesn't it use renders from a pool of images from an extreme/path-traced version of the game and apply that to environments in a performant way? I am very OK with lighting via tricks, more so than frames, but it has to be grounded and true to a single digital world, not drawing from every image in human history.

I know this would require individual games to be patched, but if the patch was automated (run at high, apply filter to low) we would get plenty of games on the cutting edge, while old games we could brute-force via locally running lighting techniques.
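The "run at high, apply filter to low" idea above could be sketched as a per-game distillation loop. Everything here is hypothetical; `render` and `train_step` are invented stand-ins, not any real tooling:

```python
# Hypothetical per-game "distillation" pipeline for the idea above:
# render reference frames with the game's own path-traced/max settings,
# then train a lightweight filter that maps low-settings frames toward
# that same game's ground truth (no external image pool at all).

def build_game_filter(scenes, render, train_step):
    """scenes: list of scene ids.
    render(scene, quality) -> frame (stand-in for the game's renderer).
    train_step(low, high) -> training loss (stand-in for the learner).
    Returns the mean loss over all collected frame pairs."""
    pairs = []
    for scene in scenes:
        high = render(scene, quality="path_traced")  # ground truth from THIS game
        low = render(scene, quality="low")           # what the player's GPU draws
        pairs.append((low, high))
    losses = [train_step(low, high) for low, high in pairs]
    return sum(losses) / len(losses)
```

The design point is that the filter's training data comes entirely from one game's own high-end renders, which is what would keep the result "grounded and true to a single digital world."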
 
I don't really understand the backlash. I think it improves the visuals in most examples. Is it because it's AI?

Don't get it.

I think it's almost all because it turned Grace into something a bit like you'd see in AI-generated porn. They don't stop and consider that Capcom modeled her like that and it's just an unfortunate association, all they see is face filters and "AI slop" and just spaz out.

Also, it's just generally what you get when you combine idiocy with an urgent need for attention via sensational takes, which sums up a lot of internet discourse these days.
 
The 'devs will have control over it' argument is not well thought through. Developers are not going to spend time creating two different versions of each character.

That means using this makes you beholden to how Nvidia interpret the art direction in whatever game you're playing. No thanks.

It's also so uncanny and looks terrible in motion. Look at the Resident Evil clip again, the woman's hand in the background is all fucked up. This is jumping the shark.
 
> The 'devs will have control over it' argument is not well thought through. Developers are not going to spend time creating two different versions of each character.
>
> That means using this makes you beholden to how Nvidia interpret the art direction in whatever game you're playing. No thanks.
>
> It's also so uncanny and looks terrible in motion. Look at the Resident Evil clip again, the woman's hand in the background is all fucked up. This is jumping the shark.

They mean devs have control over how DLSS 5 applies its lighting changes. No need for additional assets.
 
Are you fucking blind?
No, that's why I can see that dlss5 alters the faces.

As I posted above, Oliver from Digital Foundry who kicked off this shitstorm because he loved dlss5 noted explicitly it alters the faces to the point of looking like a different model at times.

Because he's not blind either.
 
> No, that's why I can see that dlss5 alters the faces.
>
> As I posted above, Oliver from Digital Foundry who kicked off this shitstorm because he loved dlss5 noted explicitly it alters the faces to the point of looking like a different model at times.
>
> Because he's not blind either.

I've said repeatedly that it looks like a different model. What I don't say is that the old one with just (just!) path tracing, which looks like some kind of discount sex doll, actually looks more human than the one that looks pretty damn human. Because that's just retarded. And Oliver would agree (albeit in a nice way :))
 
> I've said repeatedly that it looks like a different model. What I don't say is that the old one with just (just!) path tracing, which looks like some kind of discount sex doll, actually looks more human than the one that looks pretty damn human. Because that's just retarded. And Oliver would agree (albeit in a nice way :))
If you watch the further hands on, they do mention it doesn't know where the light sources are at all, which explains why it just blows everything out and messes up the direction of the lighting, the atmosphere, the reflections, and gives everything hero lighting.

It's just not good. It looks more vivid and superficially realistic, but it looks bad in every way. You lose shadowing, you lose mood, you lose things like the warm, bright red lights in the dark, rainy atmosphere that look tremendous on an HDR screen.

It looks bad.

It ends up looking like a poorly shot Marvel film, at best, where there are no shadows and mood, and sometimes characters look like they're just floating blue screened into a scene. It removes all intentionality to create atmosphere and variations in a single scene because the stupid AI doesn't know where the light sources are and doesn't know what to do with it.
 
The developer being in artistic control means nothing if there isn't enough artistic effort at the end. We were complaining about laziness, homogenization, too much focus on photorealism and not enough on art design, and rising hardware prices... but this is OK now.

The thing being optional... yeah, and the horse armor DLC was just a horse armor DLC. Anything that makes money fast and safely by doing less is GOLD for these corporations, and this won't fix ANY of triple-A's problems; it will be the reverse.

The solution for any crap from this industry isn't it being "replaced" by another; it's making things right, so cases like Starfield, the "wokism" and whatever we say aren't excuses for this.
 
> If you watch the further hands on, they do mention it doesn't know where the light sources are at all, which explains why it just blows everything out and messes up the direction of the lighting, the atmosphere, the reflections, and gives everything hero lighting.
>
> It's just not good. It looks more vivid and superficially realistic, but it looks bad in every way. You lose shadowing, you lose mood, you lose things like the warm, bright red lights in the dark, rainy atmosphere that look tremendous on an HDR screen.
>
> It looks bad.
>
> It ends up looking like a poorly shot Marvel film, at best, where there are no shadows and mood, and sometimes characters look like they're just floating blue screened into a scene. It removes all intentionality to create atmosphere and variations in a single scene because the stupid AI doesn't know where the light sources are and doesn't know what to do with it.

I've watched their video a couple of times and don't remember them saying anything like that. Their response was overwhelmingly positive, for a start. You're probably just taking one small line and misinterpreting it or at least extrapolating wildly away from its actual meaning.
 