
Nvidia at Live GTC : DLSS 5

Before 2019: Ok team, we have to optimize the game or it will run like shit!
Between 2019 and 2025: Ok team, AI will make the game run well. I don't care about performance, but we still need to make good textures, meshes, and models.
Starting from 2026: Ok team, fuck optimization, textures, and geometry. Just develop a cheap basic character model, something like a mannequin, and AI will make it look good.



Gaming is doomed
Complete doomer nonsense.

Studios spend years obsessing over optimization, dragging development cycles to 6–7 years, shipping games with ugly textures anyway, and even wasting time remaking games that are barely four years old. It's ridiculous.

When technology shows up that makes things like optimization and visual fidelity easier to achieve, you use it. Not using it is just dumb.

If developers have any sense, they'll embrace it and shift their time toward actual gameplay innovation. Better physics, deeper systems, real destruction, rather than spending years trying to polish an ugly turd of a game that will still end up ugly.

This is literally the best thing to happen in gaming since I can't even remember.
 
There's no way you can prefer the first image from this comparison

Captura-de-pantalla-2026-03-16-194857.png

Captura-de-pantalla-2026-03-16-194908.png


The first image is so flat and pale it's unreal - she looks like a porcelain doll.
The problem I particularly have is that I've seen so many AI images/videos around the internet that my brain just can't help but associate all of its telltale signs with low-quality, low-effort stuff.

I don't feel like I'm looking at a videogame image; I feel like I'm watching a YouTube short where Epstein and Diddy are about to have a Dragon Ball-level fight. To put it another way, if this had been shown to me 5 years ago, before AI generation became known as AI slop and was basically everywhere, I would've been blown away.

But now it just looks like more AI slop.
 
I think it all mostly looks fucking fantastic. People are just mad because AI. I keep seeing all this "yassification" of Grace Ashcroft as if she wasn't a blue-eyed blonde in her 20s to begin with. Nobody was bitching about that, though?
 
They need to make it look more organic, because right now the Leon and Grace faces look like those basic AI video memes on YouTube. I guess eventually they'll make it more organic and animate it more properly. Once they do that, more people will start being into it; at the moment I have no interest, but I guess it does have potential if I'm being honest. I've been watching a bunch of those fun AI vids, like 80s-style Jetsons, 80s-style Resident Evil, Zelda as an 80s dark fantasy setting, etc. Some of those are better done than others, so in the future, with some improvements, it'll be better. And hopefully by that time, probably at least past 2030, most of the cards able to do this will be sold at a consumer-friendly price.

For me, I just bought a new PC in December, so I won't be getting a new one until at least the RTX 8000 series. This AI thing should be way better by then, and hopefully the RTX 8000 series will have a fairer price to pair with it. Going for another Microcenter pre-built again.
 
Try turning your light on at night in a dark room. It looks different, and I'll give you a clue why: it isn't because everything in the room suddenly transformed into different things.
Holy shit I'm not turning the lights off ever again.
 
So because of better lighting her lips are thicker and juicier? Hmm, yes, I understand now.

It's all making sense.
It doesn't change the models in any way, according to Nvidia; it changes (upgrades) materials and lighting, in addition to subsurface scattering.

The lighting itself can make the model appear different, even though it isn't. The lips look different as a result of material roughness being higher. I'm not smart enough to explain how reflections and roughness are tied together, but that's what should be different.
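For what it's worth, the roughness/reflection tie can be sketched with a toy Blinn-Phong specular term. The roughness-to-exponent mapping below is one common convention, an assumption for illustration, not Nvidia's actual shader math:

```python
def specular(cos_nh: float, roughness: float) -> float:
    """Toy Blinn-Phong specular term.

    A rougher microsurface scatters the mirrored light, which maps to a
    lower exponent and therefore a wider, softer highlight.
    """
    # One common roughness-to-exponent convention (an assumption here).
    shininess = 2.0 / roughness**4 - 2.0
    return max(cos_nh, 0.0) ** shininess

# Slightly off the mirror direction (normal/half-vector cosine = 0.95):
tight = specular(0.95, roughness=0.3)  # smooth surface: highlight has already died off
broad = specular(0.95, roughness=0.7)  # rough surface: still visibly lit here
```

So a higher roughness spreads the highlight out instead of concentrating it in one sharp point, which is exactly the kind of change that reads as "the lips look different" even when the geometry is identical.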
 
Really impressive from the DF video I saw, but I'm wondering what the performance cost will be in games. I hope Nvidia keeps that down, but what they have shown looks great.
 
I hate to break it to you but games aren't real.
crying-still.gif
The work put behind it is real: there was a moment when someone thought about how to create a specific thing, worked on it, and created it. It's very different from AI generating something. There is a thought process, an intent, art decisions.
That's what makes it real, what makes it art.
 
Starting from 2026: Ok team, fuck optimization, fuck textures, fuck 3D models, just develop a cheap basic character model, something like a mannequin, and AI will make it run and look good.
I half hope for this. All the devs just saying fuck graphics and developing low-poly games with low-res textures, letting AI handle realistic imagery.

That way I can just leave DLSS 5 off and enjoy some low-tech look, just the way I like it. Of course, games would still need good artistic direction, but from what I'm seeing this tech can't do that for them.
 
Yes. The AI reconstruction uses the underlying information to inform its interpretation. As long as the underlying information for a given object doesn't change, the reconstruction will be identical every time.
I don't know, man. You can give an AI a prompt multiple times and the result won't always be the same. I could imagine that being the case here too, especially since "the underlying information for a given object doesn't change" is just impossible. Characters move around, the lighting changes, the weather changes, fog, shadows, particles near them and what-not. Keeping those AI reconstructions stable must be an engineering nightmare.

Nope, it won't.

It already uses a shitload of memory; if you wanted to make it consistent you'd literally have to keep previous generations in memory and even save them to storage.

This will literally look different every time you start the game, or just open the menu and close it again.
Yeah that would be a big problem. Imagine you are playing a long RPG and characters start looking different as the game goes on, that would take me out of the game completely.
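For what it's worth, the "it would have to store previous generations" worry assumes the output is random. A generator can instead be seeded deterministically from its inputs, so identical inputs reproduce identical output every session without caching anything; the real question is whether the inputs are ever identical frame to frame, as pointed out above. A toy sketch (names hypothetical, not how DLSS actually works):

```python
import hashlib
import random

def reconstruct(scene_inputs: bytes) -> list[float]:
    """Toy stand-in for a generative reconstruction pass (hypothetical).

    The RNG seed is derived from the scene inputs, making the output a
    pure function of those inputs: same inputs, same result, every run,
    with no previous generations kept in memory or on disk.
    """
    seed = int.from_bytes(hashlib.sha256(scene_inputs).digest()[:8], "big")
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

frame_a = reconstruct(b"grace_face, lighting_state_17")
frame_b = reconstruct(b"grace_face, lighting_state_17")  # a fresh "session"
assert frame_a == frame_b  # deterministic across runs
```

Different inputs (a new camera angle, changed lighting) would still produce different output, so the stability concern about moving scenes stands; seeding only removes the run-to-run lottery.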
 
Have they posted images using DLSS 4 or 4.5 vs 5 so we can see the real difference? In a marketing sense, comparing extreme ends can be dramatic, but I'd like to see some real-life comparisons. I know the new DLSS 5 has some new add-ons in it, so maybe it won't matter; just curious.
 
Any art form is not real then

Are paintings real?

Are movies real?

Is music real?

Is theater real?

But guess what: they are what they are because they are made by individual human beings who cannot be replicated in any way....

This is the opposite of that
Yes, those are 'real'. Games are code. The code generates the images. They can be art, but they aren't real.
 
I suppose this isn't a DLSS 5 OT, so I can speak freely, and I will: all of this looks like utter dogshit.

It's like any random dude's face becomes my avatar.
 
None of you can gaslight me into believing the AI slop filter on the right looks better. And the rest of the internet agrees with me
It objectively looks better.

And nothing about the art is changed.

You guys are just hating because it's AI. Not because it looks worse.

If it looked the EXACT same, but took 15 years to make because it was handcrafted, you'd say it looks amazing. :messenger_grinning_sweat:
 
The work put behind it is real: there was a moment when someone thought about how to create a specific thing, worked on it, and created it. It's very different from AI generating something. There is a thought process, an intent, art decisions.
That's what makes it real, what makes it art.
And when someone puts in the effort to tailor this filter to create a specific look - that also applies.
 
The problem I particularly have is that I've seen so many AI images/videos around the internet that my brain just can't help but associate all of its telltale signs with low-quality, low-effort stuff.

I don't feel like I'm looking at a videogame image; I feel like I'm watching a YouTube short where Epstein and Diddy are about to have a Dragon Ball-level fight. To put it another way, if this had been shown to me 5 years ago, before AI generation became known as AI slop and was basically everywhere, I would've been blown away.

But now it just looks like more AI slop.
EXACTLY THIS!
 
Stop being disingenuous...
Artistic intent doesn't have to be trying to make something look beautiful. It's simply the intent of what the artists tried to convey using their resources.

If you make a model of a child, the artistic intent is for the character to look like a child. If you run it through DLSS 5 and it makes the character look like a young adult, congratulations: the AI has butchered the artistic intent. This is not hard to figure out.

To clarify, I am agreeing with you.
 
Here's the direct comparison of the Grace pic on Nvidia's site: https://www.nvidia.com/en-us/geforc...equiem-geforce-rtx-comparison-screenshot-001/

It looks like the pics are taken in slightly different positions, so her lips looking larger in the DLSS 5 version seems to be just her mouth being slightly more open, not a sign that the model geometry has been changed. The main differences come from lighting, and the more detailed skin texture and subsurface scattering. Even when comparing non-RT/RT/Path Tracing, her face looks substantially different depending on the lighting.

Radical lighting changes like this can certainly have an effect on the "look" of characters that might not have been intended when they were designed for a simpler setup. People just got used to how Grace is "supposed to" look, so they naturally feel the face is off despite the model being identical.
 
Is this going to be able to "remember" the last AI reconstruction? Or are characters going to look different every time I turn the camera towards them?

Really impressive from the DF video I saw, but I'm wondering what the performance cost will be in games. I hope Nvidia keeps that down, but what they have shown looks great.

These are my two major concerns with it. AI generation tends to be quite heavy on VRAM too.

If it's 5-series exclusive I can't use it anyway, and I'm not planning on upgrading anytime soon.
 
Here's the direct comparison of the Grace pic on Nvidia's site: https://www.nvidia.com/en-us/geforc...equiem-geforce-rtx-comparison-screenshot-001/

It looks like the pics are taken in slightly different positions, so her lips looking larger in the DLSS 5 version seems to be just her mouth being slightly more open, not a sign that the model geometry has been changed. The main differences come from lighting, and the more detailed skin texture and subsurface scattering. Even when comparing non-RT/RT/Path Tracing, her face looks substantially different depending on the lighting.

Radical lighting changes like this can certainly have an effect on the "look" of characters that might not have been intended when they were designed for a simpler setup. People just got used to how Grace is "supposed to" look, so they naturally feel the face is off despite the model being identical.
This is disingenuous. Zoom in.

It's not just her mouth, her eyes are also larger and shaped differently and her jawline has changed as well.
 
These are my two major concerns with it. AI generation tends to be quite heavy on VRAM too.

If it's 5-series exclusive I can't use it anyway, and I'm not planning on upgrading anytime soon.
Yeah, true. Even though I have a 5-series card, I don't want it running at max constantly just to use DLSS 5, but I like what I've seen so far.
 
I'm starting to think a huge number of people's brains have literally been broken by some sort of psyop: a Pavlovian reaction to anything AI that stops their brains working and forces the tiresome "ahhregh AI slop, slop slop" to just slop out of their mouths.

It's all very strange.
 
Todd agrees that it's amazing with Starfield, and he's right. Starfield (and FIFA) are the best examples so far, they look incredible with this on.


What's Morrowind's water rendering got to do with this AI shit? Come on man.
And "we can't wait for you to play/enjoy/experience this" is such pathetic marketing bullshit.
I'm so glad people see through this obvious nonsense.
 
I'm starting to think a huge number of people's brains have literally been broken by some sort of psyop: a Pavlovian reaction to anything AI that stops their brains working and forces the tiresome "ahhregh AI slop, slop slop" to just slop out of their mouths.

It's all very strange.
I don't care whether this has been AI rendered or done by hamsters running in wheels. It looks like fucking shit.
It absolutely doesn't matter how this is created, it matters that it's poorly done.
 
It's just a slightly different angle. Mouth a little more open, eyes a little more open, looking in a different direction. You've got to do the zooming yourself.
It doesn't explain the eye shape itself or the jawline. Your argument would be better off if you simply said "this tech is still early in development, so some things might appear altered."
 
Nvidia is basically delivering the Bluepoint-remake version of these games with DLSS 5. Whether or not it accurately captures the true spirit of the original source material, there's definitely some impressive technical uplift.
 