
Nvidia Live at GTC: DLSS 5

Wait till DLSS 6 or 7, it'll be what most will prefer.

DLSS 5 right now is super early. DLSS 1 sucked, DLSS 2 was also kinda meh, DLSS 3 was a huge improvement, and DLSS 4 is almost perfect now. This has a long, long way to go; I'm glad the technology exists and is being actively developed. Anyway, I'm positive there will be sliders for whether you want the GenAI filter, just the upscaling, both, or whatever. It's just melodrama right now. Let it release, then get optimized at scale and finally tuned by game devs and Nvidia.

I'm waiting and will use this in 2030.
 
just don't use it at the end of the process on all the graphics, like a filter that alters the art direction... add it earlier in the dev cycle instead... so everything is more cohesive
 
Impressive, even if it looks overdone right now. Scale it back like 25% and this is a hands-down winner.

It'll run on a 4070 right? Right???

Technically yes, but Nvidia will probably not allow it.

The demo used two RTX 5090s: one to run RE9 and another to run DLSS 5. So right now DLSS 5 needs up to 32GB of VRAM. The release is this fall. That's less than a year to make it run on an RTX 5060, which has 8GB of VRAM; quite incredible if they manage it, but I doubt they will. It might be supported, but an RTX 5060 or even a 5070 will run DLSS 5 worse / at a higher performance cost. RTX 5080+ is safe.

RTX 4080+ will definitely run this, but this is Nvidia; they'll come up with some excuse, like it having fewer tensor cores than neural rendering needs. The best GPU to run DLSS 5 will 100% be the new RTX 6000 series.
 
just don't use it at the end of the process on all the graphics, like a filter that alters the art direction... add it earlier in the dev cycle instead... so everything is more cohesive
If this technology succeeds, this will be the logical next step. After that, people will be forced to buy into it or play the Switch 1 version of new games.
 
just don't use it at the end of the process on all the graphics, like a filter that alters the art direction... add it earlier in the dev cycle instead... so everything is more cohesive
 
It just looks like someone put the sharpness on the TV to 100 and then overlit everything.
All these images look like shitty ReShade mods combined with an Instagram filter on the faces.
I don't get people saying it looks better. Sure, it's brighter, but there isn't even any shadowing in the forest. It's like there's a big light everywhere, not allowing shade in any direction.

It reminds me a bit of the TV world, where dynamic picture settings oversaturate everything and make it stand out but lose all accuracy. It got so bad that there's now a Filmmaker Mode on modern TVs, which turns off all the processing so you can watch a movie as it was designed.
 
I think the central issue is that most people are not seeing the in game presentation as being 'compromised' to begin with.

If the devs/artists say actually this is exactly the level of fidelity and responsive lighting we were going for but couldn't achieve, would the discussion change?

dZ8L2F0fLfLufgh8.jpg


I mean look at Sid in 1995. He looks like he fell out of a Bethesda game. And this was offline rendering. No way this was artistic vision. It was a limitation of the tech.

I mean yeah we can both agree Sid looks like shit because of limitations of the tech at the time.
But what they are saying here is that the AI slop look of every example they showed isn't something that's inherently part of the tech but rather an artistic choice that every dev just happened to share.

I don't buy it, the same way I don't buy that the AI show I posted truly turned out exactly as Aronofsky envisioned it. And until they can show examples of how extensive these creative tools are, I'll keep being skeptical about it.
 
I must be missing something here - isn't this optional? Why people are getting bent outta shape about this, I have no idea.

Personally, I think this tech looks fucking amazing. Undoubtedly a system-selling feature and the direction things are going to go whether people like it or not. I hope we'll have the option to tailor or customize the AI output, like adding custom prompts etc. That might let us correct some bad developer choices, particularly ugly, mannish-looking female characters.
 
There is something quite strange going on with the way some people reacted to this. Do they really hate it, or are they jealous because they don't have a good PC or whatever? I mean, this tech is literally on the level of ReShade mods (only a bit more highbrow). You don't have to use it, it won't be pushed onto you, and even if it were, how would you know? The amount of forced/fake/impotent cynicism and hate is crazy.



Nothing new, IMO.
People were also bitching about PhysX more than a decade ago - upscaling, RT, PT, frame gen... you can still find some screeching about the "RT gimmick" and "fake frames".

Nvidia should try to explain more, I guess. They quoted from their own DLSS 5 FAQ and some guy here made a thread calling it damage control, lol.
 
Optional? Sure. When your game can't even hit 30fps at 4K, you use DLSS to get it to actually run at or close to 60 with almost similar image quality. Is it really optional when you need it to gain the performance?

But then you get a response saying "just turn down the resolution and settings". To that I say, what's the point of DLSS then?

DLSS 5, however, is like chromatic aberration: you turn it on, it changes the image, but does it improve it? Depends on who you ask. If it's not needed, what's the point of having it? That, on top of paying a hefty price for the GPU to get the feature.
 
Nothing new, IMO.
People were also bitching about PhysX more than a decade ago - upscaling, RT, PT, frame gen... you can still find some screeching about the "RT gimmick" and "fake frames".

Nvidia should try to explain more, I guess. They quoted from their own DLSS 5 FAQ and some guy here made a thread calling it damage control, lol.

I've been discussing videogame gfx for over three decades, and yeah, the pattern is always the same: almost every new graphical feature faces backlash before becoming an industry standard.

The overreactions to Nvidia's DLSS 5 demo are extreme, though. The idea that, for example, a studio like FromSoft would blindly slap an excessive AI filter over their own art is nonsense. Nvidia provides the tools and the granularity, and it's still the creator's choice how to implement it - or not.
 
Can AMD provide something similar?
AI is AI.

As long as the GPU has matrix cores/accelerators (what Nvidia calls tensor cores, AMD calls AI accelerators, and Intel calls XMX engines), you can do anything AI-related on it. To what degree comes down to what AI models you have developed and trained, and how many of those AI cores you have.

So everything that Nvidia can do with AI... AMD can do too. Whether they can do it as well, or how they go about doing it, is another matter. It's like asking if a GPU that has vertex shaders can do 3D geometry.

Why AMD has been lagging behind in the AI space is that, for some reason, they didn't think having actual matrix cores in their GPUs was necessary until recently, and even their first implementation of the cores was half-assed and coupled instead of decoupled... long story. Yeah, AMD does dumb shit like that.
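To make the "AI is AI" point concrete: neural-net inference is overwhelmingly matrix multiplies, and tensor cores, AI accelerators, and XMX engines all accelerate that same primitive. A minimal pure-Python sketch of one dense layer (the weights and input are arbitrary made-up numbers, purely for illustration):

```python
# A neural-net layer is output = activation(weights @ input + bias).
# Tensor cores, AI accelerators, and XMX engines are all hardware for
# the matrix multiply at the heart of this, whatever the vendor calls it.

def matmul(A, B):
    """Plain row-by-column matrix multiply on lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def relu(M):
    """Elementwise max(0, x) activation."""
    return [[max(0.0, x) for x in row] for row in M]

# Arbitrary made-up weights and input vector.
W = [[1.0, -2.0],
     [0.5,  0.5]]
x = [[3.0],
     [1.0]]
print(relu(matmul(W, x)))  # one dense layer, bias omitted -> [[1.0], [2.0]]
```

Whichever vendor's matrix units you run this on, it's the same operation; the hardware question is only how fast and in what precision.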
 
AI is AI.

As long as the GPU has matrix cores/accelerators (what Nvidia calls tensor cores, AMD calls AI accelerators, and Intel calls XMX engines), you can do anything AI-related on it. To what degree comes down to what AI models you have developed and trained, and how many of those AI cores you have.

So everything that Nvidia can do with AI... AMD can do too. Whether they can do it as well, or how they go about doing it, is another matter. It's like asking if a GPU that has vertex shaders can do 3D geometry.

Why AMD has been lagging behind in the AI space is that, for some reason, they didn't think having actual matrix cores in their GPUs was necessary until recently, and even their first implementation of the cores was half-assed and coupled instead of decoupled... long story. Yeah, AMD does dumb shit like that.
Will next gen consoles have it?
 
Will next gen consoles have it?
Absolutely. The next-gen consoles are based on RDNA5, and with RDNA5, AMD has properly decoupled its AI accelerators from the compute units and put them in a neural core/array. VALU is also fully implemented. Basically, it's the closest AMD's framework has ever been to a 1:1 match with Nvidia's.

As usual, the PS6 will not be full RDNA5, but don't worry: Sony usually only cuts hyper-PC-centric features from their APUs... basically, stuff a console doesn't need.
 
If the devs/artists say actually this is exactly the level of fidelity and responsive lighting we were going for but couldn't achieve, would the discussion change?
Meanwhile actual devs..

jCwDJxB.png


Environment Artist at Gunfire Games (Darksiders, Remnant).

Path Tracing advancements are what's going to organically overcome tech limitations, while still providing full control over every single ray and bounce intensity when desired.
Which is where DLSS is and will actually be needed for years: pure ML upscaling and ray reconstruction. This is nothing but destructive.
 
Meanwhile actual devs..

jCwDJxB.png


Environment Artist at Gunfire Games (Darksiders, Remnant).

Path Tracing advancements are what's going to organically overcome tech limitations, while still providing full control over every single ray and bounce intensity when desired.
Which is where DLSS is and will actually be needed for years: pure ML upscaling and ray reconstruction. This is nothing but destructive.

Well, technically the released game might not be exactly what they had in mind; it's a balance between the artistic vision and the limitations of the current technology. This is just a tool they can set up to see if the game can reach that last extra mile. Nobody (well, only Japanese devs) cries out when there's a mod that gives Grace big boobs or a bikini; nobody says it's insulting to their "artistic vision" (again, only the Japanese devs regularly complain).
 
I'm primarily a console gamer so maybe I get this in PS7 (lol), but I don't see the problem if devs are able to design games to incorporate this tech downstream of their engine.

Seems like an easy way to spend less time on optimization and more time building a better game.

I go back to my experience with FF7:Rebirth at launch on my PS5. I had to choose between a decent looking game at (very unstable) 30fps, or a dogshit looking game at 60fps. I look forward to a future where I no longer have to choose the performance mode. This tech seems like a quicker path towards that reality.
 
Meanwhile actual devs..

jCwDJxB.png


Environment Artist at Gunfire Games (Darksiders, Remnant).

Path Tracing advancements are what's going to organically overcome tech limitations, while still providing full control over every single ray and bounce intensity when desired.
Which is where DLSS is and will actually be needed for years: pure ML upscaling and ray reconstruction. This is nothing but destructive.

Until he gets his hands on the technology, it's just presumption.
 
This whole debate about which picture looks better (DLSS 5 Off vs On) is giving me flashbacks to the blue & black vs white & gold dress debate from 2015.

I literally cannot understand how the other side perceives it differently from me and vice versa.
 
Well, technically the released game might not be exactly what they had in mind; it's a balance between the artistic vision and the limitations of the current technology. This is just a tool they can set up to see if the game can reach that last extra mile.
A reminder of what the "limitations of the current technology" and that "last extra mile" can look like right now in gaming, the culmination of decades of progress, without any GenAI assist.







But yeah, let's endorse the one thing that would objectively kill gaming as we know it, while standardizing the output of every single studio out there regardless of skills, sensibilities, talent and artistic merits.
 
That's basically because it's running an unoptimized model. Once the AI model is optimized, it will be smaller and can then be handled by just one GPU.
The RTX 5090 is the only card in either company's current lineup that has more than 16GB of VRAM.
I repeat: what would AMD run this on?
 
HDmWImYXMAADv1G


tHe iMaGe On ThEE rIgHt LoOks BEtTer! hurr durrr

Oh for fuck's sake, that comparison is disingenuous by itself; the values of the images don't even match. Most of these images are just image value and contrast pumped up to eleven. You could do this in ReShade with tonemapping if you wanted. It's like going into a TV store and saying the brightest, most saturated TV is the best.

Other than that, there's a fake rim light added to increase pop, and the face filter.
Here's 5 minutes of Photoshop with some basic value changes. Minus the face and the fake, inaccurate lighting, you can do this right now with ReShade, and I don't need a 5090 either.

Untitled-1.png
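For what it's worth, the kind of "value pump" being described is a trivial per-pixel operation. A minimal Python sketch (the gain/lift/midpoint numbers are made up for illustration, not taken from any real ReShade preset):

```python
# Naive brightness/contrast "pump": scale channel values around a
# midpoint (contrast gain), then add a flat lift (brightness).
# The gain/lift/midpoint values here are invented for illustration.

def pump(value, gain=1.3, lift=20, midpoint=128):
    """Apply a crude contrast gain around `midpoint`, then a brightness
    lift, to an 8-bit channel value (0-255), clamping the result."""
    out = (value - midpoint) * gain + midpoint + lift
    return max(0, min(255, round(out)))

# Mid and bright values get boosted hard; highlights clip at 255,
# which is why the "after" image reads as brighter and punchier.
pixels = [30, 90, 128, 180, 230]
print([pump(p) for p in pixels])  # -> [21, 99, 148, 216, 255]
```

Run per pixel per frame, this is exactly the cheap tonemapping-style pass a ReShade preset does; no neural network required.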
 
Oh for fuck's sake, that comparison is disingenuous by itself; the values of the images don't even match. Most of these images are just image value and contrast pumped up to eleven. You could do this in ReShade with tonemapping if you wanted. It's like going into a TV store and saying the brightest, most saturated TV is the best.

Other than that, there's a fake rim light added to increase pop, and the face filter.
Here's 5 minutes of Photoshop with some basic value changes. Minus the face and the fake, inaccurate lighting, you can do this right now with ReShade, and I don't need a 5090 either.

Untitled-1.png
The one on the left looks 2 generations better.

Not even remotely close.
 
Oh for fuck's sake, that comparison is disingenuous by itself; the values of the images don't even match. Most of these images are just image value and contrast pumped up to eleven. You could do this in ReShade with tonemapping if you wanted. It's like going into a TV store and saying the brightest, most saturated TV is the best.

Other than that, there's a fake rim light added to increase pop, and the face filter.
Here's 5 minutes of Photoshop with some basic value changes. Minus the face and the fake, inaccurate lighting, you can do this right now with ReShade, and I don't need a 5090 either.

Untitled-1.png
Those aren't even comparable.
 
The one on the left looks 2 generations better.

Not even remotely close.
Minus the face and the fake rim light, yes, it's pretty close. Do you need me to black out the face to show you? The face itself can be adjusted at the asset level too if you want it to look closer to the left; it's not so much a technical limitation as a decision by the artists.
 
The RTX 5090 is the only card in either company's current lineup that has more than 16GB of VRAM.
I repeat: what would AMD run this on?
And I repeat: that's because it's running an unoptimized model. It's not just a RAM issue; it comes down to what FP variant it's even using. E.g., it's currently doing this on an FP16 model; it would be down to FP4 when it ships. Unless you really believe Nvidia will release a DLSS feature that ONLY a 5090 can run?

And with things like these there are many stopgaps, hybrids, and workarounds. E.g., this neural rendering we're talking about is built on their ray reconstruction model, and as it stands, projections are that at 1440p a GPU with 16-20GB would be OK, with 24-32GB for 4K. Next-gen consoles, e.g. the PS6, will have 30GB. Hell, the neural rendering pass can even be done on the native render before it's AI-upscaled to the output resolution, which would mean they can get away with rendering everything at 1080p... As I said, there are ways around these things.

Sigh... I don't know why people think AI is some magic unobtainium voodoo bullet. It's not.

With RDNA5, AMD has the very AI neural network Nvidia has been using (and by that I mean similar in design and function). What models they have and how they use them is another matter. But to say this is impossible is shortsighted at best... ignorant at worst.
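The FP16-to-FP4 point is just arithmetic: weight storage scales linearly with bits per parameter, so dropping from FP16 to FP4 cuts the weight footprint to a quarter. A back-of-envelope sketch (the 10B parameter count is hypothetical; real model sizes aren't public):

```python
# Back-of-envelope VRAM needed just for model weights at different
# precisions. The parameter count is a made-up example; this ignores
# activations, framebuffers, and all other overhead.

def weights_gb(params, bits_per_param):
    """Weight storage in GB (1 GB = 2**30 bytes)."""
    return params * bits_per_param / 8 / 2**30

params = 10e9  # hypothetical 10B-parameter model
for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"{name}: {weights_gb(params, bits):.1f} GB")
# FP16: 18.6 GB
# FP8: 9.3 GB
# FP4: 4.7 GB
```

Under these made-up numbers, an FP16 model that needs a 24GB+ card shrinks to something a 16GB or even 8GB card could hold at FP4, which is the whole "optimized model" argument.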
 
Oh for fuck's sake, that comparison is disingenuous by itself; the values of the images don't even match. Most of these images are just image value and contrast pumped up to eleven. You could do this in ReShade with tonemapping if you wanted. It's like going into a TV store and saying the brightest, most saturated TV is the best.

Other than that, there's a fake rim light added to increase pop, and the face filter.
Here's 5 minutes of Photoshop with some basic value changes. Minus the face and the fake, inaccurate lighting, you can do this right now with ReShade, and I don't need a 5090 either.

Untitled-1.png
Ok great. Now try doing it 30-120 times per second.
 
Oh for fuck's sake, that comparison is disingenuous by itself; the values of the images don't even match. Most of these images are just image value and contrast pumped up to eleven. You could do this in ReShade with tonemapping if you wanted. It's like going into a TV store and saying the brightest, most saturated TV is the best.

Other than that, there's a fake rim light added to increase pop, and the face filter.
Here's 5 minutes of Photoshop with some basic value changes. Minus the face and the fake, inaccurate lighting, you can do this right now with ReShade, and I don't need a 5090 either.

Untitled-1.png
Lol you can't be serious.
 
I absolutely hate DLSS 5. I will never use it, nor will I ever buy any piece of hardware that supports it - and I will tell all my friends to do the same.
This is such bullshit, really.
 
Oh for fuck's sake, that comparison is disingenuous by itself; the values of the images don't even match. Most of these images are just image value and contrast pumped up to eleven. You could do this in ReShade with tonemapping if you wanted. It's like going into a TV store and saying the brightest, most saturated TV is the best.

Other than that, there's a fake rim light added to increase pop, and the face filter.
Here's 5 minutes of Photoshop with some basic value changes. Minus the face and the fake, inaccurate lighting, you can do this right now with ReShade, and I don't need a 5090 either.

Untitled-1.png
I like how the DLSS Off pics are all super dark, lol.
 