
Nvidia Live at GTC: DLSS 5

It's artistic reality vs. real reality.

Realism is boring. That's not that hard of a concept to grasp, my AI bro.

All this does is make lighting accurate. You can still have any art style you want with accurate lighting, but it'll just tend to be a higher fidelity version of the original vision. It's like how the latest Pixar films use accurate lighting to look better while still maintaining a certain style.

Obviously some graphics styles aren't amenable to higher fidelity and that's fine, but way more of them are than people seem to imagine.
 
Some of these takes are just insanely dumb. IT'S THE SAME MODEL!!!!! She's just accurately lit now! If you don't like the model, fine. But don't hate on CGI-level lighting ffs.

It's adding lighting and texture detail. It's not adding or transforming geometric detail. The wireframes in both images would be exactly the same.
 
It's adding lighting and texture detail. It's not adding or transforming geometric detail. The wireframes in both images would be exactly the same.
What wireframe would the AI be using? It only sees an image. The fact that it's inpainting the same silhouette is part of how it was tuned, but it is 100% unaware of any underlying model detail beyond what is in the framebuffer (2D images). What wireframe did it use when it drew eyes over someone's eyelids? The geometric data would have had the eyes closed; they weren't just textures.
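
To make that point concrete, here's a minimal sketch of the situation (all names here are made up for illustration, not Nvidia's actual API): the pass's entire world is one HxWx3 colour buffer, so there is simply no mesh anywhere in scope for it to consult.

```python
# Hypothetical sketch: an image-space enhancer only ever receives the
# finished 2D frame. "EnhancerNet"/"enhance_frame" are invented names.
import numpy as np

class EnhancerNet:
    """Stand-in for a trained image-to-image model."""
    def __call__(self, rgb: np.ndarray) -> np.ndarray:
        return rgb  # a real model would redraw the pixels here

def enhance_frame(framebuffer: np.ndarray) -> np.ndarray:
    # The only input is an HxWx3 array of colours. No wireframe, no
    # closed-eyelid geometry exists here -- anything "behind" the
    # pixels has to be inferred or invented by the model.
    assert framebuffer.ndim == 3 and framebuffer.shape[2] == 3
    return EnhancerNet()(framebuffer)
```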
 
Will you stop with the "accurate lighting" crap; you don't know what you are talking about. It's REDRAWING EVERYTHING via a LoRA, it is not using the underlying MESH or anything else. It's purely IMAGE based.

So what? All digital imagery amounts to a grid of pixels. It's adding in lighting detail. Nothing more and nothing less. You can say you don't like the result or you don't think the additions are accurate, but that IS what it's doing.
 
What wireframe would the AI be using? It only sees an image. The fact that it's inpainting the same silhouette is part of how it was tuned, but it is 100% unaware of any underlying model detail beyond what is in the framebuffer (2D images).

I know. Again, so what? Which specific results of the process are so poisonous to you?
 
IGN: 11/10
 
Next gen consoles are already unexciting now. It's too late to get this sweet tech and AMD is prob way behind anyway.

Kind of depends how good an RTX card is needed to run it I guess. AMD have already said Project Amethyst is all about neural rendering etc so I wouldn't rule out the new consoles having some compromised version of this kind of thing. It's obviously the future.
 
Amazing what a simple lighting update can do.

Funny thing is most of the comments (all the ones I saw) think this would be awesome and are mad Rockstar didn't deliver something similar in their remaster, just 4 months ago. Now everyone seems to hate it all of a sudden. It's probably just that the haters are really loud.
 
Funny thing is most of the comments (all the ones I saw) think this would be awesome and are mad Rockstar didn't deliver something similar in their remaster, just 4 months ago. Now everyone seems to hate it all of a sudden. It's probably just that the haters are really loud.
Guess there is a big difference between artistic graphic design with some realism added and blind soulless AI elaboration.
 
So working in pixel space is modifying the underlying 3D topology - whether you like it or not, it's not constrained to 'same models', just the general silhouettes.
It's also fundamentally irrelevant - the question is whether 'made-up detail' that is also 'temporally incoherent' because it's in screen space is a 'good thing'.

After 15 years of SSR, SSAO and other similar ... %%%% - the answer to the second part is an emphatic 'NO'.

For the first one though - how people feel about imaginary details will ultimately be a matter of personal preference - there's no objective 'better' there - but by the same measure 'worse' is also debatable. We've been using made-up detail in games with noise textures and other things for a long while, so it's not entirely black & white - but personally I index on stable detail references, so I would go with 'I don't like it' most of the time.
But if NVidia uses 50% of the GPU for this in the future I guess I'll just use it to mine coin or something while playing games instead.
 
Last edited:
Guess there is a big difference between artistic graphic design with some realism added and blind soulless AI elaboration.

Vice City looks like trash. The look of the game was based on limited hardware. The A.I. retained the colors and the bad animations lol
 
The demo ran on 2x 5090. In the labs they already have a new version that runs on a single 5090. But this obviously has a huge FPS cost, as you are reprocessing every single frame.

I expect this to come out for the 5090 as more of a novelty, like ray tracing was for the 2000-series cards, but the really performant version of it will probably be on the 6000 or even 7000 series.
Yeah it's a showcase tech of what might come in the future, if you choose Nvidia.
I imagine it's sub 30fps on a single 5090. At that point it's inaccessible for 99% of PC gamers and all console gamers and it's easy to understand why people aren't excited about this, yet.
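
The per-frame cost hasn't been published, but the arithmetic behind "huge FPS cost" is easy to illustrate: a fixed post-process cost per frame hurts high framerates disproportionately. The 20 ms figure below is purely a hypothetical number for illustration.

```python
# Illustrative only: effective fps when a fixed per-frame model pass is added.
def fps_with_overhead(base_fps: float, overhead_ms: float) -> float:
    frame_ms = 1000.0 / base_fps + overhead_ms
    return 1000.0 / frame_ms

for base in (120, 60, 30):
    # assuming a hypothetical 20 ms model pass per frame
    print(base, "->", round(fps_with_overhead(base, 20.0), 1))
# 120 -> 35.3, 60 -> 27.3, 30 -> 18.8
```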
 
Overall, I think the DLSS On pics are much better than the Off pics.

Ya, some look overdone, like the Hogwarts old lady, but most pics look much better. Yes, some of the Before pics look so bad they're last-gen looking.
This whole thing feels like the motion-smoothing debate for modern televisions. Some people swear motion smoothing makes content look better, some don't have a preference, some don't even notice the difference, and some just stand in terrible wonder as they try to grapple with the fact that such an instantly distasteful image is getting any positive attention.

I'm in the latter camp here. I can't comprehend the lack of universal derision for dlss5. The images trigger some sort of instinctual level of disgust.

Honestly, I am at a loss. I don't think there's ever going to be a consensus on this dlss5. The place we're approaching it from is too primal and you either see the problem or you don't.
 
So working in pixel space is modifying the underlying 3D topology - whether you like it or not, it's not constrained to 'same models', just the general silhouettes.
It's also fundamentally irrelevant - the question is whether 'made-up detail' that is also 'temporally incoherent' because it's in screen space is a 'good thing'.

After 15 years of SSR, SSAO and other similar ... %%%% - the answer to the second part is an emphatic 'NO'.

For the first one though - how people feel about imaginary details will ultimately be a matter of personal preference - there's no objective 'better' there - but by the same measure 'worse' is also debatable. We've been using made-up detail in games with noise textures and other things for a long while, so it's not entirely black & white - but personally I index on stable detail references, so I would go with 'I don't like it' most of the time.
But if NVidia uses 50% of the GPU for this in the future I guess I'll just use it to mine coin or something while playing games instead.

It's only "imaginary" detail in the sense that ALL computed detail - even path tracing - is "imaginary".

It's just using some given algorithm to calculate the most accurate pixel colour given a series of inputs. I think it looks way more realistic even than currently available path tracing. Maybe you don't. I think you'll find that in time you'll be in a tiny minority. Cheers
 
This whole thing feels like the motion-smoothing debate for modern televisions. Some people swear motion smoothing makes content look better, some don't have a preference, some don't even notice the difference, and some just stand in terrible wonder as they try to grapple with the fact that such an instantly distasteful image is getting any positive attention.

I'm in the latter camp here. I can't comprehend the lack of universal derision for dlss5. The images trigger some sort of instinctual level of disgust.

Honestly, I am at a loss. I don't think there's ever going to be a consensus on this dlss5. The place we're approaching it from is too primal and you either see the problem or you don't.
I'm with you. Just the thumbnail, the first time I saw it, gave me that instinctual level of disgust. It's very similar to the bullshit of motion smoothing and vivid mode in that way, yes: I'll just not watch anything rather than have to use that garbage, and I don't understand the mind of anyone who could ever like it or rationalize why it's good.
 
What is bad? The tech? Or how it's being used? Because those are two totally different things.
Well, the biggest problem for now is definitely how it's being used. It's like a free 'get out of jail' card, and Starfield is a great example of that (another would be Mass Effect Andromeda ;) ). As for the tech itself - for now I'm not sure; I would need more details and to see how it works and whether it can be used in a good way. Still, it is a bit like cheating (the whole lighting part), and this is something that needs to be examined. Maybe it can be used in a good way (but then it would be less flashy for marketing).

Becomes older... lol. That's not because the lighting changed, that's because for whatever reason devs suck at making good pre-teen or teen models. This has been going on for decades.

I know something about that stuff ;) . I have some experience working on games, and especially on faces. Making a good teen or child is hard on many levels, and skill is usually the least of the problems. Sometimes it's just that when you see a young human in a game it looks bad (especially to management) and you need to change that. Sometimes it is more about the tech and tweaking it (but the question is whether you have time for that, because children aren't usually main characters or even secondary ones). It can even be an artistic decision.

This new tech doesn't make them look older, it makes certain things obvious. Like take that Hogwarts screenshot going around of the guy. Be honest, look at the DLSS-off image... does that really look like a 15-year-old to you?
It makes those subtle things (like wrinkles or skin imperfections) exaggerated, and that's the problem. In the original shots it is fine and consistent. I can't speak to the age (US teens look more adult than in my country ;) ), but it looks somewhat believable/acceptable even if it's not actually a 15-year-old boy. It looks like a high-pass filter being used to make something look more detailed. There were lots of pictures like that on the web, tagged as 'HDR'.
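
For reference, that "high-pass / fake HDR" look is essentially what a classic unsharp mask does. A toy version (purely illustrative, not what DLSS actually computes) shows why pores and wrinkles specifically get exaggerated:

```python
# Classic unsharp mask: boost the high-frequency residual of the image.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img: np.ndarray, sigma: float = 2.0, amount: float = 1.5) -> np.ndarray:
    # img: HxWx3 float image in [0, 1]
    blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
    high_pass = img - blurred        # keeps pores, wrinkles, micro-contrast
    out = img + amount * high_pass   # exaggerates exactly those details
    return np.clip(out, 0.0, 1.0)
```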

What I can say for sure is that it changed the character of the face. Shadows are deeper, and that is what defines shapes. It even looks glossier. The thing is, you could try to tweak many things to match this - lower the roughness or raise the gloss, bump up the specular a bit, maybe make the normal map for pores more aggressive, or use a cavity mask to make them more noticeable (or it might also be subsurface scattering being too aggressive for the normal maps). Still, I bet it would look different, because it changed some characteristics of the face. I can't find any high-poly screens of this face to be 100% sure. That information isn't in the mesh; maybe it's in the normal map, but I doubt it.

However, like I said for DLSS back in 2019, and for PSSR... what I will say is that whatever shortcomings there are... will get better in time. That's kinda how AI works.
That is true.

Ok now this is bullshit. First off, RT (or more specifically, lighting) is the absolute holy grail, the most important thing in graphics rendering. It's the single most valuable asset to how a game actually looks. And any new tech... has a cost.
Yes, it is very important and makes everything look more real. And RT has a big cost to it, PT even bigger. Sure, thanks to Nvidia it is possible in real time to some extent, but you still need to make some exceptions, simplifications, sometimes degradations. I do like RT and PT, and I would like to reach the point where every single GPU can handle them with good results, so that we don't need fallbacks to old methods (one of the proclaimed pros of RT was that lighting a game would become easier and less time-consuming - for now it is the opposite). But we aren't at that point yet. This is only my opinion - RT was pushed out too early, but that is what Nvidia needed at the time to grow bigger and stronger in the market. Now it's the same with AI, although they don't need gaming for that any more (except maybe for making people feel better about AI when it works). Sure, it works, but maybe the cost is actually too big. But it was already decided, and people got used to it/expect it even though it has some negative effects (like Game Pass ;) ).

Do you think when we went from sprites to 3D it didn't come at a cost? Do you think when we went from 480p to 4K it didn't come at a cost? When we went from forward rendering to deferred, it didn't come with a cost? There has always been a cost associated with innovation, and that is always followed by innovative ways to alleviate that cost.
Yes, I'm not saying you are wrong with this thinking, but then the question is, when is the cost too much? Maybe you should slow down (I know corporations are afraid of those words ;) ). For sure the AI market should stop (but it won't) and rethink the whole approach - even some of the people who helped develop AI are saying that. It is driven by corporate greed and not by common sense. And this is what has a big influence on our world. I feel like the tech upgrades (like GPU power) have become small, and now AI has become the easy solution for growth. Maybe I am wrong and this is the right way, but it feels wrong in a way. But as I said, this is my opinion and I can't force anyone to share it.

And your takes are actually questionable. Gaming has ALWAYS been about smoke and mirrors. It's like using sprites to look like smoke or grass rather than actually rendering grass or volumetric fog. Or culling polygons, or using LODs... reconstruction is no different. It's a better way to utilize hardware: why spend 100% of your resources vs 30% for a less than 5% visual gain?
I understand what you mean. Yes, we have been using tricks from the start; even normal maps are just that. But every trick has its weaknesses. I used to think of reconstruction as this magic thing that can turn a 1080p game into 4K. When I was reading opinions or watching videos about the subject, it looked that way. Then I started to play those games (even working in-engine showed me that DLSS can damage simple things like navigation and UI), and I noticed that it's not that great. RE9 and FFVII Rebirth are good examples of games that are touted as great on that front, but I see many downsides. And sure, with RE9 there is a problem with the denoiser (so RT), but even without it there are issues. And I'm not looking for them on purpose; they are just there, saying hello and waving ;) . So I'm not sure it's really a better way if it makes the image less stable. But to be fair, there were many screen-space effects before that, and they also made the image less enjoyable, especially in motion.

I love AI... I do not like how it's used sometimes. But that's the issue here. There is a difference in that.
Agreed. Maybe not about loving AI, but I know there is potential in it. I just don't think that what DLSS5 shows is what I want from it. Anyway, the future will tell. I only hope that the people who are also talking about the problems will be numerous enough to force the developers/engineers at Nvidia to make it good. For now, what they showed looked bad.
 
Dude, at 2pm yesterday you were literally singing DLSS5's tune. Wtf?
Where the fuck did this 180 happen?
Just because of comments online?
Like what?
Check the graphics thread. By the time I finished the video I was able to see the cracks in the tech and the implementation.

I still like the tech though. It's definitely one of the coolest, most mind-blowing things I've ever seen.
 
I'm with you. Just the thumbnail, the first time I saw it, gave me that instinctual level of disgust. It's very similar to the bullshit of motion smoothing and vivid mode in that way, yes: I'll just not watch anything rather than have to use that garbage, and I don't understand the mind of anyone who could ever like it or rationalize why it's good.

Yeah this explains why your takes are so bad on this. You see it and it reminds you of AI generated porno girls and that's that: urggh gross no good take it away.

You just refuse to engage your brain and think about it rationally.

It radically enhances lighting quality (or the appearance of it in a digital image, whatever). This is good, but has certain implications because it's going to radically change how some assets present. So the effect might be more or less pleasing depending on preference. But the tech is still fantastic.
 
It's only "imaginary" detail in the sense that ALL computed detail
No - it's imaginary the same way a human painter looks at an environment and paints something out of their own imagination that may or may not resemble it.

It's just using some given algorithm to calculate the most accurate pixel colour given a series of inputs.
That's not what it's doing at all. We've been working with machine models learning real-world simulations and approximating them for a decade+ in CG/simulations, and none of those are what I'd describe as 'algorithms'. They also approximate by design - ground-truth math is precisely what we can't afford to compute.
Besides, even if this was ground-truth math based, it'd still only be a vague approximation, because it's operating on incomplete data (colors in the framebuffer) - the same reason all the screen-space raytracers are so limited. AI just helps fill in those data gaps with its own reasoning/imagining, i.e. it's not accurate by definition.
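
The screen-space data gap being described is easy to show in miniature (a sketch of the general idea, not any shipping implementation): any lookup that lands outside the framebuffer simply has no data. Classic SSR has to fade out there; an AI pass instead fills the hole with a guess.

```python
# Minimal illustration of the screen-space limitation: off-screen
# samples have no ground truth to be accurate *to*.
import numpy as np

def screen_space_fetch(color: np.ndarray, x: int, y: int):
    # color: HxWx3 framebuffer; (x, y) a sample position in pixels
    h, w, _ = color.shape
    if 0 <= x < w and 0 <= y < h:
        return color[y, x]   # on-screen: real data exists
    return None              # off-screen: nothing but guesswork
```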

But again - none of that is actually at the heart of what is being debated - as you yourself say:
I think it looks way more realistic even than currently available path tracing.
All of the above has zero relevance to how you find the results, so enjoy it if so.
 
Yeah this explains why your takes are so bad on this. You see it and it reminds you of AI generated porno girls and that's that: urggh gross no good take it away.

You just refuse to engage your brain and think about it rationally.

It radically enhances lighting quality (or the appearance of it in a digital image, whatever). This is good, but has certain implications because it's going to radically change how some assets present. So the effect might be more or less pleasing depending on preference. But the tech is still fantastic.
Bro. It looks like shit. The majority of people think it looks like shit. You like it, great. I'm begging you, give it a rest lol.
 
No - it's imaginary the same way a human painter looks at an environment and paints something out of their own imagination that may or may not resemble it.


That's not what it's doing at all. We've been working with machine models learning real-world simulations and approximating them for a decade+ in CG/simulations, and none of those are what I'd describe as 'algorithms'. They also approximate by design - ground-truth math is precisely what we can't afford to compute.
Besides, even if this was ground-truth math based, it'd still only be a vague approximation, because it's operating on incomplete data (colors in the framebuffer) - the same reason all the screen-space raytracers are so limited. AI just helps fill in those data gaps with its own reasoning/imagining, i.e. it's not accurate by definition.

But again - none of that is actually at the heart of what is being debated - as you yourself say:

All of the above has zero relevance to how you find the results, so enjoy it if so.

I think my inference is basically accurate: this is to lighting detail as DLSS 2-4 is to resolution. Right?

It's using inputs to predict a more accurately lit image. I think that's a totally fair lay description of the tech, isn't it? Or rather, the intention of the tech.

Which is to say that it's trained sufficiently well that the changes it applies don't appear to change geometry, or particles etc. Just detail that would be there if lighting were accurate. I mean, I think that's a reasonable approximation of what this does, is it not?

And then the question is simply - as agreed - whether those changes present an image which looks more accurately lit. Which I honestly think is hardly even in doubt. It just obviously does. But each to his own.
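
If that mental model is right, the implied training setup would look like super-resolution's, just with lighting quality as the target. A hedged sketch, under that assumption (the network, the image pairs, and the L1 objective here are all my guesses; Nvidia hasn't published the actual recipe):

```python
# Sketch: learn a mapping from cheaply lit frames to high-quality
# references, analogous to low-res -> high-res in DLSS upscaling.
import torch
import torch.nn as nn

model = nn.Sequential(                 # stand-in for the real network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(raster_frame: torch.Tensor, path_traced_ref: torch.Tensor) -> float:
    # raster_frame, path_traced_ref: (N, 3, H, W) paired images
    opt.zero_grad()
    pred = model(raster_frame)             # the "relit" prediction
    loss = loss_fn(pred, path_traced_ref)  # penalise deviation from reference
    loss.backward()
    opt.step()
    return loss.item()
```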
 
Turning a "realistic" game into a cartoon would have turned out much better than what they showed.

The reaction could have been different.
 
I think my inference is basically accurate: this is to lighting detail as DLSS 2-4 is to resolution. Right?

It's using inputs to predict a more accurately lit image. I think that's a totally fair lay description of the tech, isn't it? Or rather, the intention of the tech.

Which is to say that it's trained sufficiently well that the changes it applies don't appear to change geometry, or particles etc. Just detail that would be there if lighting were accurate. I mean, I think that's a reasonable approximation of what this does, is it not?

And then the question is simply - as agreed - whether those changes present an image which looks more accurately lit. Which I honestly think is hardly even in doubt. It just obviously does. But each to his own.
How can it know how a scene should be lit if it only has access to the light sources that are on the screen? If the sun is behind you, how does it know how to put shadows on the objects in front of you?
 
Good goals for videogame manufacturers:

- lessen the input lag to a minimum
- make the image as readable as possible
- make the framerate as stable as possible
- make the loading times as low as possible
- make the directional sound as understandable as possible
 
How can it know how a scene should be lit if it only has access to the light sources that are on the screen? If the sun is behind you, how does it know how to put shadows on the objects in front of you?

Surely this is possible from inference right? I dunno, I'm not a graphics programmer OR an AI engineer. But if this thing is trained to pick out lighting detail in an image (seems reasonable) then it can presumably construct some model of where light sources are. Like a human can, basically. If I see shadows in certain places I can infer where the light source must be, even if it's not in my field of view.

Doesn't really seem that mind blowing, prima facie.
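
That kind of inference is a known problem, at least in simplified form. One classic version (illustrative only, and assuming per-pixel normals are available, e.g. from a G-buffer; a pure-image model would have to estimate even those): under a Lambertian assumption, brightness is roughly max(0, n·l), so a dominant light direction l can be least-squares fitted from shading alone, even when the light itself is off-screen.

```python
# Toy light-direction estimation from shading, Lambertian assumption.
import numpy as np

def estimate_light_direction(normals: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    # normals: (P, 3) unit surface normals; intensity: (P,) brightness
    lit = intensity > 0.05  # skip shadowed pixels (the max(0, .) clamp)
    l, *_ = np.linalg.lstsq(normals[lit], intensity[lit], rcond=None)
    return l / np.linalg.norm(l)  # unit vector toward the light
```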
 
Unless my math is garbage, 50% say they don't like DLSS5, 10% have no opinion on it, and only 40% say they like it. I haven't seen a single place where the majority of the feedback is positive.

Yeah but what you're forgetting is that the majority of people everywhere are REALLY DUMB.
 
I think my inference is basically accurate: this is to lighting detail as DLSS 2-4 is to resolution. Right?

It's using inputs to predict a more accurately lit image. I think that's a totally fair lay description of the tech, isn't it? Or rather, the intention of the tech.

Which is to say that it's trained sufficiently well that the changes it applies don't appear to change geometry, or particles etc. Just detail that would be there if lighting were accurate. I mean, I think that's a reasonable approximation of what this does, is it not?

And then the question is simply - as agreed - whether those changes present an image which looks more accurately lit. Which I honestly think is hardly even in doubt. It just obviously does. But each to his own.
I don't think so; it does a lot more than lighting, and the other stuff seems improved (albeit overdone, nothing that can't be fixed). I don't hate those; I'm kind of on board there.

Lighting specifically though looks wrong, imprecise, random even. At best it might be an improvement vs no RT (or shit RT) in most cases. It might look prettier, but will it be correct? I believe it won't. I hope we can pick and choose what we want changed, because I'd like to skip the lighting changes completely.
 
Next gen consoles are already unexciting now. It's too late to get this sweet tech and AMD is prob way behind anyway.
Yeah. DLSS5 is going to widen the gap between console and PC. AMD will be lucky to bring out their own version by the time next-gen consoles release.

Sounds like it's already a VRAM hog if it needs two 5090s in preview. That's 64GB of VRAM capacity needed right now. I'm sure Nvidia will eventually get it running on one 5090, but I think cards with less than 16GB might struggle. I thought DLSS5 would be exclusive to next-gen cards, especially after us just getting DLSS4.5. Maybe they've decided to bring DLSS5 forward for the 5000 series, and then DLSS6 will be exclusive to next-gen cards.

Next-gen consoles are going to need a load of RAM, and that shit is expensive as hell right now, as we all know. I'd say next-gen consoles will need 48-64GB of RAM if they want to keep up with PC.
 