
Nvidia Live at GTC: DLSS 5

"Painting has been held back for years with challenges around lighting and making images photorealistic. As you can see, DLSS5 tremendously benefits all oil paintings, including all-time classics."
Western painters vs Japanese painters.
 
Hey, that's me told. Great argument, bro.
It's gen-AI bullshit, it's not magic.

From their own press release:

"Twenty-five years after NVIDIA invented the programmable shader, we are reinventing computer graphics once again," said Jensen Huang, founder and CEO of NVIDIA. "DLSS 5 is the GPT moment for graphics — blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression."

 
I think the Starfield NPCs look garbage, because their facial animations barely look any better than Oblivion's. Not the two NPCs from the intro, though; those look decent. I suspect they put more effort into those animations. The random NPCs will look like an HD texture mod.
This whole face filter stuff hinges on the facial animations. For now.
 
It's gen-AI bullshit, it's not magic.

From their own press release:

"Twenty-five years after NVIDIA invented the programmable shader, we are reinventing computer graphics once again," said Jensen Huang, founder and CEO of NVIDIA. "DLSS 5 is the GPT moment for graphics — blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression."


Yes it's generative AI. Is that some kind of terrifying concept to you or something? Basically it uses probability to fill in missing lighting detail given some source image. The entire point is that it's generative AI ffs.
 
Ok, let's try and be honest here...
This is so bad. It literally looks like all those thumbnails for "Consoles vs PC" videos, where the PC image is generated for bigger contrast as clickbait. Now we can have this in games, great!
What is bad? The tech? Or how it's being used? Because those are two totally different things.
It looks bad, it makes everything look the same (I mean game by game), and it changes the look because it's exaggerating everything (everyone becomes older). And it breaks the photorealistic feel when the animation starts, because it is still an in-game animation. But they will probably 'fix' even this soon (though that will be harder to sell as 'game assets').
Becomes older... lol. That's not because the lighting changed; that's because, for whatever reason, devs suck at making good pre-teen or teen models. This has been going on for decades. This new tech doesn't make them look older, it makes certain things obvious. Take that Hogwarts screenshot of the guy that's going around. Be honest, look at the DLSS-off image... does that really look like a 15-year-old to you?

However, like I said for DLSS back in 2019, and for PSSR... what I will say is that whatever shortcomings there are will get better in time. That's kinda how AI works.
I don't like the direction in which corporations are pushing mankind. Nvidia is forcing everything to sell its AI dominance. They forced RT (which is one of the holy grails of graphics, but it was, and actually still is, too early for it). They are forcing PT (which is even more problematic for hardware). One of the reasons we need upscaling/reconstruction from lower resolutions is that they are pushing too hard. So now we need AI to do that. And even though DLSS and FSR (now even PSSR) are becoming good at it, it still degrades the image. I was playing RE9 on PS5 Pro and the image has many instabilities, noise, and strange behaviors. Pragmata's demo on PC (5090) also had issues. Now Nvidia will be forcing more 'realistic' characters.
Ok, now this is bullshit. First off, RT (or more specifically, lighting) is the absolute holy grail, the most important thing in graphics rendering. It's the single most valuable factor in how a game actually looks. And any new tech has a cost.

Do you think when we went from sprites to 3D it didn't come at a cost? Do you think when we went from 480p to 4K it didn't come at a cost? When we went from forward rendering to deferred, it didn't come with a cost? There has always been a cost associated with innovation, and then that is always followed by innovative ways to alleviate that cost.

And your takes are actually questionable. Gaming has ALWAYS been about smoke and mirrors. It's like using sprites to look like smoke or grass rather than actually rendering grass or volumetric fog. Or culling polygons, or using LODs... reconstruction is no different. It's a better way to utilize hardware: why spend 100% of your resources vs 30% for a less than 5% visual gain?
I see that at least some influencers are seeing the problems (at least for now ;) ). On the other hand, seeing other people cheering for it, it is already over, like the whole AI market. I don't think that betting everything on AI (for both hardware and rendering development) will end well. Just like Game Pass wasn't good for Xbox, but some people were thrilled about it.
I love AI... I just don't like how it's used sometimes. But that's the issue here. There is a difference.
 
This thread blew up, no? This stuff is optional and for the relatively few people that can afford to run it, like PT. I don't think games will suffer in any way whatsoever, yet the fear that they will seems the only plausible explanation for the amount of salt ITT. Why does anyone else care if I like it and will want to try/use it? You'll be fine on PS5 anyways, 30fps and all.

Looking at it again, it seems like an RTX Remix mod in real time, more or less; it just perhaps needs some tune-ups. I mean, if anything is improving today, it's AI tech. So bring it on.

We're not getting back to pure raster anytime soon, if ever; AI will be shoved in regardless.
 
It's gen-AI bullshit, it's not magic.

From their own press release:

"Twenty-five years after NVIDIA invented the programmable shader, we are reinventing computer graphics once again," said Jensen Huang, founder and CEO of NVIDIA. "DLSS 5 is the GPT moment for graphics — blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression."


But if you reduce everything down to its base level, you remove the nuance. PSSR, FSR and DLSS are generative AI. They generate pixels.
 
This is so bad. It literally looks like all those thumbnails for "Consoles vs PC" videos, where the PC image is generated for bigger contrast as clickbait. Now we can have this in games, great!

It looks bad, it makes everything look the same (I mean game by game), and it changes the look because it's exaggerating everything (everyone becomes older). And it breaks the photorealistic feel when the animation starts, because it is still an in-game animation. But they will probably 'fix' even this soon (though that will be harder to sell as 'game assets').

It was ok-ish when it was only for those funny pictures, but now…

And you are really cheering for that?

I don't like the direction in which corporations are pushing mankind. Nvidia is forcing everything to sell its AI dominance. They forced RT (which is one of the holy grails of graphics, but it was, and actually still is, too early for it). They are forcing PT (which is even more problematic for hardware). One of the reasons we need upscaling/reconstruction from lower resolutions is that they are pushing too hard. So now we need AI to do that. And even though DLSS and FSR (now even PSSR) are becoming good at it, it still degrades the image. I was playing RE9 on PS5 Pro and the image has many instabilities, noise, and strange behaviors. Pragmata's demo on PC (5090) also had issues. Now Nvidia will be forcing more 'realistic' characters.

I see that at least some influencers are seeing the problems (at least for now ;) ). On the other hand, seeing other people cheering for it, it is already over, like the whole AI market. I don't think that betting everything on AI (for both hardware and rendering development) will end well. Just like Game Pass wasn't good for Xbox, but some people were thrilled about it.

So yeah, no to this deepfake, soft generative slop. And if characters in Starfield look bad, blame the developers and don't give them an easy/lazy/cheap 'fix'.

Let's smash up the tractors everyone! They're stealing our fantastic ploughing jobs!
 
The tech is absolutely incredible. Just because the face doesn't look exactly like the source material, it doesn't mean you should throw the baby out with the bathwater. You get real-life photorealism at relatively little cost. The technology is mind-blowing.

[image]

It's funny how this completely destroys the original expression on the character's face.
 
Yes it's generative AI. Is that some kind of terrifying concept to you or something? Basically it uses probability to fill in missing lighting detail given some source image. The entire point is that it's generative AI ffs.
I think you should go back and read what you posted earlier, magic man, about it NOT being a filter, and the difference in appearance being due to the Huang'er's lighting from the future.
 
I find it hilarious that we've had all these advances in GPU hardware to run ray tracing and try to simulate lighting accurately, only to have DLSS 5 AI garbage destroy the accuracy and directionality of the lighting and make it look like an Instagram filter.
 
Why are so many worried here? It's not like PS6 and Xbox next will be using this. Well, not Nvidia's version.
The earliest it shows up in console space would be a PS6 Pro, so maybe by 2032, and even then it will be super niche anyways, kinda like the PS5 Pro is now.
That tech needs a substantial VRAM pool, 'nuff said 😲

Actual wide-range console adoption won't happen earlier than 2036, aka the PS7 launch, and by then who can really predict how it's gonna look or how much of a performance hit will be needed for it to work properly.
 
But if you reduce everything down to its base level, you remove the nuance. PSSR, FSR and DLSS are generative AI. They generate pixels.

There have been huge threads discussing games that use generative AI. Most people argue that these games should be clearly labeled and should be disqualified from GOTY consideration. Many companies have already faced significant backlash for using this technology.

Why should the criticism stop at Nvidia? If we are going to push back against generative AI, every company using it should be held to the same standard.


Anyway, I think this is gonna be standard by next generation even if I don't like it, so the fight is probably over before it even started.
 
The earliest it shows up in console space would be a PS6 Pro, so maybe by 2032, and even then it will be super niche anyways, kinda like the PS5 Pro is now.
That tech needs a substantial VRAM pool, 'nuff said 😲

Actual wide-range console adoption won't happen earlier than 2036, aka the PS7 launch, and by then who can really predict how it's gonna look or how much of a performance hit will be needed for it to work properly.

Isn't Sony working on another AI tool which compresses textures and such to a big extent?

I think they will have something similar ready for PS6 (less advanced though).
 
I think you should go back and read what you posted earlier, magic man, about it NOT being a filter, and the difference in appearance being due to the Huang'er's lighting from the future.

It's not a filter!! Is DLSS 4 a "filter"? It's just taking inputs with missing detail and using AI to correctly generate and insert what's missing!
 
Why are so many worried here? It's not like PS6 and Xbox next will be using this. Well, not Nvidia's version.
I'm actually praying that they do... If they have some version of this tech running, then it can do wonders for lighting. I'm not worried about much else, because I know devs will figure out ways to use this the right way.

I'm more interested in what the tech would allow.

Before this thing was announced, I had always wondered: if AI can be used to make a 1080p image look like a 4K image, even seemingly adding detail into the image... why couldn't it be used for lighting, so we won't have to worry about things like RT as much as we do?

Thankfully I wasn't the only one thinking about that.
 
Isn't Sony working on another AI tool which compresses textures and such to a big extent?

I think they will have something similar ready for PS6 (less advanced though).
I think the max we can expect is proper AI upscaling and RT capabilities this time, maybe including coding-to-the-metal in exclusive games at 5080 grade, a bit less in multiplats.
 
If this AI filter needs two RTX 5090s to run, then just use path tracing to get a better and more accurate lighting result 😂
 
The second picture loses all the art for the sake of realism.

Thing is, realism is really, really boring. Even MSFS and GT7 strive for a more stylized and polished level of 'realism' for that reason.
[images]
Comparing pictures of concept art on a sunny day to a real-life winter shot is some next-level cope 😂
 
I think the max we can expect is proper AI upscaling and RT capabilities this time, maybe including coding-to-the-metal in exclusive games at 5080 grade, a bit less in multiplats.

Imo Sony and Microsoft will go full AI mode next gen. It's the only way a generational leap is possible. It's not gonna be as good as DLSS 5.0, but I think we will head in the same direction already.
 
No, NTC is just that: a compression method that upscales smaller textures used at the various layers of scene generation (albedo, normal maps, roughness maps, etc.) and returns a higher-quality texture using an NN upsampler trained on similar data across thousands of games. That is not what is happening here. This is using the output framebuffer, after all those processes, to generate an image from a pre-trained model, with additional scene data provided, such as motion vectors and lighting directions. Some of these parameters will be per pixel (motion vectors), some will be nearly prompt-like, such as a simple vector direction for lighting, but ultimately they work on the composited output scene, after shading and post-processing have run, to paint into it. It's not part of the graphics pipeline in a traditional sense; it's a post-process. I imagine they can also create stencils and material maps which can be used, per pixel, to flag areas that require different treatments, or are even skipped altogether, but ultimately... it's an image-to-image LoRA.
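The dataflow described above (finished framebuffer in, conditioned on per-pixel and prompt-like scene data, enhanced image out) can be sketched roughly like this. This is a minimal illustration of the idea, not NVIDIA's actual API: the function name, buffer layout, and the stand-in "model" are all hypothetical.

```python
import numpy as np

def enhance_frame(framebuffer, motion_vectors, light_dir, model):
    """Hypothetical post-process pass: the renderer's composited frame plus
    auxiliary conditioning data is handed to an image-to-image model.

    framebuffer:    (H, W, 3) float RGB, after shading and post-processing
    motion_vectors: (H, W, 2) per-pixel screen-space motion
    light_dir:      (3,) global light direction, a prompt-like scalar input
    model:          callable taking the (H, W, C) conditioning stack
    """
    h, w, _ = framebuffer.shape
    # Broadcast the global light direction to a per-pixel plane so every
    # conditioning signal lives in one (H, W, C) stack.
    light_plane = np.broadcast_to(light_dir, (h, w, 3))
    conditioning = np.concatenate(
        [framebuffer, motion_vectors, light_plane], axis=-1
    )
    # The trained network would paint into the frame here; crucially it only
    # ever sees these pixel planes, never the scene geometry or materials.
    return model(conditioning)

# Stand-in for the trained network: returns the RGB channels unchanged.
identity_model = lambda cond: cond[..., :3]

frame = np.random.rand(4, 4, 3).astype(np.float32)
mv = np.zeros((4, 4, 2), dtype=np.float32)
out = enhance_frame(frame, mv, np.array([0.0, -1.0, 0.0]), identity_model)
```

The key design point the post makes is visible here: the pass runs after the graphics pipeline has finished, so the model operates purely on flattened per-pixel data rather than on geometry.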

I think this works alongside NTC. So you already have the developer's choice of highest-quality textures scanned into the AI file. Nvidia can use that scanned data and infer the lighting conditions around the objects.

Because it makes no sense to use NTC and then apply another skin filter on top of it.
 
I think this works alongside NTC. So you already have the developer's choice of highest-quality textures scanned into the AI file. Nvidia can use that scanned data and infer the lighting conditions around the objects.

Because it makes no sense to use NTC and then apply another skin filter on top of it.
This has nothing to do with "skinning". It's a post-processing step performed on the generated output. The input frame would already have all the NTC data encoded in it, i.e. the "before" image. While a G-buffer will have material data and other data encoded in it, "textures" are just colours, and can be sampled arbitrarily. The neural net doesn't know where a colour came from.
 
Some of these takes are just insanely dumb. IT'S THE SAME MODEL!!!!! She's just accurately lit now! If you don't like the model, fine. But don't hate on CGI-level lighting, ffs.
No, the "model" isn't used by this method. There is no "model" or geometry seen by the AI. It's inpainting, that's all. I'm not sure what you think the AI is doing....
 