
Nvidia Live at GTC: DLSS 5

I don't understand the meltdowns. This is technology in its infancy that will obviously get better as DLSS did. It's going to be optional as DLSS is. It's going to be niche and super expensive. What am I supposed to be mad about?
 
What I would really like to see with this is gibs. Something like Dead Island 2… how would it handle the gibs, the dismemberment, the layers of flesh and bone, etc. Wow that sounds pretty morbid when typed out, but you know what I mean?
 
I like how the DLSS Off pics are all super dark. lol
It's fucking disingenuous as shit lol. But it's not even the DLSS off pics being the issue, it's the DLSS on pics that are "wrong". Nvidia just pumped up the contrast, and that makes a HUGE difference in getting something to "pop". Sure, there are other differences, but if you eliminate the color adjustments, it's a lot closer than it seems.
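To see how much a bare levels tweak can change an image, here's a toy sketch of a linear contrast boost around mid-grey. This assumes nothing about Nvidia's actual processing; the function name and values are purely illustrative.

```python
# Toy sketch: linear contrast boost around a mid-grey pivot.
# Shows how a simple levels adjustment alone makes an image "pop":
# darks get darker, brights get brighter. Illustrative only.

def boost_contrast(pixels, factor=1.3, pivot=128):
    """Scale each 8-bit value away from the pivot, then clamp to 0..255."""
    out = []
    for p in pixels:
        v = pivot + (p - pivot) * factor
        out.append(max(0, min(255, round(v))))
    return out

row = [40, 90, 128, 170, 220]
print(boost_contrast(row))  # spread widens around 128
```

Undoing an adjustment like this before comparing screenshots is exactly the "eliminate the color adjustments" step described above.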
 
I absolutely hate DLSS 5. I will never use it, nor will I ever buy any piece of hardware that supports it - and I will tell all my friends to do the same.
This is such bullshit, really.
 
Reminder of what the "limitations of the current technology" and the "last extra mile" can look like right now in gaming, the culmination of decades of progress, without the assist of GenAI rape.







But yeah, let's endorse the one thing that would objectively kill gaming as we know it, while standardizing the output of every single studio out there regardless of skills, sensibilities, talent and artistic merits.


Yeah, path tracing is the true future of visual fidelity. Most exciting breakthrough in years, and makes games genuinely appear a generation ahead.
 
They aren't even close.
What's different, other than what I already acknowledged in my original post? Yes, the colors don't match exactly; I'm not a professional color grader, and getting a 100% match is a pointless effort anyway in proving the point.

Are you saying my edit is FURTHER apart than the original DLSS OFF image that I quoted? If the answer is no and it's closer, then I have proved my point. The goal is to eliminate the difference in image levels.
 
The one on the left looks 2 generations better.

Not even remotely close.
If Capcom wanted her model to look like that AI Girlfriend ad, they would have made it look like that.

This is the amount of skin detail in the game..

RESIDENT-EVIL-requiem-20260314120141.png


RESIDENT-EVIL-requiem-20260314115912.png


RESIDENT-EVIL-requiem-20260314115359.png


You're supposed to believe they would have had trouble giving her eye bags and fuller, redder lips if they wanted to?

But what you especially see in that shameful comparison is the Path Tracing character rendering completely smoothing out all the detail originally there, and removing all the subtle lighting nuances and texture detail present in the other rendering modes.

ofo7Htt.gif


QzGqhmI.gif


It's a laughably flawed comparison to begin with.

And speaking of laughs, LOL at all of those preferring that puke-inducing altered image version. I remember reading Represent. unironically uses Dynamic mode on their panels; it all makes sense now.
 
What's different, other than what I already acknowledged in my original post? Yes, the colors don't match exactly; I'm not a professional color grader, and getting a 100% match is a pointless effort anyway in proving the point.

Are you saying my edit is FURTHER apart than the original DLSS OFF image that I quoted? If the answer is no and it's closer, then I have proved my point. The goal is to eliminate the difference in image levels.

If one person is 1000 miles away from their destination and the other person is 999 miles away from theirs, then relatively speaking one is not much closer than the other. It's comparable with the original non-DLSS5 version vs. your edit: they aren't very different when compared to the DLSS5 version.
 
The reaction to this is hilarious.

We've spent decades getting closer to photorealism and this is a huge step forward. The results are fabulous.
 
If one person is 1000 miles away from their destination and the other person is 999 miles away from theirs, then relatively speaking one is not much closer than the other. It's comparable with the original non-DLSS5 version vs. your edit: they aren't very different when compared to the DLSS5 version.
999 (lol at that value comparison, are these still 999: https://slow.pics/s/vatet6Fp, but I'll grant it) is still closer to the destination than 1000 by basic math. We're done here. My edit is closer per your own admission.
 
I had a similar thought earlier today:


For real. Like the laziness with raster now for those who rely on RT. Flat blank textures, no SSR or cube maps, or even crafted light source placements.

I'm looking at you, Remedy.

At least Insomniac provides good raster still even without RT. For now.
 
What? I don't understand now... what is a filter? What is post processing here?

I thought it still has a wireframe with its own animation routines computed by the CUDA cores. The NTC is supposed to provide better-than-conventional texture quality on these skeletal meshes, so why is another layer of filtering needed?

This is mixed AI rendering, best of all worlds

Can a game ever be majorly rendered by the tensor cores?
You need to stop using ChatGPT. All textures are sampled during rasterization and modified via shading, and those textures are compressed with NTC (textures can carry albedo, normal maps, roughness, whatever else). This allows a nice high-quality base image to be composited, but there is only so much high-resolution textures will give you; there are additional lighting passes, post processing, etc. done on the composited image. However, this image will have game-like lighting and other compromises that a texture alone cannot fix. Then the entire image, as a whole, is fed into the upscaler, which samples the image and generates new pixels, throwing away all of that texture detail from the rasterization. They are different stages of image generation, and the highest-res textures you can imagine are not enough to give the scene more quality than the lighting and art direction provide.
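The stage ordering described above can be sketched roughly. Every function name here is an illustrative stand-in, not a real Nvidia API: the point is only that texture detail (NTC or otherwise) is consumed early, while the upscaler only ever sees the final composited frame.

```python
# Hedged sketch of the stage ordering described above. Each stage just
# tags the data with its name to demonstrate ordering, not real graphics
# work. All names are hypothetical.

def decompress(textures):       return textures + ["decoded"]      # NTC decode at sample time
def rasterize(geometry, texels): return texels + geometry + ["rasterized"]
def apply_lighting(gbuffer):    return gbuffer + ["lit"]
def post_process(image):        return image + ["post"]
def upscale(frame):             return frame + ["upscaled"]        # sees only final pixels

def render_frame(ntc_textures, geometry):
    texels = decompress(ntc_textures)       # textures consumed here...
    gbuffer = rasterize(geometry, texels)
    lit = apply_lighting(gbuffer)
    frame = post_process(lit)
    return upscale(frame)                   # ...long before the upscaler runs

print(render_frame(["albedo", "normal"], ["mesh"]))
```

The ordering is the whole argument: however good the textures are, the upscaler operates on the finished frame, so it resamples away that detail along with everything else.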
 
Dynamic mode
Hate, hate, HATE dynamic tonemapping. It ruins so much content, and is part of the reason why HDR implementation is so inconsistent across the industry. I posted these on AVSForum last year to showcase why dynamic tonemapping sucks balls to folks who wouldn't listen, may as well share them here too. Below are two pictures I snapped of a night scene in The Division 2 on a Bravia 9, with exposure locked to be consistent for both photos. First photo is with dynamic tonemapping kept off:


agZfQZ5DUE0tvQyI.jpg


And here's the same scene with dynamic tonemapping on (Sony calls it Brightness Preferred):
dyXboWbLFaJd2VfL.jpg


Everything in the scene "pops" but it looks like dogshit to anyone not suffering from moon brain. Dynamic tonemapping has zero context for how scenes are established by talented artists and will over-brighten everything.
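The over-brightening complaint above can be shown with a toy model. This is not Sony's (or any vendor's) actual tonemapping algorithm; it's a hypothetical minimal sketch of the core problem: a dynamic tonemapper rescales to the scene's own peak instead of respecting the mastering intent.

```python
# Toy model of why dynamic tone mapping over-brightens dark scenes.
# Illustrative numbers only, not any TV vendor's real algorithm.

def static_tonemap(nits, display_peak=1000):
    """Respect authored levels; just clip anything above the display peak."""
    return [min(n, display_peak) for n in nits]

def dynamic_tonemap(nits, display_peak=1000):
    """Stretch the scene so its own peak hits the display peak."""
    scale = display_peak / max(nits)   # dark scene -> large scale factor
    return [n * scale for n in nits]

night_scene = [0.5, 2.0, 10.0, 50.0]   # authored dim, peaking at 50 nits
print(static_tonemap(night_scene))      # stays dim, as authored
print(dynamic_tonemap(night_scene))     # stretched far brighter than authored
```

A scene authored to peak at 50 nits gets stretched 20x, which is exactly the "everything pops" effect described above: the tonemapper has no context for the artist's intended levels.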
 
For real. Like the laziness with raster now for those who rely on RT. Flat blank textures, no SSR or cube maps, or even crafted light source placements.

I'm looking at you, Remedy.

At least Insomniac provides good raster still even without RT. For now.

It's going to be a long and painful transition.
 
If Capcom wanted her model to look like that AI Girlfriend ad, they would have made it look like that.

This is the amount of skin detail in the game..

RESIDENT-EVIL-requiem-20260314120141.png


RESIDENT-EVIL-requiem-20260314115912.png


RESIDENT-EVIL-requiem-20260314115359.png


You're supposed to believe they would have had trouble giving her eye bags and fuller, redder lips if they wanted to?

But what you especially see in that shameful comparison is the Path Tracing character rendering completely smoothing out all the detail originally there, and removing all the subtle lighting nuances and texture detail present in the other rendering modes.

ofo7Htt.gif


QzGqhmI.gif


It's a laughably flawed comparison to begin with.

And speaking of laughs, LOL at all of those preferring that puke-inducing altered image version. I remember reading Represent. unironically uses Dynamic mode on their panels; it all makes sense now.

It's hilarious how the AI just hallucinates nonexistent light sources while toning down the actual light sources that are there, every time you see a close-up of a character, seemingly to even out the light across the face. It was most likely trained on studio-lit close-ups of faces, so it sees the face casting a stark shadow onto itself as a mistake 🤣

every face looks like it's lit by a diffused studio light, just with varying brightnesses.
 
Hate, hate, HATE dynamic tonemapping. It ruins so much content, and is part of the reason why HDR implementation is so inconsistent across the industry. I posted these on AVSForum last year to showcase why dynamic tonemapping sucks balls to folks who wouldn't listen, may as well share them here too. Below are two pictures I snapped of a night scene in The Division 2 on a Bravia 9, with exposure locked to be consistent for both photos. First photo is with dynamic tonemapping kept off:


agZfQZ5DUE0tvQyI.jpg


And here's the same scene with dynamic tonemapping on (Sony calls it Brightness Preferred):
dyXboWbLFaJd2VfL.jpg


Everything in the scene "pops" but it looks like dogshit to anyone not suffering from moon brain. Dynamic tonemapping has zero context for how scenes are established by talented artists and will over-brighten everything.
Yep, dynamic tone mapping on TVs is doo doo for gaming. HGiG is the most accurate to what the developers intended and is processed internally by the console/GPU itself.
 
I had a similar thought earlier today:


Within the next decade:


zzbzscnf.png




Cool if you want every person on the planet to be an "artist", nightmare if you actually care about the medium and its survival.

Represent.

Crying-man-with-gun-meme-9.jpg


I'm sorry brother:

Purposely playing most games in Vivid mode, 30fps on my OLED.

Not giving a shit about performance at all
I play everything on Vivid. Shit looks way better and is perfectly smooth.

This game needs to be seen on Vivid mode.
This is how the game looks on my TV, in vivid mode.
Bruh. Stop listening to guys like DF and TV manufacturers and just try it for yourself

Do this test right now.

Go turn on Horizon burning shores in 30fps mode. First try it in game mode. Then, switch to vivid mode.

Tell me it doesn't look SIGNIFICANTLY better in vivid mode.

This goes for LG OLED TVs. I have no say on other TVs.
 
The memes are fun, but the outrage is just an example of why we can't have nice things. I think this whole thing is great and shows incredible potential. I was planning on keeping my 5080 for much longer, but if this feature is implemented well in time for the 60XX generation (in 2027? 2028?) I will sure as hell upgrade my system. I honestly can't wait for this.
 
I had a similar thought earlier today:


For real. Like the laziness with raster now for those who rely on RT. Flat blank textures, no SSR or cube maps, or even crafted light source placements.

I'm looking at you, Remedy.

At least Insomniac provides good raster still even without RT. For now.
In the same week, we heard more people talking about how studios/developers are going to have to get better at optimization and performance due to the nonsense with PC components. Which is a great thing, especially after the kind of performance we've seen over the last couple of years. Then we get this. You can best believe some studios are going to take advantage of this so that it does a lot of the "heavy lifting." I actually find it pretty comical that Bethesda and Ubisoft are talking it up so hard, because of course they are.

Pretty surprised about Capcom though. I mean, I still think the RE Engine is one of the better, if not the best, looking and performing recent engines. It doesn't need any of that AI filtering, lol.
 
"Man, those lighters suck dick. They're going to replace people slamming two rocks together for twenty minutes until they create a sparkle and I fucking hate it. Now everyone can make a fire in one second in the exact same way. Fucking boring."

#fireslop
#notmyflintstone
#bruisedhandsmatter
 
I just have this worry that games are all going to start looking the same. It's like looking at a wall of AI-generated images on Pinterest. They were each generated by a different user who applied a different set of prompts, but because they're all being fed through the same pipeline they all start to take on a similar look.

Even with the examples nvidia used yesterday, RE9, Hogwarts and Starfield all start to kind of look like they're using the same game engine, even though they aren't.

Look at these characters' faces, they look like they could have been taken from the same game, because they have the same post processing applied by DLSS. There's no identity.

gUNq9mDbeBITXlv3.jpeg


Just generic, homogenous slop. Just not a fan of it.
 
OBLIVIONLONG.gif

OBLIVIONGIRL.gif


God bless Nvidia. This is going to look so insanely good on your actual TV screen. I'm telling you, a year from now this thread will be serving up the most crow ever on this forum.

Vick You'll be happy to know, I've since discovered the light of Game-mode customization. I use that mode now. However, I still play around with both Vivid and FILMMAKER mode depending on the game.
 
I just have this worry that games are all going to start looking the same. It's like looking at a wall of AI-generated images on Pinterest. They were each generated by a different user who applied a different set of prompts, but because they're all being fed through the same pipeline they all start to take on a similar look.

Even with the examples nvidia used yesterday, RE9, Hogwarts and Starfield all start to kind of look like they're using the same game engine, even though they aren't.

Look at these characters' faces, they look like they could have been taken from the same game, because they have the same post processing applied by DLSS. There's no identity.

gUNq9mDbeBITXlv3.jpeg


Just generic, homogenous slop. Just not a fan of it.
0f6O0MR0yan3E10A.jpg


Because a dead-eyed, fake-ass looking face is better? Seriously. The DLSS5 version looks like the actual model she is based on.
 
I just have this worry that games are all going to start looking the same. It's like looking at a wall of AI-generated images on Pinterest. They were each generated by a different user who applied a different set of prompts, but because they're all being fed through the same pipeline they all start to take on a similar look.

Even with the examples nvidia used yesterday, RE9, Hogwarts and Starfield all start to kind of look like they're using the same game engine, even though they aren't.
Not a worry, this is already confirmed.

651667131_946886211038350_2432331044960303812_n.jpg


Games using the DLSS5 model do indeed all look about the same. A Bethesda game, a Rockstar one, a Capcom, a Naughty Dog, a Ubisoft, a CDPR, a From.

Can't actually believe so many people here are okay with this. It's a whole different story outside of these virtual walls, but I'm still very surprised.
It's not that we don't see the potential for good; it's just too hard to appreciate when it's a tiny little fraction of the catastrophic potential this has for the entire industry.
 
It's certainly nice to have more options, but it won't be useful for all games; sometimes more facial detail makes the character look older. They shouldn't have used Grace as the first showcase pic, since in that case she looks older with it on, and actually kind of worse.
 
It's fucking disingenuous as shit lol. But it's not even the DLSS off pics being the issue, it's the DLSS on pics that are "wrong". Nvidia just pumped up the contrast and that makes a HUGE difference in getting something to "pop". Sure there's other differences, but if you eliminate the color adjustments, it's a lot closer than it seems.
I also can't get behind this trend of not hating bullshots anymore (sorry, but this trend of images turning into soup or having plenty of visual artefacts when the camera moves is best summarised by calling them bullshots… most of these examples have a still camera or have artefacts at the edges of the screen).

I do not look forward to UE games with even more temporal-accumulation-induced artefacts, and this on top…
 