
Nvidia at Live GTC : DLSS 5



This could be really good depending on how devs implement it
Full PC next gen

I'd let Nvidia impregnate me

fucking never been happier as a graphics WHORE
 
Wtf are you talking about?

Wow, look how the lighting makes this dude's eyes go fucking goofy and somehow stare in 6 different directions at the same time!





Look how the lighting here magically makes half of this person's eyes generate right through their eyelids!





I better stay away from certain lighting conditions if that's all it takes to force my fucking eyeballs through my flesh
 
I like how in their own showcase video of FIFA they show Virgil van Dijk looking like three different dudes with DLSS on depending on the camera angle lol.

Brilliant stuff Nvidia. *slow clap*
 
It doesn't explain the eye shape itself or the jawline. Your argument would be better off if you simply said 'this tech is still early in development, so some things might appear altered'.

I'm not sure what to tell you; it's the same model. The only differences are in the lighting and the texture. Nvidia has the other comparison shots on the site. Surely you can't think the other Grace shot is a changed model?

Now, how they've generated the skin texture is something I'd love more info on. Does the detail already exist in the original texture and just isn't perceptible with the vanilla graphics, or is it fully AI generated?
 
I'm starting to think a huge number of people's brains have been literally broken by some sort of psyop to have a Pavlovian reaction to anything AI that literally stops their brains working and forces the tiresome "ahhregh AI slop, slop slop" to just slop out of their mouths.

It's all very strange.

I always think stupid people think the quickest shortcut to sounding clever is to just be reflexively negative about everything.
 
I'm starting to think a huge number of people's brains have been literally broken by some sort of psyop to have a Pavlovian reaction to anything AI that literally stops their brains working and forces the tiresome "ahhregh AI slop, slop slop" to just slop out of their mouths.

It's all very strange.
It's wild. You see it everywhere, in unison, on Reddit, X, forums, YouTube comments, you name it. It's like some sort of instantaneous scripted response. Any mention of AI and they'll swarm to it like bees on honey while screaming slop slop slop slop slop until they pass out.

This is the most amazing tech we've seen in god knows how long. Just look at the differences with Starfield, FIFA, and AC Shadows. It's unbelievable. I'm hoping devs and Nvidia as well don't shelve this based on the robotic slop spam response from the crowd.
 
Looks like subsurface scattering, a better skin texture, and whatever it's called when the skin doesn't look bone-dry on Grace, IMHO.
 
From aiming for "better than native", they now aim for "we'll transform the game to our machine-learned vision of what it should look like instead". Huh, weird.

I mean, I could see stuff like that being fun as fan experiments, but even if a game looks "like a PS2 game", I'd prefer to play it as intended, if at all, rather than claim to "fix" it.

Poor Nvidia, all they do is democratize good graphics and we're being so mean to them, for shame, who needs art directors and artists any more, fire them all.

I was just crying about hardware prices yesterday but maybe I don't need a new gaming PC, lol.
 
I'm starting to think a huge number of people's brains have been literally broken by some sort of psyop to have a Pavlovian reaction to anything AI that literally stops their brains working and forces the tiresome "ahhregh AI slop, slop slop" to just slop out of their mouths.

It's all very strange.

and I am starting to think... well, that's not true, I KNOW that AI bros have no idea how AI works, and that's why they support shit like this.

Tech illiterates that fanboy over a black-box technology, because their tech illiteracy only lets them see the flashy demo images, while they can't comprehend that this tech lacks quality consistency, lacks temporal stability, in the case of DLSS 5 doesn't have enough information about the scene to accurately depict lighting, and will constantly hallucinate details that aren't there (and won't be there once you move them off camera and back in again).
 
This is disingenuous. Zoom in.

It's not just her mouth, her eyes are also larger and shaped differently and her jawline has changed as well.

It doesn't explain the eye shape itself or the jawline. Your argument would be better off if you simply said 'this tech is still early in development, so some things might appear altered'.

Her jawline is hidden by shadowing in the off image. It's literally just lighting. This is a perfect demonstration of just how fundamental lighting is to realistic rendering.


Look, I'll concede that there are going to be instances where ultra-realistic lighting looks a bit jarring or inconsistent with an intended style. That's reasonable. But what people have to keep in mind is that what this fundamentally is, is a HUGE leap in lighting rendering quality. It's like 15 years of RT progress in one step. And what that ultimately means is that devs will design environments and characters WITH THAT IN MIND. So the next RE won't have a simplified, idealised, AI-slop-looking model for a main character, because they'll know it will be realistically lit, and they'll render it with realistic imperfections etc. But the real news here is that the model will look fucking REAL. This is GOOD NEWS. Maybe the biggest advance in real-time graphics since 3D.
 
The demo had two RTX 5090s: one to run the game, another to run DLSS 5 (so the current model of DLSS 5 requires up to 32 GB of VRAM). By release, the best case is that DLSS 5 will need roughly 16 GB of VRAM or more (more if you want frame gen). RTX 6000 GPUs will definitely have more VRAM than RTX 5000 for this, and because memory is expensive, oh boy: do you want good frames in new games, or do you take out a loan for the shiny new GPU that supports all the new tech?

Questions:

- Will there be an option to only use the "upscaling" part and not the "neural rendering"?
- On-the-fly swapping to lower versions of DLSS? (4/4.5 is basically perfect right now for most games. You can do this today using an app called DLSS Swapper, free on GitHub.)
- Uniformity across DLSS presets: imagine if Performance looks different from Balanced and from Quality. Not everyone can afford a strong GPU to use Quality or DLAA. Also, what about Frame Gen?

I like the tech and I would definitely upgrade to use this if it gives me more fps without any art change (the "AI plastic, smooth filter" look) and without a huge performance cost.

Nvidia says "devs" have complete artistic control, but what will the dev team do if the publisher forces them to ship, since DLSS 5 will make it run anyway? No need to optimize the lighting/exposure etc. for each preset when that's just more time and more cost.

Also, holy fucking embarrassing PR. I'm furious that these AI CEOs/folks literally announce stuff like this and say "this is the future" and "AI will replace jobs" to hype and get funding, then wonder why there's insane negativity and pushback against AI in the West. AI is genuinely incredible tech that will totally change the world, but these fucking dumbfucks are destroying its image. The AI industry has a huge PR problem and it's only getting worse.

Such a shame.
 
It's wild. You see it everywhere, in unison, on Reddit, X, forums, YouTube comments, you name it. It's like some sort of instantaneous scripted response. Any mention of AI and they'll swarm to it like bees on honey while screaming slop slop slop slop slop until they pass out.

This is the most amazing tech we've seen in god knows how long. Just look at the differences with Starfield, FIFA, and AC Shadows. It's unbelievable. I'm hoping devs and Nvidia as well don't shelve this based on the robotic slop spam response from the crowd.

The irony is that the most AI-slop like part of all this is the universal bleat of "AI slop".
 
With all these DLSS On pics, it almost seems like if someone just calibrated their TV's or monitor's brightness and color contrast a bit higher, that could be half the improvement already.

Whenever there are DLSS examples, it's always a much brighter pic.
 
So glad I've been priced out of PC upgrades so that this could happen
 
Of course Todd Howard is happy about this, he's hoping AI can fix Bethesda's incompetence and close the massive gap between them and more skilled developers.
 
It's wild. You see it everywhere, in unison, on Reddit, X, forums, YouTube comments, you name it. It's like some sort of instantaneous scripted response. Any mention of AI and they'll swarm to it like bees on honey while screaming slop slop slop slop slop until they pass out.

This is the most amazing tech we've seen in god knows how long. Just look at the differences with Starfield, FIFA, and AC Shadows. It's unbelievable. I'm hoping devs and Nvidia as well don't shelve this based on the robotic slop spam response from the crowd.
Yah, it looks great, particularly in motion. Is it without issue? No, but this is an early instance of the tech; you'd have to be blind not to see its potential.

And yeah, the reaction looks like bots in part, because I think a lot of it is. But a lot of it is also real, plus social conditioning.

The questions start getting interesting if you start to wonder why someone would run bot farms to condition people to have this immediate reaction, as it very much looks like a psyop.

It follows very much the same patterns as past Chinese- and Russian-funded psyops that were designed to cause division and destroy western societies (hell, Russia even funded IRL white nationalist and BLM marches in the same locations). Imo though, this is China, and a little more insidious.

The western games industry is in collapse, primarily due to astronomically, unsustainably high budgets and dev timelines. Games are costing $300 million and taking 6 years… even games that aren't outright rejected need such high sales that it's wiping out tens of thousands of jobs.

China is already far more efficient with regard to dev costs and timelines. AI could be a saviour here, but western studios are primarily full of the very people most likely to have the strongest gut reaction of abject rejection. China, meanwhile, has embraced it throughout its society: AI training is mandatory for 6th graders, consumers have embraced it, artists and creatives have embraced it, the public has embraced it, businesses have embraced it. Within game dev, studios such as Tencent are fully integrating it into their pipelines. We're going to very quickly get to the point where a Chinese studio can create a AAA game that competes with $200 million US AAA games, but in 2 years, with $10 million, and they will be good, and numerous, and many of the paying consumers won't give a fuck.

A major key to America's success as an empire in the last century was cultural export and narrative control; it's why the government funded Hollywood in its early days. China's empire is rising, but cultural export and narrative control have been a weakness: they don't have that sort of media empire. This allows them to achieve two things: destroy western entertainment companies (and/or buy them up for pennies on the dollar), and rapidly supplant them with their own, whilst ensuring there is no way for them to financially compete with their output.
 
I don't know, man. You can give an AI a prompt multiple times and the result won't always be the same. I could imagine the same being the case here, especially since "the underlying information for a given object doesn't change" is just impossible. Characters move around, the lighting changes, the weather changes, fog, shadows, particles near them and what-not. Keeping those AI reconstructions stable must be an engineering nightmare.
Not quite. You're including environment information in the fundamentals of a character. For context purposes, the AI can distinguish between the person and the environmental impact on that person, such as lighting, rain, wind, and occlusion. The model and its textures and materials don't change - effects are simply applied to those models and textures. The AI contextualises all of this information when creating its output.

It might be worth highlighting that DLSS 5 isn't a blind filter being applied to a static frame. The AI filter requires "hooks", such as model and texture info, motion vectors, lighting structures, and so on, to actually work. Inside, the AI knows who Grace is, what the lamp post is, what the NPC is, and so on. It's why they showed it off on mostly modern games - games that have those hooks in place for it to tap into. You can't run DLSS 5 on a flat image of DOOM (1993) and get DOOM (2016), because DOOM (1993) doesn't have those hooks to feed the AI.

If the reconstruction process were as fragile as you're supposing, you'd see what's called temporal instability in the footage they showed off today. That's just a fancy way of saying "Frame 2 looks different to Frame 1". Instead, the AI uses Frame 1 as a guide rail when creating Frame 2, which itself uses all the same hooks that Frame 1 used. So, in RE9, Grace turning around away from the camera and then back again won't show different results for her face. Her face is always the same under the hood, and so the AI will output the same result every time.
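The "guide rail" idea above is basically temporal accumulation: reproject the previous output along motion vectors, then blend it with the new frame so noise and hallucinated detail can't flicker in and out. Here's a deliberately tiny 1D sketch of that mechanism (purely illustrative; all the names, numbers, and the blend weight are made up, not Nvidia's actual DLSS internals):

```python
import random

def reproject(history, motion):
    """Shift last frame's pixels by per-pixel motion vectors (1D toy)."""
    out = [0.0] * len(history)
    for i, m in enumerate(motion):
        src = i - m                      # where this pixel came from last frame
        if 0 <= src < len(history):
            out[i] = history[src]
    return out

def accumulate(current, history, motion, alpha=0.1):
    """Blend the new frame with reprojected history (exponential moving average).

    A low alpha means the output leans heavily on history, so the previous
    frame acts as a 'guide rail' that suppresses frame-to-frame flicker.
    """
    warped = reproject(history, motion)
    return [alpha * c + (1 - alpha) * w for c, w in zip(current, warped)]

# A static scene: every raw frame is noisy, but the accumulated
# result stays pinned near the true value instead of flickering.
random.seed(0)
truth = [0.5] * 8
history = truth[:]
for _ in range(50):
    noisy = [t + random.uniform(-0.2, 0.2) for t in truth]
    history = accumulate(noisy, history, motion=[0] * 8)

print(max(abs(h - t) for h, t in zip(history, truth)))  # small residual
```

Real pipelines add the hard parts this sketch skips: disocclusion detection (pixels with no valid history), clamping history against the current frame to kill ghosting, and per-pixel confidence weights. But the core reason Grace's face doesn't change when she turns away and back is this history blend plus stable inputs.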
 
The real question: can you run everything at low settings and then get this coat of paint? Or would it be greatly diminished?

Well, this only improves lighting, so it's not going to counteract unrelated settings. But yeah, I've been thinking it might pay to turn off PT, or maybe even RT completely, and just rely on the AI to do the lighting. That Starfield footage does look pretty path-traced, after all...
 
the retard dragon with his wonderful takes once again :))))
Imagine being OK with the company that made that now being promoted to the same exact level as the studios that have cultivated the industry's best artists for decades, all thanks to a toggle. Except the output of both studios now looks like AI porn.

The amount of shit some graphics whores will eat in their retarded chase for the photorealism they're already surrounded with 24/7 is truly something.
 