
DLSS 5 will be trainable and promptable -Jensen

ResurrectedContrarian

Suffers with mild autism


Hard to transcribe the pauses and halts, but this is what he said:

"Now, the question is about enhancing. DLSS 5 also lets, because the system is open, you could train your own models to determine, and you could even in the future prompt it. You know, 'I want it to be a toon shader, I want it to look like this kinda,' so you can give it even an example. And it would generate in the style of that, all consistent with the artistry, the style, the intent of the artist."

I think that they got the impression that the games are gonna come out the way the games are shipped, and then we're gonna post-process it. That's not what DLSS is intended to do.

He mentioned in a previous statement that you could "tune" the effect, but that was very vague and could have even just meant that devs get access to a slider for its effect strength, or something minor.

But I think this is the first time I've heard full confirmation that:
  • the model itself will be open to developers, not purely pre-baked and static weights;
  • training and tuning the model on your own dataset will be a normal practice and expectation;
  • you'll even be able to simply prompt it, with either words or possibly even with example images if you don't want to do a full training.
This is what I hoped it would be like, but it wasn't clear before. This is also a radical break from their prior generative models for DLSS, frame gen, or anything else, because they've never released one in a form where the core model itself is trainable rather than a proprietary frozen transformer.
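If the bullets above pan out, the developer-facing workflow might look something like this toy mock-up in Python. To be clear, this is pure speculation: `StyleModel`, `finetune`, and `prompt` are invented names, not any real Nvidia API, and the "model" here is just a trivial posterize filter standing in for a generator.

```python
# Toy stand-in for a trainable/promptable post-process model.
# All names here (StyleModel, finetune, prompt) are invented for
# illustration -- this is NOT a real DLSS API.
class StyleModel:
    def __init__(self):
        self.style = "neutral"  # default: pass frames through untouched

    def finetune(self, example_frames):
        # Real training would fit weights to the studio's art style;
        # this mock just records that tuning happened.
        self.style = "finetuned"

    def prompt(self, text):
        # Prompting would condition the generator; this mock just picks
        # a hard-coded posterize transform when asked for "toon".
        if "toon" in text.lower():
            self.style = "toon"

    def apply(self, frame):
        # frame: list of (r, g, b) floats in [0, 1]
        if self.style == "toon":
            # Posterize: quantize each channel to 4 levels, toon-style.
            return [tuple(round(c * 3) / 3 for c in px) for px in frame]
        return frame

model = StyleModel()
model.prompt("I want it to be a toon shader")
frame = [(0.12, 0.58, 0.91), (0.40, 0.66, 0.05)]
print(model.apply(frame))
```

The point is the shape of the workflow (fine-tune on your own frames, or just prompt it), not the image math, which is deliberately trivial here.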
 
"DLSS make the girls hot"

I can't make too many more trips to the future because gas prices are up but I brought this nugget back with me a few months ago 😂

hrsPqIwTPueeR9y6.png
 
Fuck me, this is sounding like they really are going all in on this dumbass idea…

If this is the killer feature of their next GPU where all their R&D bandwidth went… I don't even…

Good to know they have some training capability (none of which was in any of the material to date so who even knows if any of this is true or this is another "we have achieved AGI" level BS), but as long as this is running post rasterization, none of this fixes the fundamental flaw with the idea. All AI is doing here is playing with colors. Instead of 1 filter, you got many filters?! Yay?
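For what it's worth, the "playing with colors" complaint can be made concrete. A pass that runs after rasterization only ever sees finished pixels, so whatever it does is a function from colors to colors; the geometry, normals, and lights that produced those colors are gone by then. A throwaway sketch in plain Python (the `sepia` filter is made up for illustration):

```python
# A post-rasterization pass only receives the finished framebuffer.
# Any "filter" here is just a per-pixel colors -> colors function;
# it has no access to geometry, normals, or light sources.
def post_process(framebuffer, pixel_fn):
    return [[pixel_fn(px) for px in row] for row in framebuffer]

# Example filter (made up): a sepia-ish tint from standard luma weights.
def sepia(px):
    r, g, b = px
    lum = 0.299 * r + 0.587 * g + 0.114 * b
    return (min(1.0, lum * 1.2), lum, lum * 0.8)

frame = [[(0.2, 0.4, 0.6), (1.0, 1.0, 1.0)]]
print(post_process(frame, sepia))
```

No matter how smart `pixel_fn` gets, it can never move a shadow that was baked into those pixels; it can only recolor it. That is the whole objection.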
 
Fuck me, this is sounding like they really are going all in on this dumbass idea…

If this is the killer feature of their next GPU where all their R&D bandwidth went… I don't even…

Good to know they have some training capability (none of which was in any of the material to date so who even knows if any of this is true or this is another "we have achieved AGI" level BS), but as long as this is running post rasterization, none of this fixes the fundamental flaw with the idea. All AI is doing here is playing with colors. Instead of 1 filter, you got many filters?! Yay?

You might be mad at the problems & damage that AI is causing but if you think these ideas are dumb you're out of your mind lol.



A.I. is damn near the devil himself, but it can be used to do some amazing things. Sadly, it's going to do long-term damage to the future, because intelligence & skills will be devalued & the next generation will be lost without it 😢
 
You might be mad at the problems & damage that AI is causing but if you think these ideas are dumb you're out of your mind lol.
Never said AI is bad. I'm very pro AI, use it extensively in my line of work and think its future is bright. I just don't think applying anything related to "lighting" or artstyle that goes beyond basic color adjustments, post rasterization, is a very bright idea. If they come up with a version that can do it pre-rasterization, I'm all for it. Until then, I'm just going to call it dumb. There is plenty of evidence for it not lining up with Jensen's tall claims. It's neither geometry level nor really trainable, unless they all had a meeting in the past couple of days and decided to go a different route after the backlash. If they did course correct, then great. That vindicates the backlash. And if they didn't, then some of Jensen's claims are straight up lies. All this sounds like every time someone pushes back, he doubles down with an even bigger lie.
 
You all just need to stop buying Nvidia GPUs. The only power consumers have also happens to be the only language these companies speak. Money.

Not even saying stop forever, just stop for one year, and watch whatever reasons you all gave for stopping just go away. Unless you all think that even if you refuse to spend $2000 on a GPU, the price still won't be reduced.
 
You might be mad at the problems & damage that AI is causing but if you think these ideas are dumb you're out of your mind lol.
This particular application is a waste of power and shittization of the picture.
That power could be spent elsewhere, also on AI things with a better quality-to-performance ratio, like AI PT, AI enhancement of textures/geometry, AI compression. If you bring AI just a few steps back in the rendering pipeline, the results should still be very good without turning the picture into slop.

But it's understood why Jensen pushes it so hard: this "training" part means more sales for him, as developers will have to build their own model-training infrastructure.
 
This particular application is a waste of power and shittization of the picture.
That power could be spent elsewhere, also on AI things with a better quality-to-performance ratio, like AI PT, AI enhancement of textures/geometry, AI compression. If you bring AI just a few steps back in the rendering pipeline, the results should still be very good without turning the picture into slop.

But it's understood why Jensen pushes it so hard: this "training" part means more sales for him, as developers will have to build their own model-training infrastructure.
You have to start somewhere, it's not going to always need an extra GPU
 
the prompt part is interesting since this could change the landscape for modding. for example, users/modders could just @grok change a character's outfit or appearance as a whole.

as long as it's by the user's choice. not like that blatant DLSS5 demo. worst case, there are devs that would rely on the tech to 'finalize' their models lol.
 
I can't make too many more trips to the future because gas prices are up but I brought this nugget back with me a few months ago 😂

hrsPqIwTPueeR9y6.png
You made a prediction millions of others have made ("I can GenErAte MuH own Games") that doesn't have anything to do with this thread, yet you still go back to quote it lol
 
You have to start somewhere, it's not going to always need an extra GPU
They already started with ray reconstruction/regeneration.
But then they decided that it's not enough money and broke it in 4.5.
And then they went in a completely different direction: turning the picture into slop via heavy post-processing. Because it means more money for them. And it'll have all the problems genAI has (it's already altering models, losing context it doesn't know about, like lighting, etc.).
This is not the way this tech should be going.
 
A.I. is damn near the devil himself, but it can be used to do some amazing things. Sadly, it's going to do long-term damage to the future, because intelligence & skills will be devalued & the next generation will be lost without it 😢
We're already at the point where knowing low-level, to-the-metal programming is extremely rare. Maybe through AI we could actually optimize game engines like UE5, Unity, and similar engines and get rid of the stutters? I refuse to accept the idea that the problem lies in the hardware. It's 100% our knowledge that is lacking.

Either way, I'm thinking that western devs will struggle to uglify women with this tech. I've used AI enough for art to know that AI creates what most people want, and that's not ugly women.
 
We're already at the point where knowing low-level, to-the-metal programming is extremely rare. Maybe through AI we could actually optimize game engines like UE5, Unity, and similar engines and get rid of the stutters? I refuse to accept the idea that the problem lies in the hardware. It's 100% our knowledge that is lacking.

Either way, I'm thinking that western devs will struggle to uglify women with this tech. I've used AI enough for art to know that AI creates what most people want, and that's not ugly women.

Actually, AI generates what it's been trained to generate, so if they feed it ugly women it's going to spit out some ugly women 😂
 
Trainable means they can make the characters look like the trillion-polygon megascans they take, and not the deformed ghouls Playstation Portable and Switch can run.

A528512D-B300-4A52-B1C447CB5B47C213_source.jpg
 
Actually, AI generates what it's been trained to generate, so if they feed it ugly women it's going to spit out some ugly women 😂
True I guess but that would take serious dedication. In general AI for art generation goes with what the majority wants, like global beauty standards. At this point they would have to put in rules to avoid normal standards and I don't think many devs will bother at that point.
 
If it's trainable, it's injectable, and if it's injectable, then devs are handing visual control over to anyone with a little know-how. Reshade on steroids. Basically an AI version of Nvidia's modding setup.

"Just drop in the infamous 'dewesterniser' DLSS5 plugin to remove all those nasty American-centric design choices which reflect the thoughts of 32 mile radius in greater Los Angeles and nowhere else on Earth..."
 
Let's see how this can work positively for us:
- Devs can mask the faces/models of main characters (and important NPCs) so DLSS5 doesn't touch those.
- Devs can tailor the effect of DLSS-5 and then present their characters for the first time to users. So people will only ever see the DLSS-5'ied version of their game/characters, and that gets set as the reference and expectation.
- DLSS-5 is placed earlier in the pipeline so it doesn't just act as a filter, but has access to other parameters from the game which it uses to build the final image.
- Another option is to just keep this stuff in the filter section of the Nvidia overlay.
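The masking idea in the first bullet boils down to a per-pixel blend: keep the raw render wherever the hero mask is set, and take the stylized output everywhere else. A rough sketch of that compositing step (hypothetical, toy grayscale values; `masked_blend` is my name for it, not anything official):

```python
# Compositing step for the "mask the main characters" idea:
# where mask is 1 (protected, e.g. a hero's face), keep the original
# render; where mask is 0, take the stylized output.
def masked_blend(original, stylized, mask):
    return [
        [o if m else s for o, s, m in zip(orow, srow, mrow)]
        for orow, srow, mrow in zip(original, stylized, mask)
    ]

original = [[10, 20], [30, 40]]   # raw render (toy 2x2 grayscale)
stylized = [[99, 99], [99, 99]]   # stylized output (toy)
mask     = [[1, 0], [0, 0]]       # protect only the top-left pixel
print(masked_blend(original, stylized, mask))  # -> [[10, 99], [99, 99]]
```

A real implementation would use a soft (fractional) mask and blend on the GPU, but the protected-region logic is the same.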

What they should have done for revealing DLSS-5 is use a new game and present it for the first time.

I do actually want to play some old games with this DLSS 5 "filter", especially if we can control the model somehow. That could be cool in some cases.
 
So Slopvidia wants to dump training on developers…

On one hand it's reasonable, for better matching with the game. But there is one big issue: it will be difficult for many devs. It will take a lot of effort that many studios won't be able to afford.

I'm also wondering how all the actors look at this. Are they ok with AI enhancements?

In the end there are still other issues that are harder to fix because they are more technical. So, the slop continues.
 
So we went from an Nvidia engineer saying 'devs can only turn it up or down or off' to 'devs can fully train it' in the space of less than a week.

Can you present and explain it any more hamfistedly?
 
Retard? You might not like where the company that he leads is going, but calling him a retard... You show your own insecurity by acting like this...
Insecurity about what, exactly? The guy is a pretty obvious hype merchant to keep sales going, not above lying to people about what he's selling (remember when he said that DLSS5 is not generative? And then when he got called out he tried to spin it as "developers control it, so it's not like other generative models!" and then that turned to "well, yeah it's generative"). He's not even good at PR.
 
They reckon in the future, DLSS is going to be so powerful, it will be able to pull the leather jacket off this cunts back, and be able to completely remove Caitlyn Jenner's Adam's Apple. I must stress this is speculation, so don't get your hopes up!
 
The idea of people not wanting anything "generated" in games seems to completely fly over that man's head. He needs a good old bitch slap to the face, a wake-up call from this AI fever dream he's having. He's literally trying to sell a thing that triggers revulsion just because of what it is, not how it does whatever it does.
 
Wow that is going to be really fucking cool.

Like I could drop Ellie in Horizon, lol. You know what, I love DLSS 5.
The idea of people not wanting anything "generated" in games seems to completely fly over that man's head. He needs a good old bitch slap to the face, a wake-up call from this AI fever dream he's having. He's literally trying to sell a thing that triggers revulsion just because of what it is, not how it does whatever it does.
I want things generated in my games.

The exact second anything commercially available with anything cool comes out all this whining will go away and people will buy it so quick you won't be able to find one for years.

Just remember that. They will not be able to keep this in stock. Gamers en masse don't give a shit about your artwork or that the artstyle for the demons in Demon's Souls is slightly changed. Nobody really cares about that, but they would LOVE to see old games made new again. They would LOVE to be able to improve and customize the graphics. Mods have done this for PC gamers for generations. Now the GPU can do it at the system level as a customization. It's the best reason to be a PC gamer that has yet been made. Best of all, it is optional. You don't have to select it in the dropdown. It is also the most powerful GPU money can buy.
 
Fuck me, this is sounding like they really are going all in on this dumbass idea…
nvidia been saying for a while that neural rendering is the future and rasterization is a dead end

mods will get crazy--imagine eventually you can train your own model, or download different ones, completely changing how a game looks. want everything to look like sailor moon? just apply the model.
 
nvidia been saying for a while that neural rendering is the future and rasterization is a dead end
When Nvidia used to say neural rendering was the future, this wasn't it. This is some team within Nvidia, led by Jensen, that has gone rogue and is indirectly asking everyone else to set years of research on fire because they found the "holy grail" with this shortcut.

This literally throws all other neural rendering techniques out the window. Neural materials, neural shading, neural radiance cache, neural lighting, neural texture compression.... Literally none of those are required with this approach. Just do the most basic shit and have AI play pretend. Why even bother path tracing if it wipes out all those shadows and "re-lights" the scene anyway? Why make high res textures if AI can imagine a high res version of it on the fly? To even call this neural "rendering" is a joke. It's rendering only in the way a video is rendered. Not actual 3D rendering, which it ironically needs to even function meaningfully. It's Nano Banana at 60 fps, with constraints that prevent it from going wild.

And that's fine if they presented it as a mod that gamers can enjoy. Not as the future of rendering.
 
i mean... jensen is the CEO. what he wants isnt "going rogue", its just the official direction of the company.
jensen has said multiple times he sees full fat neural rendering as the future.
things could change though, of course.

This literally throws all other neural rendering techniques out the window. Neural materials, neural shading, neural radiance cache, neural lighting, neural texture compression.... Literally none of those are required with this approach. Just do the most basic shit and have AI play pretend. Why even bother path tracing if it wipes out all those shadows and "re-lights" the scene anyway?
eh we dont know that yet.
like regular ol' dlss, it works best with a detailed base image, and all those techs (except maybe neural materials) are just faster ways of doing existing things in the rasterization world. keep them for now.
devs like RT/PT because it's real time, and pre-baking is super inconvenient. for example, you can still do RT/PT but use way fewer bounces and let dlss5 fill it in.

dont have a crystal ball, but likely this tech will change a lot.
 
i mean... jensen is the CEO. what he wants isnt "going rogue", its just the official direction of the company.
Lol. Agreed. I'm being hyperbolic. It's just not hard to see that a large number of people even at Nvidia don't approve of this route. Hence the "going rogue".

It doesn't.
Wait for the actual information on the technology and how it can be integrated.
Pretty much everything on the web about DLSS5 right now is pure bullshit.
Fair enough. This is just my armchair conclusion based on what actual information was revealed and all the footage I've analyzed. If the information is updated, I'm happy to adjust my stance accordingly. So far what I'm seeing is the actual official information is honest about what the model does. Jensen alone isn't being honest. If they update official information to describe if and how model training is done by devs, we can re-evaluate it.
 
Lol. Agreed. I'm being hyperbolic. It's just not hard to see that a large number of people even at Nvidia don't approve of this route. Hence the "going rogue".


Fair enough. This is just my armchair idea based on what actual information was revealed. If the information is updated, I'm happy to adjust my stance accordingly. So far what I'm seeing is the actual official information is honest about what the model does. Jensen alone isn't being honest. If they update official information to describe if and how model training is done by devs, we can re-evaluate it.
yeah agreed.
also sometimes i am r e t a r d e d

ps check out what the quote function captured.
that stuff's not part of your post.
 