
Jensen Huang says gamers are 'completely wrong' about DLSS 5 — Nvidia CEO responds to DLSS 5 backlash

This is the first time in 20 years that I've felt a new generational jump, like the jump from 2D to 3D or from 480p to 1080p. It kind of falls apart on the faces, sure, but where people see AI beauty-filter slop, I see the FMVs of the games I played when I was a kid.

Ooh pretty pictures
 
Nice that the picture is cut off at the bottom not revealing the pages and pages of people not playing MW2

But sure, this is exactly the same and just a minor drop in the ocean, totally not the endgame of the AI corporation trying to sell you graphics cards.

Hey I wish they had real competition, that would mean lower GPU prices for us all but it is what it is.
 
People like you deserve AI and the slop it produces

Holy fuck man, are you saying AI can make something better than the fucking Mona Lisa or Van Gogh's paintings?

You know AI is trained on existing data right? And that it literally cannot create something from nothing?
I'm referring to the future of AI. Honestly the very near future the way things are going.

Edit: what I'm ultimately referring to is that people seem to think humans have some unique value that just can't be replicated and they will have to come to grips with the reality that it's all an illusion created through millions of years of evolution.

Edit2: I'll also add. I'm an AI doomer. I think nothing good is gonna come from this but here we are on a path to self destruction.
 
 
Nvidia is just gonna keep on gargling that AI however they can. Money going round and round. So of course Huang is gonna say something like this.

Improving fidelity of textures and lighting, cool, potential for sure. But the face changing and altering stuff is wack in most cases. Just looks like a glorified Snapchat filter, and I can't get behind that. Also, what we saw was with two 5090s? By the time this is even doable and given to the public who knows what games will look like then, lmao.
 
I disagree that true skill is ever outdated. Quality, intent, and craftsmanship are important. There's a reason we all call it AI slop and why people criticize garbage that is thrown together in UE5.
Maybe "outdated" is the wrong term. True skill becomes niche while the matching technology becomes mainstream due to speed, convenience, cost etc.
 
A lot of these people are rapidly switching back and forth between saying that AI is inevitable and we'd better get used to it, and being angry that people aren't into it.

They're not quite saying that they think it's morally wrong to not like AI, but that's what they mean imo.
Historically, it's the bad guys who get forgotten while their inventions live on; the good guy here is Geoffrey Hinton.
 
How many of the people getting their knickers in a twist here even have a 5090? It's an option as well, not something mandatory for every developer.

Fortunately or unfortunately, this is most likely the way gaming is going; if you want to see graphical advancements, it was always going to go this way. Imagine trying to achieve something similar to what DLSS 5 has shown without using ML/AI. The cost would be insane.
 
He's just ahead of his time, as we've seen before.

People mocked DLSS 1 at launch, but now it's basically standard for every game. Same with ray tracing: gamers complained at first, but now it's one of the most essential features in modern gaming.
Even Frame Gen got trashed, but with PS6 and the next Xbox integrating it into their dev kits, it's clear where things are headed.
 
Not coming out till the fall. They should have just worked behind the scenes and had more ready to show in some streaming event in the middle or end of summer. Now they have to endure months of shitposts and memes. I'm personally excited for this, but I can see the concerns people have about art direction.
 
Gamers are dead wrong, but it was a shit reveal.

Knowing the general antipathy terminally online gamer types have towards generative AI, the correct approach would have been to have the art leads and artists from five games show how they put the tech into use, customised it to match their vision, and how it delivers a much better game. Five case studies - RE9, Starfield, EAFC, the medieval adventure and one more. The Harry Potter example should have been binned for now, it looks like people's worst fears.

Reveal better, Nvidia.
 
Gamers are dead wrong, but it was a shit reveal. [...] Reveal better, Nvidia.
Agreed.

In the end, the fallout from this reveal?

- Won't lead to a boycott: especially since none of these streamers/youtubers will lead the charge.
- The companies that were going to work with this tech still will; they're actually working with it and are not as emotional as gamers on a forum. The ones that didn't want that kind of tech in their game will probably stand their ground...

But I'm sure Nvidia will partner with more studios in the future; I could see this coming for GTA 6 when it finally comes to PC. We will see the reactions then.
 
Gamers are dead wrong, but it was a shit reveal. [...] Reveal better, Nvidia.
I agree 100%. This was the equivalent of going inside a room of anti-woke gamers and begin with "Hi, my pronouns are He/They ..."
 
Gamers are dead wrong, but it was a shit reveal. [...] Reveal better, Nvidia.

That would take up too much GTC screen time. You saw how long Jensen had to stand on stage last night?

Let's be grateful he allowed gaming to kick off the adult GTC show with a 5-10 minute reel on DLSS 5.
 
If it takes two 5090's to run I can assure you it is the opposite of free.
At this stage it takes two, but it's being refined to run on one GPU. You didn't even watch the video or read anything to understand the technology. You just echoed what the community THINKS it is going to be. What's it like to be you?

This isn't just gamers having this reaction. This moment became large enough to where people who have nothing to do with gaming have been commenting on how off-putting it looks.

Going down this road you are just going to blame humanity itself.
It's not off-putting; it's the first generation of neural processing, and it looks phenomenal. It certainly looks better than games do with rasterized rendering. With this tech, gaming will see the generational leap we've been clamoring for, and right out of the gate people are shitting on one of the better enhancements we've seen in a while. The AI slop trend was true at first, but it will become less true over time.
 

He added that developers can still "fine-tune the generative AI" to make it match their style, adding that DLSS 5 adds generative capability to the existing geometry of the game, but that it "doesn't change the artistic control."

"It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," he said.


What does generative mean in this context? DLSS is supposed to use a global model, not a game-specific one. And what does that mean, too?

Model meaning the training logic and not the samples fed in? So Capcom just needs to feed in their desired artwork, and the output will not get corrupted by other artwork from, say, Starfield or stock female images? Then what is the problem?
 
I recognize this is somewhat off topic, but is the supposed performance efficiency coming from the fact that GPUs won't have to render complex lighting and shaders anymore? Is efficiency (more for less) even a goal with this tech? Or does this tech just shift the compute burden, meaning we will still need crazy powerful GPUs to utilize it effectively?

Edit: This isn't one of those "I'm trying to make a point by asking questions" situations. I'm genuinely curious, as I don't know the answers to these questions.
 
Yeah, this is exactly what I said.

It's mind-blowing tech and it's pretty lame to pretend not to see the potential.

"Mind-blowing" tech and then it's just a Snapchat beauty filter that completely falls apart in motion and generally just looks like ubiquitous vomit. Even in the tiny glimpses they showed us, it's full of oddities. Like characters not even being able to blink properly, objects disappearing/morphing (soccer ball in FIFA), cloth physics going haywire, etc.

8 grand worth of hardware to run this monstrosity only to look like fucking garbage lmao
 
Gamers are dead wrong, but it was a shit reveal. [...] Reveal better, Nvidia.

This. People are tired of A.I. and every video being fake nowadays. So, when they see an overlay that looks like the A.I. they are sick of looking at, you're going to get this pushback. I imagine if none of that were the case and this was a fresh look at it, reception would have been much better.
 
I recognize this is somewhat off topic, but is the supposed performance efficiency coming from the fact that GPUs won't have to render complex lighting and shaders anymore? Is efficiency (more for less) even a goal with this tech? Does this tech just shift the compute burden and we will still need crazy powerful GPUs to utilize effectively?
A person a few pages ago said it best. This is a tech not meant for the 50xx generation, in the same way that Ray Tracing wasn't meant for the 20xx generation. This tech will be completely feasible by the 70xx generation.
 
It's not off putting and it's the first generation of neural processing and it looks phenomenal. It certainly looks better than the games do with rasterized rendering. With this tech, gaming will see the generational leap we've been clamoring for and out the gate, people are shitting on one of the better enhancements we've seen in a while. The AI slop trend was true at first, but eventually be less true.
Two things can be true at the same time:

1) The current iteration we saw is off-putting to the public.

2) This tech has large potential and can be the game changer you are describing.

There, just like that, a person doesn't have to lie about number 1 just to be supportive of number 2.
 
This. People are tired of A.I. and every video being fake nowadays. So, when they see an overlay that looks like the A.I. they are sick of looking at, you're going to get this pushback. I imagine if none of that were the case and this was a fresh look at it, reception would have been much better.
100%. If no one had seen A.I. videos or pictures before in their lives, if they didn't know about A.I. PERIOD, they would've crapped their pants from excitement about the future. But now that they are exposed to the visuals of the tech, they have become cynical about it (it's like seeing the 100th Marvel movie: the CGI doesn't impress you at all anymore), and some have joined the hate train just for the fuck of it (or because they're afraid they'll lose their jobs), so they hate it.
 
A person a few pages ago said it best. This is a tech not meant for the 50xx generation, in the same way that Ray Tracing wasn't meant for the 20xx generation. This tech will be completely feasible by the 70xx generation.
I imagine my confusion comes from the fact that I would expect graphical fidelity to already look like this, coming from a 70xx generation GPU, using traditional rendering mixed with high quality ray tracing and letting ML Super Sampling and Ray Construction do their things.

Does this replace traditional game engines? Is the sales pitch really aimed at studios, promising them faster/more efficient development?

Because if you asked me "what would a game like RDR2 look like when aiming for 70xx series hardware", my answer would look like the fidelity we saw in some of the shots from the demos shown. In some cases, I would expect it to look even better, in comparison to some of the games that looked a little off.

Edit: To be clear, I'm not dunking on what was shown. It's just that I would be more impressed if they told me it was going to run on all 50xx series cards, maybe even with some limited support on the 40xx series. I already expect games to look like this on the next generation of GPUs.
 
Doesn't this guy have a crisis management consultant?

You never call your customers "wrong"; you say you didn't communicate your message well and created a wrong impression, and then you move on to re-communicating your message.
Except you're the CEO of the world's most valuable company, and the people you insult are no longer your customers but data centers and the companies of investor buddies. Then you can say whatever you like.
 
I was watching Angry Joe react, and he thought the characters looked better from a distance. He was like, "see, that looks good," and the only difference was that it was a character 10 feet away from the camera in Hogwarts. First-person games and cutscenes may be weird at first because they remind people even more of A.I. filters. Third-person games with the scenery upgrades will win people over eventually. One of the best screens was from AC Shadows. The tech will mature and devs will refine it to their liking. Until then we have to deal with all the TikTok-like reacts and memes lol
 
He's just ahead of his time, as we've seen before. [...]
Yep. Looking forward to everyone doing a 180 on this tech in the future.

Hey I wish they had real competition, that would mean lower GPU prices for us all but it is what it is.
Nah bro slightly cheaper cards with much worse upscaling, frame gen, and ray tracing features are actually competitive. Don't you want to miss out on upscaler updates when a new gen of cards comes out?
 
History is on his side. Gamers were in fact completely wrong about DLSS 1.0, which was also widely panned and pooh-poohed when it was first announced and in its earliest examples. It's very likely DLSS 5.0 will follow an even more accelerated trajectory. It's also important to note that the vast majority of the peanut gallery don't have hardware capable of running it, and won't for a decade or more, so there is certainly an aspect of console warring to it as well.
 