
DLSS 5 - Yes or No?

Do you think DLSS 5 is the future?

  • Yes and I like it

  • Yes but I don't like it

  • No, it's ugly and we'll forget about it

  • No opinion/other

  • No, we need less AI not more


Results are only viewable after voting.

Another Nvidia bot.

I am flattered.

Especially since I don't have NVIDIA in my PC.
 
Don't worry. I'm sure you'll be able to buy an AMD graphics card in a few years that has a much worse implementation that only works in a few games.
Consoles surely won't do anything like this.

Going the PC route, it's clear: if you like this, absolutely go PC. Otherwise, consoles should be fine. Nvidia will be progressing this for the entire generation.
 
It's obviously the future; I can't see a scenario in which people go back to how things were before after having access to this shortcut. I also liked everything I saw. Still, I'm somewhat worried about what it implies for artists and artistic expression.
The thing is, in the future (which is almost here...) there simply won't be any artists, just a game designer who gives the AI a prompt and receives a game with a visually consistent style in return. There will undoubtedly still be unique games, but I'm sure there will be many more games with the same artistic style, just like with the "Unreal" games.
 
No, I do not believe there would have been the same backlash. The video is titled DLSS 5, and ML upscaling has grown to be one of the most beloved features on recent GPUs. We all anticipated the next evolution of DLSS, especially after the disappointing results of 4.5, not this.

RTX Remix also leverages AI and can dramatically alter the art direction of games as well, yet it was welcomed as a useful tool because they never tried to pass it off as something else. Setting expectations and naming your features appropriately is important in marketing.
Ultimately, the name doesn't actually matter. I agree that this shouldn't have been called DLSS, but I understand why they did it. At the end of the day, DLSS (which is super sampling) and this, alongside frame generation, all work from the same core data set. What differs is the end goal. DLSS focuses on figuring out what is missing from the lower-res image versus a hypothetical higher-res one. FG fills in the gaps between two actual rendered frames... and this generates lighting and material interactions.

In truth, if all these companies weren't being obtuse (because Nvidia isn't the only one guilty of this), the actual name for all this stuff should have been more akin to DL-SS, DL-FG, DL-NR (for this latest addition), etc., with the only commonality being that they are all based on DL and the data sets it requires.
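A toy sketch of what I mean, in Python of my own making (obviously not Nvidia's real architecture; an identity function stands in for the trained network, and each feature is just a different head over the same machinery):

```python
import numpy as np

def backbone(features: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network; here just an identity."""
    return features

def dl_ss(low_res: np.ndarray, scale: int = 2) -> np.ndarray:
    """DL-SS: infer what's missing from a low-res frame versus a
    hypothetical higher-res render (naive nearest-neighbour upscale here)."""
    return backbone(low_res.repeat(scale, axis=0).repeat(scale, axis=1))

def dl_fg(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """DL-FG: fill the temporal gap between two real frames (naive blend)."""
    return backbone((frame_a + frame_b) / 2.0)

def dl_nr(g_buffer: np.ndarray) -> np.ndarray:
    """DL-NR: generate lighting/material response from scene data."""
    return backbone(g_buffer)

frame = np.zeros((540, 960, 3), dtype=np.float32)
print(dl_ss(frame).shape)  # (1080, 1920, 3)
```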

Having said all that, I don't get the backlash at all. It's a tool. And does it work? Absolutely, and from a practical perspective, that is all that matters. We can go crazy over how it's used by whichever dev uses it, and in time that will inform a best-practices use case... but that has nothing to do with the tool and its merits.

So while I am seeing a lot of people talking about its IG-filter-ness, I am instead looking at what it can actually do if and when used right, but more importantly, what it means for gaming hardware. I am legitimately more concerned that this is running right now on two 5090s, and I'm wondering why that is and how it could ever be scaled down. I'm wondering what this means for RT: does it mean we can reduce RT cores and instead add in more matrix cores? Because this tech would mean we'd need even fewer rays going into a scene for lighting.
 
In reality, because the poll is so shit, the Nos and Don't Like Its are actually winning. Thankfully.
Yeah, there is no "no, I don't like it" option, which would be the logical one, but given who made the poll, that was expected. Still, 44% enthusiasm for this thing is sad.
Ultimately, the name doesn't actually matter. I agree that this shouldn't have been called DLSS,
I don't agree.
Digitalfoundry Light Super Slop is a very appropriate name for this technique.
 
I picked no opinion for two reasons. First, the faces have that AI look to them that I'm not a fan of; but also, it's not releasing for a while and has time to improve. Six months is a long time, and there are already AI "people" online that are indistinguishable from real people, so this can always get better.

Also, it was running on a dual-5090 setup, so it will probably be unusable for 99.99% of people when it does release, and for a while afterwards, so I don't see the point of getting upset about something I most likely couldn't use anyway.
 


I agree with this. To me it's not just about looking good or not (even if so far I really don't like the look of the faces); it's more about how generative AI in general makes everything less... valuable.
You can make, see, and get everything without effort, making everything less special.
It also doesn't help that a lot of the output looks very similar, because it has that typical "AI" look our brains now recognize instantly from seeing it every day.

If the next Pixar film were completely AI generated, without real art direction, artists' thinking behind it, or effort, would you not mind? To me it changes a lot. The process by which something was created does change its value.

Yes, with DLSS 5 we will have to see if it can just be a setting similar to ray tracing, without changing faces and the like, and with total control in the developers' hands, becoming just another lighting tool. But clearly, given the showcase Nvidia chose to present here, that wasn't their first intention. What we saw was pretty scary for art direction in general.


The tech is currently experimental and nobody has given it a thorough analysis. When it comes out it will be optional and will most likely take a good while to become a standard (games still have RTX as an option, and people still bitch when it's mandatory).

So I ask again, what's his issue? Fake this, fake that... In the same way some people dislike frame gen, he's free not to use it when it's out.
Pretending to be on this high ground and defending the artists, when those same artists might very much be on board, is hilarious. White knight syndrome, I guess.
 
The tech is currently experimental and nobody has given it a thorough analysis. When it comes out it will be optional and will most likely take a good while to become a standard (games still have RTX as an option, and people still bitch when it's mandatory).

So I ask again, what's his issue? Fake this, fake that... In the same way some people dislike frame gen, he's free not to use it when it's out.
Pretending to be on this high ground and defending the artists, when those same artists might very much be on board, is hilarious. White knight syndrome, I guess.
The issue is that it looks like total shit, and they're telling us it looks amazing, buy buy buy. They showed this and wanted people to be impressed, but it looks terrible, because they have no taste and don't care whether anything looks good; they just want to sell.

It's just nVidia doing the same shit as those people who wired up their own AI filter over a game, and it looked like shit there, too.

They don't show a ton of motion, because when they do, it makes everything look like a gen AI video.
 
Game by game basis. Some games I can see myself using it; other games, such as Like a Dragon, I probably wouldn't want it.

It's better for games going for realism, like FIFA.
 
gaf truly has the worst opinions on the planet. Thankfully, it's super insignificant. Fuck this nvidia trash.
To be fair, only the first option is "I'm into this".
The second one is "I don't like it but I'm a doomer."
And then there are two other No options, and the one about AI was added long after the poll was posted.
So like someone else said, the No side is winning ✊
...By a little bit.
 
The issue is that it looks like total shit, and they're telling us it looks amazing, buy buy buy. They showed this and wanted people to be impressed, but it looks terrible, because they have no taste and don't care whether anything looks good; they just want to sell.

It's just nVidia doing the same shit as those people who wired up their own AI filter over a game, and it looked like shit there, too.

Looks like shit to you and every other sheep YouTuber grifting over every piece of news they can spin as a negative.
I'm currently playing Space Marine 2 on the Pro. Pause on Titus's face and he looks goofy as fuck. If I could have a switch that pushes the lighting and makes his face look less cartoony, I would go for it.

No one is holding a gun to your head and asking you to buy. Like I said, people bitch, but Nvidia keeps dominating the market lol
 
Surprised by the negativity. It's true it may distort the style of some games as envisioned by the developers, but at the end of the day it's optional, and I'd guess it can be tuned by the developers themselves before release too. I think overall it's cool.
 
Looks like shit to you and every other sheep YouTuber grifting over every piece of news they can spin as a negative.
I'm currently playing Space Marine 2 on the Pro. Pause on Titus's face and he looks goofy as fuck. If I could have a switch that pushes the lighting and makes his face look less cartoony, I would go for it.

No one is holding a gun to your head and asking you to buy. Like I said, people bitch, but Nvidia keeps dominating the market lol
Fixed it for you.
 
Yes, of course this is the future, and it's going to be completely under the developers' control; no TikTok filter unless that is their intention. Not sure how such a simple thing can make people panic...

People want shorter development cycles, but they are against a tool that is going to speed development up tenfold.
 
Something like it (FSR, PSSR) may become widely adopted, but it's extremely unlikely that this tech will.

As much as I don't like it, AMD and console makers are probably heading in a similar direction. It might be a little less advanced than DLSS 5.0, but it will be close enough. Consoles can't do generational jumps anymore, so the PS6 will be a lot heavier on AI.
 
YES

People are being extremely retarded about this, like a child that just learned a new word and can't stop screaming "slop! slop! slop!"

It's very impressive for a first iteration. It remains to be seen how it actually does with a single card running both the game rendering and the DLSS-added detail on top.
My question is, seeing how Starfield looks path-traced with this despite not being so, whether we can save performance by skipping path tracing and using DLSS 5 alone.
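Some placeholder frame-time arithmetic to frame that question (every millisecond figure below is invented for illustration, not a DLSS 5 measurement):

```python
# Frame budget on a single card at 60 fps. All costs are assumptions.
budget_ms = 1000 / 60      # 16.7 ms per frame

raster_ms = 8.0            # base rendering pass (assumed)
path_trace_ms = 9.0        # full path tracing pass (assumed)
neural_ms = 7.0            # DLSS 5-style generation pass (assumed)

stacked = raster_ms + path_trace_ms + neural_ms   # neural on top of PT
replaced = raster_ms + neural_ms                  # neural instead of PT

print(f"PT + neural: {stacked:.1f} ms -> {1000 / stacked:.0f} fps")
print(f"neural only: {replaced:.1f} ms -> {1000 / replaced:.0f} fps")
# With these made-up numbers, replacing path tracing fits the 16.7 ms
# budget; stacking the neural pass on top of it does not.
```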
 
I'm sure it can get better, but right now I just think it's hilarious that female Capcom employees provided input on old Leon to make him as attractive as possible, down to the tiniest details, leading to high praise from fans for their work, only to have an algorithm change his bone structure as an example of AI improvement, LOL.
 
Bad poll, but no: the AI is taking liberties I'm not OK with. Nvidia seems to be getting high on its own supply. The fact that they even thought people would be OK with some of this stuff is kind of crazy to me. The Hogwarts Legacy faces are just hideous and totally inappropriate, the lighting changes shown in Oblivion were awful, and the Virgil Van Dijk image stops actually looking like the player.

Maybe if devs can somehow take control of the process and it helps them speed things up - but not as some kind of filter applied over all games.
 
female japanese character probably gonna end up like generic korean idol AI slop
Grace in DLSS 5 is just as realistic/unrealistic as Leon is, especially taking into account that he's 50 years old in RE9 with a body an amateur (aka not on roids) bodybuilder in his prime would be proud of :P

When you're a Westerner only exposed to pierced and tattooed whales walking the streets, those faces/figures look like "AI slop" ;D
 
We need a "Doesn't matter because I can't afford it" option. I mean, I could stop saving for a boat and go all in on GPUs, but that would be lame.
 
Everyone is going to be like "OMG *click* this is ASS" *clicks Yes to turn on DLSS 5*
"I totally won't use it." (ultra DLSS 5 = ON)
"Nope, never gonna use it." (unzips)
It's MW2 all over again.
 
Putting aside the cost of two RTX 5090s to generate images that could be achieved with Kojima-studio skill or as FMV, the real question is: with Moore's law reaching its end, how do we expect to take 1600-1800 watts of GPU processing on 2nm lithography down to a 100-watt console GPU power draw to make it the future?

The answer is we won't.
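Rough math makes the point. All figures here are my assumptions: the 1600-1800 W estimate for the demo rig, and a generous guess of 1.5x perf-per-watt per full node step:

```python
import math

demo_watts = 1700      # midpoint of the 1600-1800 W estimate (assumed)
console_watts = 100    # typical console GPU power budget (assumed)

reduction_needed = demo_watts / console_watts   # ~17x
per_node_gain = 1.5    # generous perf/W gain per full node step (assumed)
steps = math.log(reduction_needed) / math.log(per_node_gain)

print(f"Efficiency gain needed: {reduction_needed:.0f}x")
print(f"Full node generations at {per_node_gain}x each: ~{steps:.0f}")  # ~7
```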

As for the Virgil Van Dijk (and Leon) graphics: when playing a football game you never see the players from those camera angles during gameplay; that is a replay camera, and no one is going to commit £10k to GPUs for marginally nicer replays than Konami's eFootball already does.
 
It has potential for sure. I've no doubt they have it turned up to 11 to be flashy for the presentation. I would expect there to be less aggressive settings come release time.

I agree with the other people that say it shouldn't be called DLSS though.
 
Yep. Obviously early stages and definitely in uncanny-valley territory, but with advancements in R&D and tech and proper use by developers... I can see this becoming standard by the time the next-next generation hits.

Naysayers are in for a rude awakening when this comes to consoles. Where Nvidia steps, AMD isn't far behind.
 
Putting aside the cost of two RTX 5090s to generate images that could be achieved with Kojima-studio skill or as FMV, the real question is: with Moore's law reaching its end, how do we expect to take 1600-1800 watts of GPU processing on 2nm lithography down to a 100-watt console GPU power draw to make it the future?

The answer is we won't.

As for the Virgil Van Dijk (and Leon) graphics: when playing a football game you never see the players from those camera angles during gameplay; that is a replay camera, and no one is going to commit £10k to GPUs for marginally nicer replays than Konami's eFootball already does.

Yeah RTX is too taxing, no one in their right mind will ever use it.

Fixed it for you.
I get it, you have very high standards and you're here to steer this game industry towards the purest form of art.
Dude, just admit other people think differently than you and move on.
 
Putting aside the cost of two RTX 5090s to generate images that could be achieved with Kojima-studio skill or as FMV, the real question is: with Moore's law reaching its end, how do we expect to take 1600-1800 watts of GPU processing on 2nm lithography down to a 100-watt console GPU power draw to make it the future?

The answer is we won't.

As for the Virgil Van Dijk (and Leon) graphics: when playing a football game you never see the players from those camera angles during gameplay; that is a replay camera, and no one is going to commit £10k to GPUs for marginally nicer replays than Konami's eFootball already does.
The same thing could have been said about ray tracing and path tracing a decade or two ago. And now we have capable hardware and games running on commercial, mainstream hardware.
 
Yeah RTX is too taxing, no one in their right mind will ever use it.


I get it, you have very high standards and you're here to steer this game industry towards the purest form of art.
Dude, just admit other people think differently than you and move on.
If my standards of not having games look like a complete dumbass took a shit on the screen while wanking are what you'd consider the purest form of art, then yes, I have high standards.

 
The tech is currently experimental and nobody has given it a thorough analysis. When it comes out it will be optional and will most likely take a good while to become a standard (games still have RTX as an option, and people still bitch when it's mandatory).

So I ask again, what's his issue? Fake this, fake that... In the same way some people dislike frame gen, he's free not to use it when it's out.
Pretending to be on this high ground and defending the artists, when those same artists might very much be on board, is hilarious. White knight syndrome, I guess.
I have theories as to why this is, and why there's been so much negativity/bullshit among gamers, especially PC gamers, over the last half decade.
  • Content creators and YouTubers with little knowledge spewing bullshit to their audience, which gets views, which then creates a feedback loop where they have to keep acting that way in order to keep getting views. The BS spreads and becomes the general consensus among those who don't have any real ability to think for themselves. Think about it. If someone told you they regularly watched Asmongold or any of the other big streamers, would you think anything positive about that person? I'd immediately disregard any opinion they have, as I know it's not their own and absolutely no real thought went into it.
  • A near complete lack of curiosity, critical thinking, or cognitive depth. Many do not attempt to understand anything, but instead are merely reacting to words and waiting for their opinion to be given to them by the rest of the hive mind.
  • A distinct lack of understanding of how game rendering works and how many things are faked, lower than native res, baked, etc. Look at Threat Interactive, for example. He's a grifter that has been exposed as an idiot who uses DMCA takedowns to silence critics, yet even he has a following of morons that continue to watch his bullshit videos. It doesn't matter to them that people with actual experience or knowledge have shown him to be a fraud.
  • Many people built a PC during the PS4 era, when you could build a machine for very little money that could absolutely destroy the consoles. Getting to 1080p60 with high settings was not difficult and represented a significant improvement over what the consoles offered. These people did not have the experience of building a machine and having it be unable to run the latest/greatest games at high settings, so they can't understand why that would ever be the case.
  • The slowing down of advancements in traditional rasterized rendering due to the decreasing benefits of node changes. Previously, jumping to a new node offered massive improvements in performance, power draw, and efficiency. As we reach the limits of physics, however, it's become clear that we cannot rely on those node changes to deliver the kind of upgrades they did in the past, so companies have had to find new ways to spend the limited die space. This is what led to nVidia dedicating hardware to RT and ML. They saw what was coming and made the change well ahead of time so they could be ready for it. AMD didn't, because their entire GPU business model is to wait for nVidia to do something and then react to it.
  • AMD's inability to compete with nVidia on feature set. Whether it's upscaling, frame gen, or ray tracing enhancements, AMD has demonstrated that they simply do not have the capability to face nVidia head-on. If you're an AMD fanboy, it behooves you to shit on anything new nVidia does as a form of post-purchase rationalization.
  • The PS5 and XSX represented a significant improvement over the previous console generations. These became the new baseline hardware, which means a significant number of PC gamers actually had machines that were below the new target spec for most developers. While this wasn't a huge deal at first, we've seen it become more of an issue with time as developers leave non-RT capable hardware completely behind. Look at how pissed people were when Indiana Jones or Doom The Dark Ages required RT hardware to even boot, despite the fact that the first RT cards came out in 2018, a full two fucking years before the new consoles.
 
"Yo dawg! I heard you like Instagram so I put Instagram in your game so you can filter while gaming."
If you are old enough to recognise this meme, you are old enough.
 
The thing is, in the future (which is almost here...) there simply won't be any artists, just a game designer who gives the AI a prompt and receives a game with a visually consistent style in return. There will undoubtedly still be unique games, but I'm sure there will be many more games with the same artistic style, just like with the "Unreal" games.
If that happens I hope there'll always be a gaming niche that caters to people who like the works of real people with real thoughts and original ideas. It's not that the results aren't impressive or pleasing to the eye, at least at first glance. It's that for cold, hard technical reasons, these machines can never create something creative, daring or spontaneous. Just remixes of stuff that has already been done. It's certainly not art.
 
I think it looks impressive and has a lot of potential. But I agree that the character faces look a bit too uncanny.
Also, Nvidia had two 5090s running those games, with a whole separate 5090 just for DLSS 5, so the computing power needed for this is also a concern.
If they can improve the faces and make it run well without needing a whole separate 5090, then this could be fruitful.

But if this is just some weird push to make all games run completely with AI and have us rent computing power from the cloud, then I will be against this thing with every fiber of my being.
 
I mean, we always knew this was the future as soon as generative AI for images and video became a thing; why is everyone so surprised? This is the future of rendering, at least until AI makes games on its own.

Deal with it, it's amazing.
 
The same thing could have been said about ray tracing and path tracing a decade or two ago. And now we have capable hardware and games running on commercial, mainstream hardware.
I take it you aren't familiar with Moore's law and how it has slowed to a snail's pace? You'd need something like an equivalent drop from 120nm to 12nm to get from 1800 watts down to 100 watts, but these two Nvidia GPUs are already on such advanced lithography that we'll need a major breakthrough in fabrication just to get the power draw down to 450 watts.
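Spelling that arithmetic out (the perf/W rule of thumb is an assumption about Dennard-era scaling, not a measured figure):

```python
target_reduction = 1800 / 100       # 18x power drop needed
linear_shrink = 120 / 12            # 10x feature-size drop
density_gain = linear_shrink ** 2   # ~100x transistors per unit area

print(f"Power reduction needed: {target_reduction:.0f}x")
print(f"Shrink: {linear_shrink:.0f}x linear, {density_gain:.0f}x density")
# In the Dennard era a ~10x shrink plausibly bought savings on that order;
# post-Dennard, each node step buys far less, which is the whole problem.
```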
 