
Graphical Fidelity I Expect This Gen

That top pic is from Love, Death, and Robots, right? So it was rendered offline. It does look more convincing than Grace to me, but Grace was done in real-time. And today is the first time we've seen the DLSS 5 tech, which is only going to improve.

In film, yes. In games, no.

Also, in movies you have artists making sure the CG facial animations are accurate. There is a reason these CG movies cost $200 million: a lot of work goes into it.

Starfield NPCs have zero work put into their facial animations. AI goes in with the photorealism filter, but it cannot change the facial animations, or rather add them. Hence the uncanny valley effect, which creates a disconnect that makes our brains simply reject it as fake.

The few cutscenes they showed of Resident Evil looked OK. No issues whatsoever, because Capcom animators mocapped and likely hand-keyed the facial expressions.

The point I'm making is that the Twitter dude is trying to claim these faces look "bad" to so many people because we're not used to seeing faces in such high fidelity and our brains don't know how to process it. Which is patently false.
 
STARFIELD.gif


I'm gonna say it with my chest:

This is the biggest graphical leap we've had since PS1-PS2. Easily.
Damn, that's insanely impressive. Do you think the PS6 will support something like this?
 
My GTA bias will be on full blast in this post, but pics like these still look way more pleasing to the eye than anything Nvidia showed today.
The characters look like people, but they don't look out of place, like they're trying to be "too" realistic but missing the mark somehow. Idk how to explain it.
And now imagine it on PC with path tracing.
Like DLSS upscaling and framegen, this tech will only get better, but this ain't it yet for me, I think. Still plenty of time for them to train better models before release, though.

XqkqIC1rEFWZFyOY.jpg
Same. I love it. That screenshot of that man blew me away, because I know an artist sat there and modeled it, another artist lit the scene, another animated the facial expressions, and perhaps dozens if not hundreds more worked on just that one shot, trying to get the chain physics just right, the reflections to look that good, the arm hair physics, etc. It's a celebration of everything we like in art. An achievement in human ingenuity.

Having an AI model insert itself in the middle and replace all that with data it trained on movies, porn, and Netflix TV slop is kinda unfulfilling. I mean, I will take it, but I appreciate the other work more.
 
I'm sorry but this is bullshit:



We see characters rendering at photorealistic levels in film regularly and when done right, we don't get this uncanny-valley, AI slop nonsense feeling because it actually passes for "real", or at least "realistic". These AI faces do not.



I mean, shit, we have a good example from the first page of this very thread.

Which looks more "uncanny valley" to you?

This:
JsubuPB.jpeg



Or this:
bomqLrf6xMU4vw1a.png

The top one is more gamey looking, and clearly rendered.

If you prefer the rendered look that's fine but Grace (makeup preferences aside) is obviously the more realistic character.
 
So we are completely ditching this, which would have been the baseline visuals for PS6 and Xbox Magnus characters, and then the later path tracing 2.0 on PC for a REAL makeover. AI slop filter!

b2b90de1ff74646ff709943096fdbb15efbd1904.gif








This is the path tracing we would have gotten on PC next gen, and then consoles would have gotten Resident Evil-level path tracing (1.0). But no, fuck this... we need a REAL makeover!


Exactly. NVIDIA needs to reevaluate DLSS 5; this is the future of gaming I wanted, above...

Hopefully NVIDIA tweaks DLSS 5, because the tech is awesome, but the results, besides the Assassin's Creed: Shadows picture, are deepfake-like and have an eye-straining, superimposed-painting look to them...
 
The point I'm making is that the Twitter dude is trying to claim these faces look "bad" to so many people because we're not used to seeing faces in such high fidelity and our brains don't know how to process it. Which is patently false.
He isn't saying they look "bad" to people. He's saying that the delta between before DLSS 5 and after is larger for faces than for environments: we haven't had faces at that fidelity in video games, and that's why so much of the conversation about DLSS 5 is focused on them.
 
Tech illiterates on the interweb will argue that if you render in-game assets with a real path tracer in Maya and they look completely different, then it's "not the artist's vision."
 
Are people going crazy? Like what the fuck does that even mean?



Don't engines also limit and "filter" what artists can do THROUGH them? Look at Elden Ring's concept art and then look at the game; they look worlds apart. In fact, the DLSS 5 model looks way closer to the intended concept art than the game engine's render. I cannot believe the amount of cope people are huffing right now. And his profile picture is Cloud, from a game whose pre-rendered CGI intro cutscene in FF7 Remake looked light years better than the in-game graphics.

DLSS 5 can bridge that gap and close it. WTF is he on? 😭😭

I think this stuff looks so awful (too clean, like today's movies), but this X user is stupid. PC games are all about mods, so no one cares about artistic anything. If we're being for real, you'll be able to turn this crap off, just like when RTX came out and people made the same memes. And half those people will end up using it and gloating with it on.

I still think it looks like crap, because I know that in motion it would look incredibly cheap and overproduced. But if it actually looked good and worked as an option, heck yeah I'd use it.
 
Guys, I cannot believe that we have reached this point THIS SOON! I literally could not believe what I was seeing when Jensen showed it live; the visual jump is so big that people on the internet cannot comprehend it yet.
DLSS 5 can bridge that gap and close it. WTF is he on? 😭😭
This is the biggest graphical leap we've had since PS1-PS2. Easily.
This tech is the future whether people like it or not, and I believe it will blur the line between real-time and CGI.
AHAHAHAHHAAAA!!! TOLD YA SO!!!
What the hell is with the pushback?

How can you look at the comparison directly above this post and not think DLSS 5 looks better?
The top one is more gamey looking, and clearly rendered.

If you prefer the rendered look that's fine but Grace (makeup preferences aside) is obviously the more realistic character.
Tech illiterates on the interweb will argue that if you render in-game assets with a real path tracer in Maya and they look completely different, then it's "not the artist's vision."
apple-tv-pluribus-apple-tv.gif
 
I think Nvidia engineers have kinda misread the market here. This is currently happening at every major tech firm, so I'm not surprised that the execs and engineers at Nvidia are also in their own little bubble.

Here is the thing: gamers largely praised games that went with photorealistic art styles and character models. Hellblade 2, the Matrix demo, and Marvel 1943 were roundly praised by the same people now showing absolute disdain for this. I still think it's largely because the AI faces, completely missing the original lighting pass, resemble Sora AI videos more than the original game. But it's still a valid concern that Nvidia's engineers and marketing guys were apparently completely oblivious to.

The examples below show that people are enamored with photorealism in video games. This is possible on a 10 tflops card. We are going to 30-40 tflops cards next year. We can easily get to photorealism organically, without taking any shortcuts. I think Nvidia just didn't realize that we wanted to get there ourselves.

rBWdEWZ.gif
8ba82e817e6ee5d2b9658a3b624a20137fcd3c68.gifv



b570ccd329c084906dd82b0654db042ba5491a3a.gifv
HDko0qpaIAA8Cbn
 
I think Nvidia engineers have kinda misread the market here. This is currently happening at every major tech firm, so I'm not surprised that the execs and engineers at Nvidia are also in their own little bubble.

Here is the thing: gamers largely praised games that went with photorealistic art styles and character models. Hellblade 2, the Matrix demo, and Marvel 1943 were roundly praised by the same people now showing absolute disdain for this. I still think it's largely because the AI faces, completely missing the original lighting pass, resemble Sora AI videos more than the original game. But it's still a valid concern that Nvidia's engineers and marketing guys were apparently completely oblivious to.

The examples below show that people are enamored with photorealism in video games. This is possible on a 10 tflops card. We are going to 30-40 tflops cards next year. We can easily get to photorealism organically, without taking any shortcuts. I think Nvidia just didn't realize that we wanted to get there ourselves.

rBWdEWZ.gif
8ba82e817e6ee5d2b9658a3b624a20137fcd3c68.gifv



b570ccd329c084906dd82b0654db042ba5491a3a.gifv
HDko0qpaIAA8Cbn

Yep. Literally every one of these faces look better than what Nvidia showed today.
 
No, it won't look like concept art, because this AI model is not trained on that concept art. It's one AI model, likely trained on photorealism, and it applies the detail, the character models, the faces, and other textures based on the data it's been trained on.

There is a reason Grace looks like a pornstar: a lot of the early AI models were trained on porn. In fact, you can download your own AI model with no NSFW filters and it will immediately let you undress every woman, without needing to bring in any other specialized models. So no, it won't look like concept art; it WILL, however, look like a porno.

For it to look like concept art, devs would need to go in and adjust various settings, but those settings are limited to how hard they want to lean into photorealism. Again, they do not have access to the AI model, and they cannot train it on the concept art; the best they can do is adjust how far they want the realism to go. My guess is that once devs see the backlash, they will dial the realism of faces down quite a bit so it doesn't look like you are playing as Lana Rhoades.

My GTA bias will be on full blast in this post, but pics like these still look way more pleasing to the eye than anything Nvidia showed today.
The characters look like people, but they don't look out of place, like they're trying to be "too" realistic but missing the mark somehow. Idk how to explain it.
And now imagine it on PC with path tracing.
Like DLSS upscaling and framegen, this tech will only get better, but this ain't it yet for me, I think. Still plenty of time for them to train better models before release, though.

XqkqIC1rEFWZFyOY.jpg

WIi1KxS9LFiXH1Ot.jpg
I think people who dislike it are focusing way too much on the faces.

They're missing the bigger picture ... the potential it has for actual gameplay and environment visuals...
  1. Faces really only matter during cutscenes, not moment-to-moment gameplay.
  2. Players spend about 95% of their time traversing the world, interacting with the environment.
That's where this tech really shines. The environmental upgrades I've seen in clips are fantastic.

HDj6TzZaoAADlzr



This tech is brand new, still in its infancy. And we're already getting results like this, from studios that aren't even top tier.

Imagine when CDPR, Game Science, Pearl Abyss, R*, get their hands on this? It's going to be absolute insanity.

I am more excited for the potential.

This has the potential to speed up development in the long run. Because inevitably, AI will make spending years optimizing a game a thing of the past. Development becomes faster, iteration becomes easier, and teams can spend less time fighting technical limitations and more time building bigger worlds, deeper systems, and more ambitious ideas.
 
I think people who dislike it are focusing way too much on the faces.

They're missing the bigger picture ... the potential it has for actual gameplay and environment visuals...
  1. Faces really only matter during cutscenes, not moment-to-moment gameplay.
  2. Players spend about 95% of their time traversing the world, interacting with the environment.
That's where this tech really shines. The environmental upgrades I've seen in clips are fantastic.

HDj6TzZaoAADlzr



This tech is brand new, still in its infancy. And we're already getting results like this, from studios that aren't even top tier.

Imagine when CDPR, Game Science, Pearl Abyss, R*, get their hands on this? It's going to be absolute insanity.

I am more excited for the potential.

This has the potential to speed up development in the long run. Because inevitably, AI will make spending years optimizing a game a thing of the past. Development becomes faster, iteration becomes easier, and teams can spend less time fighting technical limitations and more time building bigger worlds, deeper systems, and more ambitious ideas.
I will always acknowledge and agree regarding the vast potential of AI.

But so far I still dislike the execution.

Here's another image I forgot to bring up:

0pdjkkptwgpg1.png


Where did his shadows go? He now looks like a photographer's photobucket sample image.

I just can't buy the whole 'accuracy' argument. Not yet. Who knows, maybe 2.0 of this mess might be a large improvement, but I need to see it get there first.
 
In film, yes. In games, no.

Also, in movies you have artists making sure the CG facial animations are accurate. There is a reason these CG movies cost $200 million: a lot of work goes into it.

Starfield NPCs have zero work put into their facial animations. AI goes in with the photorealism filter, but it cannot change the facial animations, or rather add them. Hence the uncanny valley effect, which creates a disconnect that makes our brains simply reject it as fake.

The few cutscenes they showed of Resident Evil looked OK. No issues whatsoever, because Capcom animators mocapped and likely hand-keyed the facial expressions.
Have you people completely lost it?
Do you think Grace looks as good as or better than these? It doesn't even look close. It looks 10x worse.
It's almost like everyone in this thread has had collective amnesia.

There are no free lunches: either you spend that second-5090's worth of power rendering the path tracing 2.0 environments below, or you spend it on your beloved AI slop filter.

You can't have it both ways. People say, "well, it's optional, you can just turn it off." These people need to turn on their brains.
It's NOT optional. When DLSS was added, it took up a significant portion of die space that would otherwise have gone to the traditional GPU pipeline.
Now, that was worthwhile and a worthy cause from day 1. But turning it off didn't remove the tensor cores from your GPU and give you that silicon back. In the same way, a second 5090's worth of tensor cores has now been reserved for this AI slop.

So rather than the power of those tensor cores being spent on the below, it will be spent on AI slop.
Turning it off just means ALL that die space on your GPU becomes useless, because it has been reserved for AI slop.

bmI6m.png


widen_1840x0.jpeg



6960c4130f4e8.gif


giphy.gif


d4a89f7e9403218b79d67bde24287c57522abbf8.gif




dd61ff146438613.62b17c22bd338.jpg


18884e146438613.62b17c22c0a61.jpg


14ffd7146438613.62b17c22bb26e.jpg
 
I think people who dislike it are focusing way too much on the faces.

They're missing the bigger picture ... the potential it has for actual gameplay and environment visuals...
  1. Faces really only matter during cutscenes, not moment-to-moment gameplay.
  2. Players spend about 95% of their time traversing the world, interacting with the environment.
That's where this tech really shines. The environmental upgrades I've seen in clips are fantastic.

HDj6TzZaoAADlzr



This tech is brand new, still in its infancy. And we're already getting results like this, from studios that aren't even top tier.

Imagine when CDPR, Game Science, Pearl Abyss, R*, get their hands on this? It's going to be absolute insanity.

I am more excited for the potential.

This has the potential to speed up development in the long run. Because inevitably, AI will make spending years optimizing a game a thing of the past. Development becomes faster, iteration becomes easier, and teams can spend less time fighting technical limitations and more time building bigger worlds, deeper systems, and more ambitious ideas.
One of the first things I said in this thread about this tech is that I recognize the potential, especially when it comes to lighting, but Nvidia's deliberate choice to focus so much on people instead of materials feels wrong at the moment, and the more footage I watch, the less I like it. It's hard not to focus on that when they put it front and center.

We'll see how it goes, but I'm reading so much negative feedback that developers might actually be shook by this backlash. If so, it was a completely unnecessary blunder from Nvidia.
 
My GTA bias will be on full blast in this post, but pics like these still look way more pleasing to the eye than anything Nvidia showed today.
The characters look like people, but they don't look out of place, like they're trying to be "too" realistic but missing the mark somehow. Idk how to explain it.
And now imagine it on PC with path tracing.
Like DLSS upscaling and framegen, this tech will only get better, but this ain't it yet for me, I think. Still plenty of time for them to train better models before release, though.

XqkqIC1rEFWZFyOY.jpg

WIi1KxS9LFiXH1Ot.jpg
I still don't believe the game will look that good in real time. I'll believe it when I see it.
 
Have you people completely lost it?
Do you think Grace looks as good as or better than these? It doesn't even look close. It looks 10x worse.
It's almost like everyone in this thread has had collective amnesia.

There are no free lunches: either you spend that second-5090's worth of power rendering the path tracing 2.0 environments below, or you spend it on your beloved AI slop filter.

You can't have it both ways. People say, "well, it's optional, you can just turn it off." These people need to turn on their brains.
It's NOT optional. When DLSS was added, it took up a significant portion of die space that would otherwise have gone to the traditional GPU pipeline.
Now, that was worthwhile and a worthy cause from day 1. But turning it off didn't remove the tensor cores from your GPU and give you that silicon back. In the same way, a second 5090's worth of tensor cores has now been reserved for this AI slop.

So rather than the power of those tensor cores being spent on the below, it will be spent on AI slop.
Turning it off just means ALL that die space on your GPU becomes useless, because it has been reserved for AI slop.

bmI6m.png


widen_1840x0.jpeg





giphy.gif


d4a89f7e9403218b79d67bde24287c57522abbf8.gif
Sure they look good.

But.

One of these games is barely even a game, took 5 years to make, and has very little gameplay. Graphics at that level of quality are not possible today on a large scale.


Until today.

The others are just tech demos. Games don't look like that yet

(until today)

And Kojima's game won't release for another 5 years if we're lucky. It will also likely not have much extended gameplay, like Hellblade 2.


DLSS 5 changes the equation. It can deliver faces that look just as good, at a fraction of the development time, in real-time, in full-scale games...something any studio could realistically achieve.


STARFIELD.gif
 
I think Nvidia engineers have kinda misread the market here. This is currently happening at every major tech firm, so I'm not surprised that the execs and engineers at Nvidia are also in their own little bubble.

Here is the thing: gamers largely praised games that went with photorealistic art styles and character models. Hellblade 2, the Matrix demo, and Marvel 1943 were roundly praised by the same people now showing absolute disdain for this. I still think it's largely because the AI faces, completely missing the original lighting pass, resemble Sora AI videos more than the original game. But it's still a valid concern that Nvidia's engineers and marketing guys were apparently completely oblivious to.

The examples below show that people are enamored with photorealism in video games. This is possible on a 10 tflops card. We are going to 30-40 tflops cards next year. We can easily get to photorealism organically, without taking any shortcuts. I think Nvidia just didn't realize that we wanted to get there ourselves.

rBWdEWZ.gif
8ba82e817e6ee5d2b9658a3b624a20137fcd3c68.gifv



b570ccd329c084906dd82b0654db042ba5491a3a.gifv
HDko0qpaIAA8Cbn
Weren't we just watching performative gamers trash "photorealistic" graphics like Hellblade 2 because they lack "art," waste time and resources, and supposedly don't matter? They always trashed those, all in the name of "art."

But see, I will say this: I agree that they messed up the marketing for DLSS 5. It should NOT have been about faces. I was iffy on the whole faces thing ever since they showed the Zorah demo back at CES in January 2025. Ideally, it would have been light years better marketing if they had announced and shown a brand new game running in real time using DLSS 5; that way there wouldn't be any prior expectations from gamers, and they'd simply look at it and be in on the hype.

In fact, the Grace example (which seems to be the most relevant picture from Nvidia) is what soured a lot of people on their first impressions of DLSS 5.

They also could have shown other games with a stylized or cartoony style using DLSS 5, to remind people that this tech is much more versatile while giving the impression of respecting the artistic integrity of each game. Because aside from faces, the environments and material quality in these supported games look generationally better; it's not even close. I never want to go back from DLSS 5.

There are heaps of misinformation tornados running wild on the internet right now. Gamers have become way too conspiratorial and delusional; it's actually sickening. I hope the next time Nvidia shows DLSS 5, they show it with much more understanding and judicious use of the tech.

DLSS 5 faces are still great IMO; they just need hand-tuning. Leon's face looks quite accurate while massively increasing the fidelity; it's actually insane! If you were to render that face with traditional techniques, you'd need like 100x more computing power to do it.
 
Have you people completely lost it?
Do you think Grace looks as good as or better than these? It doesn't even look close. It looks 10x worse.
It's almost like everyone in this thread has had collective amnesia.

There are no free lunches: either you spend that second-5090's worth of power rendering the path tracing 2.0 environments below, or you spend it on your beloved AI slop filter.

You can't have it both ways. People say, "well, it's optional, you can just turn it off." These people need to turn on their brains.
It's NOT optional. When DLSS was added, it took up a significant portion of die space that would otherwise have gone to the traditional GPU pipeline.
Now, that was worthwhile and a worthy cause from day 1. But turning it off didn't remove the tensor cores from your GPU and give you that silicon back. In the same way, a second 5090's worth of tensor cores has now been reserved for this AI slop.

So rather than the power of those tensor cores being spent on the below, it will be spent on AI slop.
Turning it off just means ALL that die space on your GPU becomes useless, because it has been reserved for AI slop.
If it takes off, then yes, it's definitely a possibility that they go with a multi-die setup, with the AI chip either stacked on top or taking up half the die. It doesn't look like it, though. Gamers have rejected it. And like I told you earlier, this won't even be a thing on consoles, which are going to be lucky to get proper AI upscaling next gen. The PS6 die is like 280 mm2.

I think Leon looks fine in the cutscene. Grace looks like a pornstar with a bit too much makeup on in the cutscene, but way better than the AI slop character models of Starfield and Oblivion. I stand by my opinion that the animated faces will look way better than the uncanny valley Starfield character models.

Tx5H4aRw0uMXBHwf.jpg
 
I will always acknowledge and agree regarding the vast potential of AI.

But so far I still dislike the execution.

Here's another image I forgot to bring up:

0pdjkkptwgpg1.png


Where did his shadows go? He now looks like a photographer's photobucket sample image.

I just can't buy the whole 'accuracy' argument. Not yet. Who knows, maybe 2.0 of this mess might be a large improvement, but I need to see it get there first.

His skin and the clothing materials look so much better to me with it on. You talk about shadows, but I see better shadowing under his cap. Honestly, I think it's something we have to play with before judging it bad or inferior outright.
 
His skin and the clothing materials look so much better to me with it on. You talk about shadows, but I see better shadowing under his cap. Honestly, I think it's something we have to play with before judging it bad or inferior outright.
It isn't about whether you like it; it's about the number of people claiming all of these results are 'realistically accurate' regardless of facial changes, texture changes, oversharpening effects, lighting changes, shadow changes, and artistic changes.

People are being told to disregard the glaring issues they are seeing with their own eyes.

I agree that we need to see more, but the tech also needs more work.
 
Have you people completely lost it?
Do you think Grace looks as good as or better than these? It doesn't even look close. It looks 10x worse.
It's almost like everyone in this thread has had collective amnesia.

There are no free lunches: either you spend that second-5090's worth of power rendering the path tracing 2.0 environments below, or you spend it on your beloved AI slop filter.

You can't have it both ways. People say, "well, it's optional, you can just turn it off." These people need to turn on their brains.
It's NOT optional. When DLSS was added, it took up a significant portion of die space that would otherwise have gone to the traditional GPU pipeline.
Now, that was worthwhile and a worthy cause from day 1. But turning it off didn't remove the tensor cores from your GPU and give you that silicon back. In the same way, a second 5090's worth of tensor cores has now been reserved for this AI slop.

So rather than the power of those tensor cores being spent on the below, it will be spent on AI slop.
Turning it off just means ALL that die space on your GPU becomes useless, because it has been reserved for AI slop.

bmI6m.png


widen_1840x0.jpeg



6960c4130f4e8.gif


giphy.gif


d4a89f7e9403218b79d67bde24287c57522abbf8.gif




dd61ff146438613.62b17c22bd338.jpg


18884e146438613.62b17c22c0a61.jpg


14ffd7146438613.62b17c22bb26e.jpg
This is what I want in the future of gaming.
 
It isn't about whether you like it; it's about the number of people claiming all of these results are 'realistically accurate' regardless of facial changes, texture changes, oversharpening effects, lighting changes, shadow changes, and artistic changes.

People are being told to disregard the glaring issues they are seeing with their own eyes.

I agree that we need to see more, but the tech also needs more work.

I'm sure it does need work; that's why it's not out yet and runs on a dedicated GPU. It will improve, like RTX and DLSS before it.

I'm excited to see what it could bring to this game :

 
Sure they look good.

But.

One of these games is barely even a game, took 5 years to make, and has very little gameplay. Graphics at that level of quality are not possible today on a large scale.


Until today.

The others are just tech demos. Games don't look like that yet

(until today)

And Kojima's game won't release for another 5 years if we're lucky. It will also likely not have much extended gameplay, like Hellblade 2.


DLSS 5 changes the equation. It can deliver faces that look just as good, at a fraction of the development time, in real-time, in full-scale games...something any studio could realistically achieve.


STARFIELD.gif
But it looks like Sora AI slop; the results need to look natural and non-eye-straining.
 
Ehhh...."artistic vision" etc. are just platitudes at this point. More often than not I hear these terms being deployed to handwave away deeper discussion, and as a rallying cry for those people who think "graphics don't need to get any better" blah blah. The best looking games are always the ones that understand the techniques and limitations of the hardware they're targeting, and combine tech and art to create something striking whether it's 2026 or 2006 or 1996. But who's to say what any dev team would have done with fewer or no limitations? Video games are also products and any final "artistic vision" is the result of tremendous compromise to even get to the finish line.

That's what a lot of the people who harp on "artstyle" choose to ignore. A lot of the "artistic vision"/"artstyle" discourse these days comes from people who use it to disparage technical ambition... a million tired variations on "ray tracing is a scam, all you need is ARTSTYLE!" and I've heard them all. Most of the "best looking games" that always get brought up (AC Unity, Arkham Knight, etc.) were pushing insane tech in their day, often with significant drawbacks to performance or image quality, or just being buggy as fuck. These games were using every trick in the book and inventing new ones along the way. Plenty of them had the same issues people blame modern games for (bugs, shitty performance, etc.), which people forget were always an issue when ambition meets tech meets art.

Back in the day everyone knew that graphical fidelity and advanced tech were joined at the hip with a game's "art" but these days it's far more common to hear that you don't need good tech if you have good "art" almost to the point where people start sounding morally opposed to cutting edge rendering, as if you can't have both :messenger_tears_of_joy:

Again, nobody here is really saying that, it's just super common on social media etc.
 
Idk, I'm not huge into AI and think it's a problem, but I can't deny DLSS 5 looks like a generational leap from the source. I do wonder how it would look in games that are more stylized, but beyond the faces, just the trees in AC Shadows look insane, and the clothing/objects... idk, I'm torn. But how would you develop for this? Just do sort of bare-minimum graphics and have DLSS upgrade it?
 
Quoting my own post below. It seems the footage came straight from devs. I wonder if devs made them look like AI slop on purpose. Artists cannot be happy about this. I know Nvidia said devs were thrilled but I'm not buying it and these awful faces just scream trolling to me.
Then what is this?



Either way, they're ok with it, the fucking thing is not out and is optional so what are you all bitching about?


As in graphically much better?


Lmao so the devs from these studios worked on this and turned in those horrendous AI clips?

Yep, they all sabotaged this on purpose. I bet the artists went out of their way to make it look as AI slop as possible and nvidia was so clueless that they didn't even realize it was on purpose.

I wonder if they all colluded in secret or if they all had the same idea. Absolutely hilarious.
 
The solution to the problem is simple: tweak DLSS 5 to make the characters' faces change less. I am sure Nvidia will be able to do that.

That way you keep the lighting improvements without making big visual changes to the characters.
 
ASSCREEDGAF.jpg


I removed the text boxes from the image.

Bar some resolution issues (due to the image itself), these are the best graphics I've seen in my life.

Photo modes are gonna go crazy next gen.
 
The solution to the problem is simple: tweak DLSS 5 to make the characters' faces change less. I am sure Nvidia will be able to do that.

That way you keep the lighting improvements without making big visual changes to the characters.

NUH UH BECAUSE HURR DURR AI SLOP AMIRIGHT

They can definitely do that.
 
ASSCREEDGAF.jpg


I removed the text boxes from the image.

Bar some resolution issues (due to the image itself), these are the best graphics I've seen in my life.

Photo modes are gonna go crazy next gen.
Many dislike DLSS 5 because of how it lights character faces, but this tech goes far beyond character lighting, as this pic shows. Developers can use it just to increase general lighting quality, like a new global illumination/lighting tech, and not every studio is going to use it on character models; even those that do will control how much it affects them. (Just imagine everyone's reaction if they had shown this tech only on the environment.)
In the past, game developers have talked about how lighting work and color grading can be game changers, but the problem is that they can't maintain many of those lighting effects in real time, even with path/ray tracing. That's where AI comes in: a tech that can correct and enhance lighting, shading, and color grading every frame in real time. It's just a normal evolution of real-time graphics.
 
AMD was talking about neural rendering and "FSR Diamond" when they talked about Project Helix. So, it would seem this kind of thing could be possible with RDNA5/UDNA, but the performance might not be there?

Maybe they'll implement some kind of lite version of this that touches up the lighting in broader strokes but doesn't touch the details. It would be more performant while not being so jarring for all the "AI slop hurr durr" NPCs out there.
 
Have you people completely lost it?
Do you think Grace looks as good as or better than these? It doesn't even look close. It looks 10x worse.
It's almost like everyone in this thread has had collective amnesia.

There are no free lunches: either you spend that second-5090's worth of power rendering the path tracing 2.0 environments below, or you spend it on your beloved AI slop filter.

You can't have it both ways. People say, "well, it's optional, you can just turn it off." These people need to turn on their brains.
It's NOT optional. When DLSS was added, it took up a significant portion of die space that would otherwise have gone to the traditional GPU pipeline.
Now, that was worthwhile and a worthy cause from day 1. But turning it off didn't remove the tensor cores from your GPU and give you that silicon back. In the same way, a second 5090's worth of tensor cores has now been reserved for this AI slop.

So rather than the power of those tensor cores being spent on the below, it will be spent on AI slop.
Turning it off just means ALL that die space on your GPU becomes useless, because it has been reserved for AI slop.

bmI6m.png


widen_1840x0.jpeg



6960c4130f4e8.gif


giphy.gif


d4a89f7e9403218b79d67bde24287c57522abbf8.gif




dd61ff146438613.62b17c22bd338.jpg


18884e146438613.62b17c22c0a61.jpg


14ffd7146438613.62b17c22bb26e.jpg
This is the future of fidelity I want...
 