
Graphical Fidelity I Expect This Gen

It's mostly memes at this point, but I don't even agree that the lighting is "improved". It's punchy but it's extremely gaudy and unrealistic. It looks like everything is lit in a photo studio.



nA9q4IdeXYOxT6xb.png


Is pre-rendered CGI lit like that? Are movies lit like that? Is real life lit like that?

The answer is no.


I think there's potential in the tech but the initial showing couldn't have been much worse than it was. It looks frankly ridiculous right now.


And I know it's in pre-alpha or whatever, but the fact that it took two 5090s to run this is crazy as well. Imagine what two 5090s could do with traditional rendering. If this AI enhancement is this expensive, what's the fucking point?
Like I said, this tech is still in its infancy; some pics look good, some not so much.
 
No way the 6070 has a big enough VRAM pool or actual oomph; it won't be stronger than the current 5080, and that's way too weak. 5070 to 5080 is about +50% performance in games (and obviously 4 GB more VRAM). :P
IMHO the lowest we could hope for is a 6070 Ti, i.e. the same die as the 6080 but cut down a bit, with 24 GB of VRAM minimum and likely well over $1k street price too. That's why my take is that even 1.5 years from now, DLSS5 will still be niche AF, kind of like RT/DLSS was during Turing.
6080 it is then.
Right. But again, people are going to judge what they're being shown, and rightfully so. Not what the tech might theoretically evolve into down the line.
Not the devs'/Nvidia's fault if people have zero imagination and zero forward thinking. After many years they should know that tech improves with time; right now they sound like children watching their first tech special...

What they are showing now looks mostly impressive, except some faces here and there, but people are focusing only on the small bad part and ignoring the huge good part.
 

It's absolutely Nvidia's fault for not anticipating the public response to this showing. Anyone with half a brain should've seen this coming.
 
They could have avoided the more blatant TikTok filters, but they gave an idea of how transformative the tech can be.

The real issue here is the idea that anyone was going to give literally the largest company on our planet some slack or the benefit of the doubt. That's not the audience's responsibility.
 

Generally AI has had a tendency to make portrait imagery look like it's been run through clarity or edge contrast boosting which might be happening here, but that's just out of the box settings.

This is where the artist should draw the line and make a decision to make characters look more artful rather than photo real. Current games have this hybrid real and rendered look because tech isn't there to bring it much further anyways. But AI can easily generate a high quality offline rendered look too.

So yeah, DLSS5 is definitely going to make uncanny-valley-type enhancements, but artists will just have to be more careful about applying it.
 
The real issue here is the idea that anyone was going to give literally the largest company on our planet some slack or the benefit of the doubt. That's not the audience's responsibility.
The biggest company in the world shouldn't care about some haters who can't see beyond their noses and who are still going to end up buying their products, because they are so far beyond what AMD can offer. That's the advantage of being the best of the best.

In a way I appreciate that Jensen doesn't give a fuck, tbh. It's not like he's going to lose ANY money or clients after this. You know it, I know it, he knows it.
 
Path traced:

nPOx7hAzTE4l1U6c.jpeg


DLSS5:

eI4teqjipOGBcMs9.jpeg



My Photoshop:


PQuSOYl7OaKFh3Ch.png


Ported the eyes and mouth directly from the path traced image. Adjusted the opacity between the DLSS5 and path traced face by about 75%, to blend it better. Lowered the overall scene brightness to closer match original.

I assume developers will have far more control than I did with the masking.
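For anyone curious, the blend described above is straightforward to reproduce outside Photoshop. Here's a minimal sketch in Python/NumPy, assuming both frames are loaded as same-sized RGB arrays; the `blend_faces` name, the stand-in frames, and the exact brightness factor are my own illustration, with only the ~75% opacity figure coming from the post:

```python
import numpy as np

def blend_faces(path_traced: np.ndarray, dlss: np.ndarray,
                opacity: float = 0.75, brightness: float = 0.9) -> np.ndarray:
    """Blend the DLSS5 frame over the path-traced frame at a given
    opacity, then scale overall brightness back toward the original."""
    pt = path_traced.astype(np.float32)
    ai = dlss.astype(np.float32)
    blended = (1.0 - opacity) * pt + opacity * ai  # simple alpha blend
    blended *= brightness                          # dim to match the source
    return np.clip(blended, 0, 255).astype(np.uint8)

# Two stand-in 4x4 RGB "frames": dark original, bright AI output.
pt = np.full((4, 4, 3), 40, dtype=np.uint8)
ai = np.full((4, 4, 3), 200, dtype=np.uint8)
out = blend_faces(pt, ai, opacity=0.75, brightness=0.9)
print(out[0, 0])  # → [144 144 144]
```

A real version of this would also need the eye/mouth masking step from the post, which this sketch leaves out.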
 
Yours looks so much better. It's more subtle and doesn't have that very obvious fake generated image look that so many ai photos use.
 

Really good. Definitely an improvement
 
I am hyped.
I mean, the only time I get the impression that I'm actually looking at a convincing human being in a game is during cutscenes like in Uncharted 4, TLOU 2, etc., that require tons of resources to create.
In-game, 99.9% of all characters still resemble a more or less decently assembled N64 robot.
DLSS5 has the potential to dramatically ramp up facial fidelity for basically every single frickin' NPC on the fly. This is huge. I just hope they'll do something on a similar scale to improve animations; otherwise, it would look jarring.

Concerning RE9, a super hot 'AI Slop Grace' is certainly something I'm looking forward to, but what I'm really interested in is the DLSS5 rendition of the zombies. If they go the whole hog and make them look like actual corpses, it would be horrifyingly scary, truly unsettling. :messenger_hushed:
 
And people think devs can't fix this shit when a dude in a forum did it in 5 min...
 
It just highlighted the model's details.

These games weren't designed to pump DLSS5.

In any case, it's an ugly, wrinkled old lady. The model in RE9 is great.
 
A few things:
  1. They already have this running on a single 5090 in their labs. The dual setup was just for the demo. They are confident this will run on all 5000 series GPUs at launch, just at lower resolutions or lower settings.
  2. Go to Civitai.com. I can't post pics because they are almost all nude, but most random people who train their own sexy women models have already figured out how to get rid of this gaudy AI slop look you are seeing in Hogwarts and Starfield. I have seen girls who look so realistic and so normal, I honestly couldn't tell if they were AI. I have seen AI models that specifically design 7s instead of 10s. I've seen AI models with zero makeup. Maybe I will have my AI put some clothes on these women so I can post some examples here. Nvidia simply needs to train their model a bit more to avoid the Sora AI slop look. This will be fixed by launch, or by year two.
  3. Their model is purposefully trying to replicate photorealism. My guess is that these idiots used a model that was trained on real-life footage instead of movies, which tend to be more cinematic and closer to what video games typically try to emulate. I think they missed the mark here by going for photorealism instead of cinematic realism.
 
And people think devs can't fix this shit when a dude in a forum did it in 5 min...
Exactly. I don't understand the hate against DLSS 5. It seems like a technology at the start of its journey. We'll see more and more robust examples in the future, I'm sure.

The thing is, AI is not going anywhere. Quite the opposite: its use will increase much, much more in the future.
 
So yeah, DLSS5 is definitely going to make uncanny-valley-type enhancements, but artists will just have to be more careful about applying it.
I think the artists kinda sabotaged this on purpose. Imagine you are an RE9 dev and you just shipped a game, and these guys show up with a bag full of money that your execs spend on cocaine and hookers, while you now have to get back to work on something that completely bypasses your original artwork. They probably did the least amount of work possible and sent Nvidia this awful-looking AI slop footage, hoping it would create a backlash like the one we are seeing now.

This is good, and it would be interesting if they can play around with the sliders to reduce the intensity of the photorealism filters. If you can do it with Photoshop, so can the developers. The fact that they did not tells me they were taking the piss and wanted Nvidia to get fucked.

During the demo, the DLSS research team talked through the level of granularity available. Developers don't just get an on/off switch. They get intensity controls that can be dialed anywhere, not just full strength. They get spatial masking, so they can set the water enhancement to 100%, wood to 30%, and characters to 120%, all independently within the same scene. They get color-grading controls for blending, contrast, saturation, and gamma. All of this runs through the existing SDK, which means studios already using DLSS and Reflex have a familiar pipeline to work with.
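Taken at face value, those per-material controls amount to a per-pixel lerp whose intensity can exceed 1.0. Here's a toy sketch of that idea, assuming access to the base frame, the enhanced frame, and a material-ID buffer; all names, the array layout, and the `apply_enhancement` function are my own illustration, not Nvidia's actual API:

```python
import numpy as np

def apply_enhancement(base: np.ndarray, enhanced: np.ndarray,
                      material_ids: np.ndarray,
                      intensity: dict) -> np.ndarray:
    """Blend an AI-enhanced frame over the base render with a separate
    intensity per material ID. Intensities above 1.0 extrapolate past
    the enhanced frame (the '120% on characters' case)."""
    base_f = base.astype(np.float32)
    enh_f = enhanced.astype(np.float32)
    out = base_f.copy()
    for mat, k in intensity.items():
        mask = material_ids == mat
        # lerp (k <= 1) or extrapolate (k > 1) along base -> enhanced
        out[mask] = base_f[mask] + k * (enh_f[mask] - base_f[mask])
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy 2x2 frame: uniform base (100) and enhanced (200) values,
# with a material-ID buffer marking water, wood, and a character.
WATER, WOOD, CHARACTER = 0, 1, 2
ids = np.array([[WATER, WOOD], [CHARACTER, WOOD]])
base = np.full((2, 2, 3), 100, dtype=np.uint8)
enh = np.full((2, 2, 3), 200, dtype=np.uint8)
out = apply_enhancement(base, enh, ids,
                        {WATER: 1.0, WOOD: 0.3, CHARACTER: 1.2})
print(out[..., 0])  # water → 200, wood → 130, character → 220
```

A shipping implementation would obviously do this on the GPU with smooth mask edges rather than hard per-pixel IDs, but the dial-per-material idea is the same.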
 
Why show it when it's not ready?
 
Why can't Nvidia train the model on stuff like Dune, lol. The Dune 3 trailer looks amazing.



We need directors like Nolan and Villeneuve in the gaming industry. We had Kojima, but he's phoned it in this gen. Neil went to Hollywood. There is no one else even remotely close to these Hollywood directors. Just phenomenal work from Villeneuve on the Dune trilogy.
 