
Graphical Fidelity I Expect This Gen

I just finished watching it.
I can agree it needs work, which is probably why we don't have lengthy footage of it, but it's not out yet and in no one's hands, so his opinion is still based only on the footage they have released, which he doesn't like... cool. I do, and I see potential in it, and that's pretty much it?
There's a pretty stark difference between "not liking" and showing real errors .....
Massive disocclusion artifacts, ghosting, lighting direction without an actual source, boiling and pulsating on everything without motion vectors, or the massive interpretation fails with eyes "clipping" through eyelids etc. ... that's not in the realm of "liking", and it happens plenty in just a few seconds of very-best-case press material. And of course the fundamental architectural issue stands: the tech doesn't have access to the data you'd need to ever really stabilize things like lighting direction, because it doesn't know the attributes and positions of the sources. It is a purely... interpretative approach.
It absolutely is "just" an AI filter... and in that regard every bit of negative feedback is absolutely warranted.
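To make the light-direction point concrete, here's a toy Python sketch. The Lambertian setup and all the numbers are illustrative assumptions, not anything from the actual DLSS model; the point is only that a single observed pixel color is consistent with many different light configurations.

```python
import numpy as np

# Toy illustration of the "no light-source data" argument: under simple
# Lambertian shading, pixel intensity = albedo * max(0, N.L). A model that
# only sees final pixel colors cannot tell these two setups apart, because
# they produce (near-)identical pixels from different light directions.
def lambert(albedo, normal, light_dir):
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    return albedo * max(0.0, float(n @ l))

# Setup A: darker surface, light head-on.
px_a = lambert(0.8, [0.0, 0.0, 1.0], [0.0, 0.0, 1.0])
# Setup B: brighter surface, light tilted to the side.
px_b = lambert(1.0, [0.0, 0.0, 1.0], [0.6, 0.0, 0.8])

print(px_a, px_b)  # both ~0.8: the per-pixel observation is ambiguous
```

Same pixel, two different light directions; without the engine's actual light data, a screen-space model can only guess between them.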
 


Looks two generations ahead. It looks so much more grounded and realistic.

uh-huh...


53647836f8990bcdae593ab1d02113fd5d0e8f89_2_747x500.jpeg
 
DLSS 1 was shit. Every first version of a new tech is kinda shit.

If the worst it can do is some artifacts while the character blinks, then it's more than fine for a first version
and ghosting and disocclusion artifacts and broader consistency issues and non-existent light sources throwing directional light completely off a cliff, and of course the issue that the model simply doesn't have the data to "enhance" the picture and instead just reinterprets everything, which is apparent in the completely changed faces, e.g. ...
Nothing about what was shown so far was good, let alone accurate, just different from what it is supposed to "enhance" and riddled with artifacts of all kinds.
They should have put that hardware power behind more rays/bounces and the Ray Reconstruction.
 
and ghosting and disocclusion artifacts and broader consistency issues and non-existent light sources throwing directional light completely off a cliff, and of course the issue that the model simply doesn't have the data to "enhance" the picture and instead just reinterprets everything, which is apparent in the completely changed faces, e.g. ...
Nothing about what was shown so far was good, let alone accurate, just different from what it is supposed to "enhance" and riddled with artifacts of all kinds.
They should have put that hardware power behind more rays/bounces and the Ray Reconstruction.
*sigh* it does have the data to enhance the game 'cause it takes into account motion vectors. It's more profound than an AI "enhancing" a video that you've uploaded, for example




Nvidia fucked up with the demos, but some of them have examples of DLSS 5 not changing character models. The Starfield demo has DLSS not changing the models in terms of geometry. It's just better lighting, textures and subsurface scattering.

There's a screenshot with Grace from RE Requiem where it's just that: better lighting, SS and textures, but the same model, unchanged. Not the case with the initial video with Grace, 'cause Nvidia probably dialed DLSS 5 to 11 to show "how realistic it can be", completely changing her model.

It's up to devs
 
I've seen both clips multiple times and did not notice any of these issues. The eyes popping out of the eyelids happen while she blinks, for a fraction of a second; same goes for this weird lazy-eye effect on that dude. You don't notice it at all during the clip.

That's fine, but this is mostly in response to those trying to claim "It's just a change to lighting!"


Clearly it isn't. It's GenAI
 
*sigh* it does have the data to enhance the game 'cause it takes into account motion vectors. It's more profound than an AI "enhancing" a video that you've uploaded, for example
How exactly do motion vectors help with correctly identifying light sources and their attributes, or the exact geometry of models/assets?
Ah right, not at all, which is why we see the hallucination shit and the consistency issues.
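For context on the motion-vector point, here's a minimal, simplified Python sketch of what per-pixel motion vectors actually provide to a temporal technique. The array shapes and the blend factor are assumptions for illustration; note that nothing in this data encodes light positions or 3D geometry.

```python
import numpy as np

# Motion vectors give a lookup of where each pixel was in the previous
# frame; temporal techniques reproject and blend that history for
# stability. That's the whole signal: offsets, not scene data.
def reproject(prev_frame, motion_vectors):
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # motion vector = (dy, dx) offset from a pixel back to its prior location
    src_y = np.clip(ys - motion_vectors[..., 0], 0, h - 1)
    src_x = np.clip(xs - motion_vectors[..., 1], 0, w - 1)
    return prev_frame[src_y, src_x]

def temporal_blend(current, history, alpha=0.1):
    # exponential history accumulation, the core of TAA-style stability
    return alpha * current + (1 - alpha) * history

prev = np.arange(16, dtype=float).reshape(4, 4)
mv = np.zeros((4, 4, 2), dtype=int)
mv[..., 1] = 1  # the whole image moved one pixel right since last frame
history = reproject(prev, mv)
smoothed = temporal_blend(prev, history)
print(history[0])  # [0. 0. 1. 2.] -- row shifted, edge pixel duplicated
```

The edge duplication in the output is exactly where disocclusion artifacts come from: the history simply has no data for newly revealed pixels.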

It's up to devs
So far Nvidia has just talked about masking and sliders. We'll see how far that goes.
 
The most impressive thing with DLSS5 is what it does for NPCs

I mean fucking WOW

We're looking at a 2 generation difference here

ZjIIePEoMmRrlXVB.jpeg


VS

Screenshot-2026-03-18-at-10-36-03-AM.png


You're all going to love this tech when you see it running in 4K in front of you.
 
Has anyone seen the FIFA footage where the football is just a mess in motion? My thinking is that this tech is currently very dependent on MFG/frame generation to ensure performance, so you'll be getting artifacts from that as well. It's quite obvious their demo shots avoided motion as much as possible.
 
I feel like the AI hate is almost an Olympic-level performative act at this point. It's borderline hilarious to listen to people speak of AI tools like they're the devil himself.
 
How could you definitively say this with a purposefully awful Grace screengrab (non-path traced as well)…

…And then say this literally right afterwards:
That's how the game looks on base PS5 at 60fps with no ray tracing. It's a true representation of the game. It's not my fault it's ugly
 
The most impressive thing with DLSS5 is what it does for NPCs

I mean fucking WOW

We're looking at a 2 generation difference here

ZjIIePEoMmRrlXVB.jpeg


VS

Screenshot-2026-03-18-at-10-36-03-AM.png


You're all going to love this tech when you see it running in 4K in front of you.

occlusion.gif

Sure Jan GIF



people complaining about the issues with low RT resolutions haven't seen what Jensen-The-Jacket-Huang has in store for them next.
 
occlusion.gif

Sure Jan GIF



people complaining about the issues with low RT resolutions haven't seen what Jensen-The-Jacket-Huang has in store for them next.
You're nitpicking at issues they've already acknowledged will be sorted out come public release, and ignoring the giant fucking picture.

carry on.
 
lol at the thumbnails, ridiculous. As if a murder was committed or something


The more I watch of this video, the more I keep asking 'why didn't you mention these potential issues in your original coverage video?'

Like they're suddenly now listing out its problems in this new video. Now they're suddenly bringing up the other half of this tech.

This is what I mean, BlownUpRich: it just comes across as optically bad and almost disingenuous, and DF definitely doesn't help Nvidia's case here.

Transparency about this tech should have been there from the start, from all sides.
 
The more I watch of this video, the more I keep asking 'why didn't you mention these potential issues in your original coverage video?'

Like they're suddenly now listing out its problems in this new video. Now they're suddenly bringing up the other half of this tech.

This is what I mean, BlownUpRich: it just comes across as optically bad and almost disingenuous, and DF definitely doesn't help Nvidia's case here.

Transparency about this tech should have been there from the start, from all sides.

I think the reason here is clear. Nvidia paid some checks and they had to advertise it. Simple as that
 
The most impressive thing with DLSS5 is what it does for NPCs

I mean fucking WOW

We're looking at a 2 generation difference here

ZjIIePEoMmRrlXVB.jpeg


VS

Screenshot-2026-03-18-at-10-36-03-AM.png
Yes, crowd NPCs would be a good use for it, since no one really gives a shit about what pedestrian NPCs look like in video games.
You're all going to love this tech when you see it running in 4K in front of you.
Quick question: you are a big fan of TLOU2. It's on PC now. Someone is going to mod in DLSS5 support. Actually, you don't even need to; the Nvidia app now lets you do it. How would you feel if this incredible cutscene were replaced by the TV-show model?

Do you really want to lose this? Just to get the tv shit?

dd3f8e97d8b198143311ba6278ef1cc8d824bed0.gifv
34d6c36ca4ce0b9a26be168ae39a714312189687.gif


bdfed7c2097b3a48862df51598e7e1e3bd0d23fe.gif
da49ba474de0a51c52da6c110683740e88f8aef3.gif


THE-LAST-OF-US-Ep6-04.gif
TLOU-203-02.gif


Is this how you want to play Intergalactic for the first time?
 
You're nitpicking at issues they've already acknowledged will be sorted out come public release, and ignoring the giant fucking picture.

carry on.
Picture turns to boiling mush in motion... "nitpicking".

But since you seem to have insider information:
are they going to sort out the hallucinated light sources next?
 
Yes, crowd NPCs would be a good use for it, since no one really gives a shit about what pedestrian NPCs look like in video games.

Quick question: you are a big fan of TLOU2. It's on PC now. Someone is going to mod in DLSS5 support. Actually, you don't even need to; the Nvidia app now lets you do it. How would you feel if this incredible cutscene were replaced by the TV-show model?

Do you really want to lose this? Just to get the tv shit?

dd3f8e97d8b198143311ba6278ef1cc8d824bed0.gifv
34d6c36ca4ce0b9a26be168ae39a714312189687.gif


bdfed7c2097b3a48862df51598e7e1e3bd0d23fe.gif
da49ba474de0a51c52da6c110683740e88f8aef3.gif


THE-LAST-OF-US-Ep6-04.gif
TLOU-203-02.gif


Is this how you want to play Intergalactic for the first time?
Holy hyperbole...
 
Great things are happening on Twitter. Console gamers are finally realizing we had great-looking next-gen games all along.



Insane to me how everyone just ruined an entire generation for themselves by playing in terrible-looking 720p 60fps modes. These games all had 1440p 30fps modes reconstructed to 4K, with image quality very similar to what gamers are getting with PSSR2. But no, I must play modern games at 60fps on my 5-year-old $399 console.
 
uh-huh...


53647836f8990bcdae593ab1d02113fd5d0e8f89_2_747x500.jpeg

Duh, this again? This is a game bug. Stop spreading misinformation.
This bug exists even in the original 2006 game.

bz17q5z9bdh91.jpg


How exactly do motion vectors help with correctly identifying light sources and their attributes, or the exact geometry of models/assets?
Ah right, not at all, which is why we see the hallucination shit and the consistency issues.


So far Nvidia has just talked about masking and sliders. We'll see how far that goes.

It's not just motion vectors, it recognizes the colors of the pixels. Lighting in games is nothing more than variations in pixel colors.
The AI recognizes what is skin, hair, fabric, metal, foliage, water, and also the lighting conditions.
The neural network injects photorealistic lighting, subsurface scattering (light penetrating the skin), reflections, glows in fabrics, more precise shadows, etc.

A developer could also achieve the same effect, but it would take a long time and might not be performant. Not to mention the expertise required to avoid falling into the uncanny valley.
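The "recognize materials, then restyle them" idea described above could be caricatured in a few lines of Python. To be clear, the material classes, thresholds, and the skin adjustment below are entirely invented for illustration; the real system is a learned network, not hand-written rules.

```python
# Hypothetical caricature of a per-pixel "classify then restyle" pipeline:
# guess a coarse material class from channel values, then apply a
# per-class tweak (a crude stand-in for a subsurface-scattering boost).
def classify_pixel(rgb):
    r, g, b = rgb
    if r > g > b:
        return "skin"      # warm, descending channels: guess skin
    if g > r and g > b:
        return "foliage"   # green dominant: guess foliage
    return "metal"         # everything else: guess metal

def restyle(rgb):
    material = classify_pixel(rgb)
    if material == "skin":
        # fake SSS: lift the red channel where we guessed skin
        return (min(1.0, rgb[0] * 1.1), rgb[1], rgb[2]), material
    return rgb, material

out, mat = restyle((0.7, 0.5, 0.4))
print(mat, out)  # classified as skin, red channel lifted
```

Even in this toy form, the failure mode the skeptics describe is visible: the "material" is a guess from pixel values, so a wrong guess restyles the wrong thing.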

how-nvidia-dlss-5-works.jpg


The uncanny valley that everyone felt is probably because the subsurface scattering became very precise and resembled a human being, which can be repulsive if you're not careful.

maxresdefault.jpg
 
I agree with the measured take, but not on the positive side of things, and that means a lot from me since I don't usually like how negative this forum can be on a lot of things. However, this specifically should be treated with a high level of skepticism until proven otherwise.

This guy did research and explains it all really well at 4:04



And he came away from it all just as worried.


Vex only looks at things superficially and goes with the flow of the internet.

I remember he used to speak highly of Nvidia GPUs and get a bit of hate. Then he started making videos praising AMD and criticizing Nvidia, which made his videos get more views.

YouTubers are more concerned with making the YouTube algorithm like them, so they can earn more money.



Looks two generations ahead. It looks so much more grounded and realistic.


 
Duh, this again? This is a game bug. Stop spreading misinformation.
This bug exists even in the original 2006 game.
Bummer. Now only 99 other, more problematic issues remain. Gee.

It's not just motion vectors, it recognizes the colors of the pixels. Lighting in games is nothing more than variations in pixel colors.
You are mixing up "presentation" and "calculation". Just from pixel color you don't get accurate directionality, let alone the attributes of the source(s).

The AI ~~recognizes~~ assumes what is skin, hair, fabric, metal, foliage, water, and also the lighting conditions.
FTFY. And it also doesn't have any deeper material information at that point.
The neural network injects photorealistic lighting, subsurface scattering (light penetrating the skin), reflections, glows in fabrics, more precise shadows, etc.
Ha, bullshit. The NN doesn't even know where the light comes from, and the "assumptions" in the examples so far range from okayish to grossly wrong. It's working on a 2D plane, not nearly enough information to generate anything accurate.

A developer could also achieve the same effect, but it would take a long time and might not be performant.
contrary to a model running on a second 5090, of course... how far we could've gotten if that power had instead gone into more rays for the PT model and more power for RR...

The uncanny valley that everyone felt is probably because the subsurface scattering became very precise and resembled a human being, which can be repulsive if you're not careful.
riiight, it's the SSS and not the photo-studio lighting and the averaged porn actress face
Ken Jeong Yes GIF by The Masked Singer
 
Duh, this again? This is a game bug. Stop spreading misinformation.
This bug exists even in the original 2006 game.

bz17q5z9bdh91.jpg




It's not just motion vectors, it recognizes the colors of the pixels. Lighting in games is nothing more than variations in pixel colors.
The AI recognizes what is skin, hair, fabric, metal, foliage, water, and also the lighting conditions.
The neural network injects photorealistic lighting, subsurface scattering (light penetrating the skin), reflections, glows in fabrics, more precise shadows, etc.

A developer could also achieve the same effect, but it would take a long time and might not be performant. Not to mention the expertise required to avoid falling into the uncanny valley.

how-nvidia-dlss-5-works.jpg


The uncanny valley that everyone felt is probably because the subsurface scattering became very precise and resembled a human being, which can be repulsive if you're not careful.

maxresdefault.jpg
This and the glitchy Starfield NPC video confirm it's using only screen-space data, and very likely MFG. This is definitely going to have motion artifacts and will be reliant on RT to get proper reflections from off-screen assets.
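The off-screen limitation mentioned here is easy to sketch. Below is a deliberately simplified 1D version of a screen-space ray march (the real thing operates in 2D against a depth buffer); the structure of the loop is the point, not the specifics.

```python
# Why purely screen-space techniques miss off-screen content: a reflection
# ray is marched through the (here 1D) depth buffer, and the moment it
# leaves the frame there is simply no data left to sample.
def screen_space_trace(depth_buffer, start_x, direction, max_steps=64):
    x = start_x
    for _ in range(max_steps):
        x += direction
        if x < 0 or x >= len(depth_buffer):
            return None  # ray exited the screen: reflection data unavailable
        # (a real tracer would compare the ray's depth against depth_buffer[x])
    return x

depth = [1.0] * 32
print(screen_space_trace(depth, start_x=5, direction=-1))               # None: left the frame
print(screen_space_trace(depth, start_x=5, direction=1, max_steps=10))  # 15: stayed on screen
```

This is the same reason screen-space reflections in current engines fall back to cubemaps or RT: once the ray leaves the frame, the buffer has nothing to offer.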
 
How everyone plays GTA6 at launch:


c7s472ehb00f1.gif
xx28i7e4w9ze1.gif


How Gymwolf plays at launch after enabling DLSS5:


The key has always been the balance between realism and fantasy.

Everyone wants to be immersed and feel like they can reproduce in GTA VI the best car-chase scene from their favorite action movie, which goes beyond visuals (physics).

On the other hand, people don't get that impressed by a low-cost live-action movie with a cheap Instagram AI filter just because it looks *photorealistic*.

That is when fantasy needs to step in, and the artistic touch makes sure that we are still immersed in a fantasy world, in a game that has personality and an artistic signature.

We want games as a tool to escape reality with some dose of fantasy, not to feel like we are in a reality simulation.

Maybe this works for sports games and a few racing games.

This DLSS 5 just looks like a mask put on the character's face with a click, and that's it. Zero effort and zero artistic value.
 
Bummer. Now only 99 other, more problematic issues remain. Gee.

Feel free to talk about the issues. Technical details, not emotional opinions.

You are mixing up "presentation" and "calculation". Just from pixel color you don't get accurate directionality, let alone the attributes of the source(s).

They are literally pixels! The direction of the light is literally pixels directed to a region.

FTFY. And it also doesn't have any deeper material information at that point.

You seem to have no idea how a machine learning model works.

Ha, bullshit. The NN doesn't even know where the light comes from, and the "assumptions" in the examples so far range from okayish to grossly wrong. It's working on a 2D plane, not nearly enough information to generate anything accurate.

You seem to have no idea how a machine learning model works.

contrary to a model running on a second 5090, of course... how far we could've gotten if that power had instead gone into more rays for the PT model and more power for RR...

The model is still being fine-tuned, that's how it works. ML models start out huge and are refined over time.
NVIDIA Funhouse required three GTX 1080s to run. After a few months, it was running on one GTX 1060.

riiight, it's the SSS and not the photo-studio lighting and the averaged porn actress face

Light has a transformative power in a scene. Complain to Capcom if they made the model like that. And besides, I don't see what the problem is; she looks even more beautiful now.


This and the glitchy Starfield NPC video confirm it's using only screen-space data, and very likely MFG. This is definitely going to have motion artifacts and will be reliant on RT to get proper reflections from off-screen assets.

Yes, something Oliver from DF mentioned; he observed it in Starfield's reflections.
 
This trailer shows all the next-gen features in Saros: 3D Audio, Adaptive Triggers, Haptic Feedback and near-instant loading.

Only problem? Every single one of these features was in the first game. They didn't add anything new or next-gen 5 years later, despite having access to a brand-new mid-gen console. So the marketing team ran the same trailer again lol



If this isn't emblematic of Sony studios' effort this gen, I don't know what is. We used to think that once these indie studios got absorbed by big publishers like Sony, they would get an influx of cash and graduate to another tier. Instead they made the same game again with a different coat of paint, while still taking 5 years.

But we can't call them lazy. No, that's too harsh. Give me a fucking break.
 