
Graphical Fidelity I Expect This Gen

the vision as soon as you leave the "directed" cutscenes:
This is funny, but the whole point is completely off the mark.
There will be no issues with "artistic vision" because it's the artists who will be integrating the tech into their games.
The whole rage festival is based on a completely wrong premise.
This is the funniest thing about DLSS 5 right now.
 
Just an FYI, changing graphics settings disables ray reconstruction so be sure to turn that back on.

Performance is really weird. I switched from Cinematic to Ultra and gained nothing. Maybe the day one patch broke something.

lol what can I say? I like pretty graphics.

The first area is not particularly impressive: very dark, with crushed blacks. Also, lots of effects on screen at low res, so it felt very dated.

But then the sun comes up, and you are sitting in a valley lifted straight out of RDR2, and yeah, it's gorgeous. It's basically RDR2.5. Or maybe RDR2.8. I thought Kingdom Come 2 was RDR2.5, but this looks way better than that.

The level of detail is fantastic. Rocks and mountains look fantastic. Draw distance is fantastic. Lighting looks beautiful. It's a stunner. Maybe not up there with say Avatar or AC Shadows, but it is doing things like Draw Distance and rocks much better. It's a very beautiful game.

I really like the animations. I like that I can destroy everything. Very cool-looking cutscene mocap work as well.
I think the introduction looked pretty shit, between what you said and the PS4-tier models/faces, but it improves once you are in the open world. The brief part in the sky temple was also good, and the city is where it looks its best. But UE5 this is fucking not; the game looks best when the camera is pulled away.

There must be something wrong in my settings, because mountains and rocks are nowhere near "fantastic", and yet I'm playing at mostly Cinematic, some things on Ultra, and RR turned on. So maybe we have different meanings for the word "fantastic".

Is DLSS Performance mode fucking my textures? But I can't do anything more than Performance mode if I want to keep RR on and have a decent framerate.

So yeah, the game looks exactly how I expected it to look from the trailers: good highs, very bad lows, and very uneven.


YT compression didn't pull an AC Shadows with this one, at least for me.

Maybe later I'm gonna post some zoomed pics of these fantastic rocks and mountains...

I only played like 90 minutes and it was very late, so this is not my final judgement.
 
Hey guys, am I the only one seeing A LOT of pop-in in the Crimson Desert videos? It is so distracting for me that I cannot focus on anything else. Some of it is really close, some of it at middle distance.

I cannot buy the game in the foreseeable future, so I wanted to ask.
Yes, indeed. This was also clearly visible in the DF video. The PS5 Pro subreddit also mentions a lot of pop-in.
But DF said this is "normal". They do videos where they zoom in 200% to see artifacts and say the image quality is bad, but a game with massive pop-in is "normal"... It's clear they didn't want to piss off the developer so they could get games early in the future. Same as their High on Life 2 video, where it's clear the game is broken but they say "Nah..., it's ok".
 
Excuse my ignorance, but what the hell does chromatic aberration do? I did a quick Google search and got the gist of it, but I would like to understand it from fellow gamers!!
 
Holy moly, this game looks great. I wonder if there is any way to remove CA, though?

DekejhpkNqIkdQuo.png

It's subtle enough so I don't care personally.
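For the question above: chromatic aberration mimics a real lens failing to focus all wavelengths at the same point, so the red, green, and blue channels land slightly misaligned, producing colored fringes at high-contrast edges. Games recreate it in post by sampling each color channel at a slightly different screen offset. A minimal sketch in Python (the offset and image layout are purely illustrative, not from any engine):

```python
# Minimal sketch of chromatic aberration as a post-process pass:
# sample the red channel slightly left and the blue channel slightly
# right of each pixel, leaving green in place. The image layout and
# offset amount here are illustrative only.

def apply_chromatic_aberration(image, offset=1):
    """image: list of rows, each row a list of (r, g, b) tuples."""
    height = len(image)
    width = len(image[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            # Clamp sample positions at the image edges.
            rx = max(0, x - offset)          # red sampled to the left
            bx = min(width - 1, x + offset)  # blue sampled to the right
            r = image[y][rx][0]
            g = image[y][x][1]
            b = image[y][bx][2]
            row.append((r, g, b))
        out.append(row)
    return out

# A 1x4 test strip: a single white pixel on black "fringes" into color.
strip = [[(0, 0, 0), (255, 255, 255), (0, 0, 0), (0, 0, 0)]]
print(apply_chromatic_aberration(strip))
# → [[(0, 0, 255), (0, 255, 0), (255, 0, 0), (0, 0, 0)]]
```

Removing CA, when a game or mod allows it, is just skipping this pass, which is why disabling it is essentially free.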
 
Yeah, when I return home I'm definitely trying DLAA and no RR.

@boji what is the best DLSS version and profile for DLAA? Still the 4.0 profile K? 4.5 DLAA is probably super heavy, since 4.5 is made for Performance mode, right?
 
Yeah, when I return home I'm definitely trying DLAA and no RR.

@boji what is the best DLSS version and profile for DLAA? Still the 4.0 profile K? 4.5 DLAA is probably super heavy, since 4.5 is made for Performance mode, right?

Yeah, I think DLSS 4.0 is the best. It should also clean up RT noise better.
 
I am hearing there's tons of pop-in in the game, even with the Cinematic preset. Is it true?
You can see it even on a 5090 with max settings (ray reconstruction included):


Watch the whole 20 minutes and judge for yourself whether you can somewhat tolerate it or (as, unfortunately, for me) it's way too distracting. Hell, some people might not even see it (God bless their eyes), so it's best if you check for yourself; it doesn't work with someone else telling you one way or another.
The problem exists because the game is very light on the GPU's VRAM, and even max settings don't change that much. So I'm personally hoping for a patch, or if not that, a mod that can fix it, because in my layman's eyes/terms I would describe it as the game's asset/LOD transitions simply not using enough of a stronger GPU's VRAM pool.

The max I ever saw was 12.8 GB of VRAM used on a 5090 at truly max settings (native 4K, Cinematic, RR, DLAA), like here:


Obviously it's cranked to 11 if you set the game settings to Low, like here (non-remastered Oblivion/Gothic nostalgia warning :P)

Under those settings pop-ins have their own pop-ins :messenger_ok: :messenger_grinning_sweat: :messenger_grinning_sweat: :messenger_grinning_sweat:
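For what it's worth, the pop-in described above is usually a LOD transition problem: the engine picks a mesh detail level from camera distance, and if the thresholds are tight (often to conserve VRAM) the swaps happen close enough to notice. One standard mitigation is hysteresis: a buffer zone so an object hovering near a threshold doesn't flicker between levels. A toy sketch, with made-up distances, of how such a selector works:

```python
# Toy distance-based LOD selector with hysteresis.
# Without the margin, an object sitting near a threshold would "pop"
# back and forth between detail levels every frame.
# All numbers here are illustrative, not from any real engine.

LOD_DISTANCES = [50.0, 150.0, 400.0]  # beyond each, drop one detail level
HYSTERESIS = 10.0                     # extra margin required to switch back up

def select_lod(distance, current_lod):
    # Base LOD purely from distance: 0 = highest detail.
    target = 0
    for threshold in LOD_DISTANCES:
        if distance > threshold:
            target += 1
    # Promoting to a *higher* detail level (smaller index) only happens
    # once we're clearly inside the threshold, to avoid flicker.
    if target < current_lod:
        threshold = LOD_DISTANCES[current_lod - 1]
        if distance > threshold - HYSTERESIS:
            return current_lod  # stay put inside the hysteresis band
    return target

print(select_lod(160.0, 1))  # past 150: demotes to LOD 2 → 2
print(select_lod(145.0, 2))  # inside the 140-150 band: stays at LOD 2 → 2
print(select_lod(120.0, 2))  # clearly inside 150: promotes to LOD 1 → 1
```

A patch or mod that pushed distances like these outward would trade VRAM and GPU time for fewer visible transitions, which matches the hope above that a heavier-VRAM configuration could fix it.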
 
Finally up to 500 devs on Witcher 4. They were around 200 back in 2023, iirc. Seven years on one game and one DLC. I think devs need to stop working on DLCs: just fix the bugs and move on. Phantom Liberty took 3 years and hundreds of devs that could've been working on Witcher 4. Who knows, maybe we would've had the game by now.

HDyoU_BWwAA3kyZ
Phantom Liberty was amazing, though, and the continued fixing of Cyberpunk really helped recover their credibility as a studio. I wish there were more DLCs for Cyberpunk, if anything; they put a huge amount of work into creating the world and characters in that game, and it's great to see it fleshed out. There are tonnes of other games to play in the meantime, and I've got nothing against them taking their time and doing it properly. They're moving engines as well, so they really shouldn't rush it.
 
You can see it even on a 5090 with max settings (ray reconstruction included):


Watch the whole 20 minutes and judge for yourself whether you can somewhat tolerate it or (as, unfortunately, for me) it's way too distracting. Hell, some people might not even see it (God bless their eyes), so it's best if you check for yourself; it doesn't work with someone else telling you one way or another.
The problem exists because the game is very light on the GPU's VRAM, and even max settings don't change that much. So I'm personally hoping for a patch, or if not that, a mod that can fix it, because in my layman's eyes/terms I would describe it as the game's asset/LOD transitions simply not using enough of a stronger GPU's VRAM pool.

The max I ever saw was 12.8 GB of VRAM used on a 5090 at truly max settings (native 4K, Cinematic, RR, DLAA), like here:


Obviously it's cranked to 11 if you set the game settings to Low, like here (non-remastered Oblivion/Gothic nostalgia warning :P)

Under those settings pop-ins have their own pop-ins :messenger_ok: :messenger_grinning_sweat: :messenger_grinning_sweat: :messenger_grinning_sweat:

I mean, I hate UE5 with a passion, but Nanite would giga-boost this game.
 
But DF said this is "normal". They do videos where they zoom in 200% to see artifacts and say the image quality is bad, but a game with massive pop-in is "normal"...
Alex B: Unless papi Nvidia steps in with some sort of magical Deep Learning (well, pretend-they-are-learning-or-just-faking) Level-of-Detail Super Slider, then everything should be fine and diddy :p
dKc7NAOGNYxisM2X.gif
 
This is funny, but the whole point is completely off the mark.
There will be no issues with "artistic vision" because it's the artists who will be integrating the tech into their games.
The whole rage festival is based on a completely wrong premise.
This is the funniest thing about DLSS 5 right now.

But the artists are raging! Did you see the Kotaku article? /s

This anti-AI church right now is such a silly movement, and I even saw people I respected on NeoGAF for technical discussion go into that camp. Very disappointing. The irony is that AI has probably never been used more than in the past week, to make memes. The pipeline is 100% going neural rendering; AMD, Intel, and Microsoft are fully on board with the consortiums, and I'm sure Sony is going all in too for next gen.
 
The most impressive thing with DLSS 5 is what it does for NPCs.

I mean fucking WOW

We're looking at a 2 generation difference here

ZjIIePEoMmRrlXVB.jpeg


VS

Screenshot-2026-03-18-at-10-36-03-AM.png


You're all going to love this tech when you see it running in 4K in front of you.
HOLY FUCK! The differences are massive here! Groundbreaking, and she preserves her look much better than Grace in the same shot, lol; that's weird. But that's OK, the tech is still in its early stages, and what they've shown is apparently a "snapshot" of what's to come. So excited!
 
But the artists are raging! Did you see the Kotaku article? /s

This anti-AI church right now is such a silly movement, and I even saw people I respected on NeoGAF for technical discussion go into that camp. Very disappointing. The irony is that AI has probably never been used more than in the past week, to make memes. The pipeline is 100% going neural rendering; AMD, Intel, and Microsoft are fully on board with the consortiums, and I'm sure Sony is going all in too for next gen.
Yeah, it's inevitable and I am so enthused by that. Finally we get to see major graphical leaps!

Mark Cerny literally said in his PS5 Pro seminar thing that AI neural rendering is going to provide quantum leaps in visual fidelity and that the future of PlayStation will rely heavily on it. He also teased it in his PlayStation x AMD video with Jack Huynh, AMD's senior vice president and general manager of the Computing and Graphics Group.

In Jason Ronald's Project Helix GDC presentation that took place a week ago, he explicitly stated in the "Hardware innovation" section that Project Helix will leverage the "Next-generation of Neural Rendering techniques", he continues "whether that's Neural Materials, whether that's Generated images..."

Full GDC video:
Timestamps:
- When he talks about Hardware: 8:15 (Timestamped).
- When he talks about Neural Rendering: 10:01

 
Yeah, it's inevitable and I am so enthused by that. Finally we get to see major graphical leaps!

Mark Cerny literally said in his PS5 Pro seminar thing that AI neural rendering is going to provide quantum leaps in visual fidelity and that the future of PlayStation will rely heavily on it. He also teased it in his PlayStation x AMD video with Jack Huynh, AMD's senior vice president and general manager of the Computing and Graphics Group.

In Jason Ronald's Project Helix GDC presentation that took place a week ago, he explicitly stated in the "Hardware innovation" section that Project Helix will leverage the "Next-generation of Neural Rendering techniques", he continues "whether that's Neural Materials, whether that's Generated images..."

Full GDC video:



Timestamps:
- When he talks about Hardware: 8:15 (Timestamped).
- When he talks about Neural Rendering: 10:01


Yeah, any of them would be left in the dust in the coming years if they don't go neural rendering. They would not even look like the same generation, as they'd struggle with traditional pipeline limits.

The AI cancel movement is cringe as fuck. "Gamers" are so easily swayed by grifter movements. Of course the whole fucking techtuber sphere smelled blood/money when Nvidia unveiled this.

Of course you cannot put that on everything, and that's fine, but the highest-fidelity graphics we'll see in the coming years will be neural. If not DLSS 5, it'll be DLSS 5.1, 5.5, 6, or whatever; at some point it becomes inevitable.

"wow fuck Nvidia, I'm going AMD / consoles!" - something I read here on neogaf during the whole fiasco



Hand On Shoulder GIF
 
The AI cancel movement is cringe as fuck. "Gamers" are so easily swayed by grifter movements. Of course the whole fucking techtuber sphere smelled blood/money when Nvidia unveiled this.
So you're suggesting that most of the public hates this look because they've been grifted by youtubers?

People don't agree with you so of course they have been conned by a psyop?

How simple minded of you.
 
So you're suggesting that most of the public hates this look because they've been grifted by youtubers?

People don't agree with you so of course they have been conned by a psyop?

How simple minded of you.

There's never any bandwagon on the internet, noooo /s

Slapping "AI slop" on anything AI is sloppy thinking. If you don't see how ridiculous this pushback or "cancel" movement is, I don't know what to tell you.
 


What the fuck is this, the quality mode on Base PS5 looks noticeably sharper and cleaner than the Pro Quality mode.

PSSR 2 being useless - They need to fix this immediately.

edit: I guess it may be a little sharper, but on the Pro, ray tracing is set to Ultra vs. High on the base console. It looks better overall; different video.

 
Yes, indeed. This was also clearly visible in the DF video. The PS5 Pro subreddit also mentions a lot of pop-in.
But DF said this is "normal". They do videos where they zoom in 200% to see artifacts and say the image quality is bad, but a game with massive pop-in is "normal"... It's clear they didn't want to piss off the developer so they could get games early in the future. Same as their High on Life 2 video, where it's clear the game is broken but they say "Nah..., it's ok".
Pop-in is always disregarded when it's imo the worst visual flaw.
 


As I mentioned in the other thread, the guy who responded to Daniel, Jacob Freeman, is basically an evangelist, someone involved in marketing.
All the answers he gave are exactly the same as those in the blog post published by Nvidia.


Before jumping to that conclusion, I would wait for a more detailed document or a word from developers like Bryan Catanzaro. He said that DLSS 5 was the result of extensive research and cited a paper they did in 2018 about the initial research.

It wouldn't make sense for it to be just a filter; if it were, it would be added to the existing filters rather than being a feature of DLSS.

nvidia-app-unable-to-disable-game-filter-v0-0ftjkauqns0e1.png
 
Crimson Desert on PS5 Pro looks pretty awesome. Even without the ray reconstruction from PC, the RTGI, the RT reflections on water and all reflective materials, armor pieces reflecting onto themselves, and the RT shadows for all spot lights like torches, lamps, and city lights are impressive. Only the shadows from the sun's direct lighting are raster. It's pretty ridiculous. There are a lot of little details everywhere: displacement mapping even on the animals to create short fur, which isn't noticeable until you get really up close. Performance mode with PSSR 2 image quality is pretty great too. The only big drawback is the pop-in.

Crimson-Desert-20260319210223.png
My biggest pet peeve with all video games' graphics is the bending and twisting of supposedly rigid armour and costume elements when a character moves!
 
My biggest pet peeve with all video games' graphics is the bending and twisting of supposedly rigid armour and costume elements when a character moves!
That, and clipping....feels like progress on these things has slowed to a crawl. Maybe in another decade or so we'll see beards that don't clip into the rest of the model when the head moves.
 
As I mentioned in the other thread, the guy who responded to Daniel, Jacob Freeman, is basically an evangelist, someone involved in marketing.
All the answers he gave are exactly the same as those in the blog post published by Nvidia.


Before jumping to that conclusion, I would wait for a more detailed document or a word from developers like Bryan Catanzaro. He said that DLSS 5 was the result of extensive research and cited a paper they did in 2018 about the initial research.

It wouldn't make sense for it to be just a filter; if it were, it would be added to the existing filters rather than being a feature of DLSS.

nvidia-app-unable-to-disable-game-filter-v0-0ftjkauqns0e1.png
butt-spray.gif
 
This is funny, but the whole point is completely off the mark.
There will be no issues with "artistic vision" because it's the artists who will be integrating the tech into their games.
The whole rage festival is based on a completely wrong premise.
This is the funniest thing about DLSS 5 right now.

Someone posted this before, but it deserves to be reiterated:



Arguing about the current limits they had with a dual-GPU setup, where they perhaps cut corners (there's no fucking way you edit a frame mid-pipeline between two GPUs), is a moot point. When it gets to a single GPU, the tech will have access to any information the pipeline has, which has already been used for a multitude of AI solutions such as ray reconstruction. If not at DLSS 5's launch, then maybe 5.1 or 5.5; it doesn't matter. This is the future pipeline.
 
Crimson Desert on PS5 Pro looks pretty awesome. Even without the ray reconstruction from PC, the RTGI, the RT reflections on water and all reflective materials, armor pieces reflecting onto themselves, and the RT shadows for all spot lights like torches, lamps, and city lights are impressive. Only the shadows from the sun's direct lighting are raster. It's pretty ridiculous. There are a lot of little details everywhere: displacement mapping even on the animals to create short fur, which isn't noticeable until you get really up close. Performance mode with PSSR 2 image quality is pretty great too. The only big drawback is the pop-in.

Crimson-Desert-20260319210223.png


Crimson-Desert-20260319214541.png


Crimson-Desert-20260319220003.png


Crimson-Desert-20260319210716.png


Crimson-Desert-20260319210457.png


Crimson-Desert-20260319204007.png


Crimson-Desert-20260319203627.png


Crimson-Desert-20260319203618.png
The RT reflections on the armor add so much. I've been begging for them since Demon's Souls. The first guy you arm-wrestle has the coolest reflections on his helmet, and he's inside a poorly lit bar.
 
Someone posted this before, but it deserves to be reiterated:



Arguing about the current limits they had with a dual-GPU setup, where they perhaps cut corners (there's no fucking way you edit a frame mid-pipeline between two GPUs), is a moot point. When it gets to a single GPU, the tech will have access to any information the pipeline has, which has already been used for a multitude of AI solutions such as ray reconstruction. If not at DLSS 5's launch, then maybe 5.1 or 5.5; it doesn't matter. This is the future pipeline.

No one is disagreeing that lighting makes a huge difference; that's why this forum is basically us wanking off to how good path tracing looks and how disappointed we are with the lighting tech in most games this gen. Applying a dumb 2D AI filter isn't improving lighting if it can't access the actual game pipeline; it's literally just guessing, and it looks stupid and out of place. This tech is clearly too early. If it could plug into the game engine and actually access geometry, material, and lighting data correctly, I think we would be much more impressed by it. Where has Nvidia said it would be able to do this when running on a single card? That's just pure hopium. Yes, perhaps in future versions, but you're telling me they decided to demo it this poorly, and everything will be fixed later this year with half the GPU power?
 
Someone posted this before, but it deserves to be reiterated:



Arguing about the current limits they had with a dual-GPU setup, where they perhaps cut corners (there's no fucking way you edit a frame mid-pipeline between two GPUs), is a moot point. When it gets to a single GPU, the tech will have access to any information the pipeline has, which has already been used for a multitude of AI solutions such as ray reconstruction. If not at DLSS 5's launch, then maybe 5.1 or 5.5; it doesn't matter. This is the future pipeline.

That guy is wrong LOL. He's on the left side of the curve.
 
Georgian Avasilcutei, who worked on Remember Me and Life is Strange at DONTNOD, Dishonored 2 and Dishonored: Death of the Outsider at Arkane, and Hogwarts Legacy at Avalanche.

Or funking giblet

Tell Me More Jeff Goldblum GIF by National Geographic Channel
Appeal to authority now? He says it works with existing lights in the scene, but it's image-based. Square that circle. I work with AI for a living, and I have written game engines and rasterisers.
 
RR on vs. off

M3LceZ0P4olN9Liu.jpeg
68GRDaKXZRZIqTKV.jpeg


Nice and sharp DLSS 4.5 (it can cause a few more issues with RT, but foliage looks amazing with it and everything is sharp in general; this suits the game).

KpSDoXH7jU5H5P4S.jpeg
VVVPAJzwKzWF1d5A.jpeg



High Quality Rocks.
 
Appeal to authority now? He says it works with existing lights in the scene, but it's image-based. Square that circle. I work with AI for a living, and I have written game engines and rasterisers.

And what would be the reason that, by the time it's implemented, it doesn't have access to the existing lights in the scene, as ray reconstruction does?

You're jumping the gun over a tech demo that split the pipeline in half because of the dual-GPU setup, a problem the final product months from now will not have to face.

Why would Nvidia not use the same information as ray reconstruction? RTX Neural Face was fully in-pipeline.

And yeah, you can say you invented a rocket, but really, unless you go public, it's pretty much going on the "who the fuck are you" pile.
 
And what would be the reason that, by the time it's implemented, it doesn't have access to the existing lights in the scene, as ray reconstruction does?

You're jumping the gun over a tech demo that split the pipeline in half because of the dual-GPU setup, a problem the final product months from now will not have to face.

Why would Nvidia not use the same information as ray reconstruction? RTX Neural Face was fully in-pipeline.

And yeah, you can say you invented a rocket, but really, unless you go public, it's pretty much going on the "who the fuck are you" pile.
I already went through this in the other thread. You absolutely can improve the pipeline by adding more maps to the input, but that isn't what it is doing yet, and unless something changes, it will be image/map based (i.e., data generated from what's visible in the framebuffer / encoded in the G-buffer). I agree with that point. But it is STILL generative AI, the same way any image-to-image LoRA is, except it can use pipeline data instead of "prompts" as such.
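The image/map-based distinction here can be made concrete: a screen-space model's input is just a stack of per-pixel buffers, so anything that never reached a buffer (an off-screen light, geometry behind the camera) is invisible to it, while a "pipeline-aware" variant could also consume non-pixel scene data the way ray reconstruction consumes denoising inputs. A toy sketch of that difference in input assembly (all buffer and parameter names here are hypothetical, not from any real API):

```python
# Toy illustration of input assembly for a screen-space neural filter.
# "Image/map based" means the network only sees per-pixel buffers the
# renderer already produced; a pipeline-aware variant could also take
# scene data that never reached a buffer. All names are hypothetical.

def assemble_inputs(gbuffer, extra_scene_data=None):
    # Per-pixel channels available to any screen-space technique.
    channels = [
        gbuffer["color"],    # final shaded color
        gbuffer["depth"],    # scene depth
        gbuffer["normals"],  # surface normals
    ]
    # A pipeline-aware variant appends non-pixel inputs, e.g. light
    # parameters, much as ray reconstruction is fed extra signals.
    if extra_scene_data is not None:
        channels.append(extra_scene_data["light_params"])
    return channels

gbuffer = {"color": "RGB", "depth": "D", "normals": "N"}
print(len(assemble_inputs(gbuffer)))                         # 3 channels
print(len(assemble_inputs(gbuffer, {"light_params": "L"})))  # 4 channels
```

The argument in this thread is essentially over whether the shipping version will stop at the first call or get the second one.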
 
I already went through this in the other thread. You absolutely can improve the pipeline by adding more maps to the input, but that isn't what it is doing yet, and unless something changes, it will be image/map based (i.e., data generated from what's visible in the framebuffer / encoded in the G-buffer). I agree with that point.

I'll wait to see their single-GPU solution and the implementation sometime this fall/winter, because I have a hard time imagining they would regress from the RTX Neural Face implementation. Unless it's such a hard task to make it work on Blackwell that they need shortcuts and will reserve the more robust version for the 6000 series, with likely a lot of optimization in the SMs to speed this up. They should have all the parameters available when going back to a single GPU, you agree?

To be seen. If not DLSS 5.0, an iteration afterwards will eventually improve on it. That's why I don't understand the drama around it. One year was the difference between DLSS 1 and DLSS 2. One year. That's fucking nothing for the jump in quality it got. Neural faces will advance at such a rapid pace, AMD included, that there's no future where this is put back into Pandora's box, like the internet is screaming for right now.


But it is STILL generative AI, the same way any image-to-image LoRA is, except it can use pipeline data instead of "prompts" as such.

The words "generative AI" do not scare me, because Ray Reconstruction is the same. If you feed it the right amount of information and tune it, it will look good. There's obviously gonna be pattern recognition; that's the entire reason the tech exists. So what do you mean, it is STILL generative AI? What would you want it to be?

And that's also what Avasilcutei says. He doesn't say it's not generative AI; he says people are afraid of the AI boogeyman.
 
I'll wait to see their single-GPU solution and the implementation sometime this fall/winter, because I have a hard time imagining they would regress from the RTX Neural Face implementation. Unless it's such a hard task to make it work on Blackwell that they need shortcuts and will reserve the more robust version for the 6000 series, with likely a lot of optimization in the SMs to speed this up. They should have all the parameters available when going back to a single GPU, you agree?

To be seen. If not DLSS 5.0, an iteration afterwards will eventually improve on it. That's why I don't understand the drama around it. One year was the difference between DLSS 1 and DLSS 2. One year. That's fucking nothing for the jump in quality it got. Neural faces will advance at such a rapid pace, AMD included, that there's no future where this is put back into Pandora's box, like the internet is screaming for right now.




The words "generative AI" do not scare me, because Ray Reconstruction is the same. If you feed it the right amount of information and tune it, it will look good. There's obviously gonna be pattern recognition; that's the entire reason the tech exists. So what do you mean, it is STILL generative AI? What would you want it to be?

And that's also what Avasilcutei says. He doesn't say it's not generative AI; he says people are afraid of the AI boogeyman.
It doesn't scare anyone, but let's call a spade a spade: it's redrawing the frame, and there are definite technical reasons why, while it looks "awesome", it won't be "correct" and "consistent" across an entire game. I use generative AI every day, and it's amazing! I'm even using it right now, as we speak, to help improve an old game. People have a right to be concerned that it will undermine the artists' intention; even if you think it looks like shit, the AI will do things that are incongruent. I might play New Vegas with all the mods and DLSS 5 running at full blast, but this is a fundamental shift in drawing frames, one that reduces the need for artist control in the game itself / the rendering tech. There is a reason so many, including yourself, have to wishful-think your way into saying "Oh, it will be geometry-aware, it will understand the lights, it will have access to the game". The answer is: no, it won't, and it doesn't need to. So what does that leave us? Do we even need frames...
 