
Graphical Fidelity I Expect This Gen

Feel free to talk about the issues. Technical details, not emotional opinions.
Right, let's pretend the other threads don't exist. I'm not gonna go on a gif collect-a-thon. Click any of the multitude of YT vids that have already done that, or simply scroll up for other examples...
They are literally pixels! The direction of the light is literally pixels directed to a region.
Nope. That's just how it's presented. Reversing the calculations that led to that to get an accurate position of the source is quite literally impossible. The best you get is a very broad assumption, and the moment you have several light sources in a picture that can all interact with the same pixels, you're down to blind guessing, which is exactly what we're seeing in the blown-out example scenes...
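To make that ambiguity concrete, here's a toy Lambertian shading sketch (a deliberate simplification for illustration, not how any real engine or DLSS works): two completely different light setups can land on exactly the same pixel value, so the pixel alone can't tell you where the light is.

```python
import math

def lambert(albedo, intensity, cos_angle):
    """Shaded pixel value under a plain Lambertian model:
    surface albedo * light intensity * cosine of the angle to the light."""
    return albedo * intensity * max(0.0, cos_angle)

# A light shining head-on at half intensity...
a = lambert(albedo=0.8, intensity=0.5, cos_angle=1.0)

# ...and a brighter light 45 degrees off to the side...
b = lambert(albedo=0.8, intensity=0.5 * math.sqrt(2),
            cos_angle=math.cos(math.radians(45)))

# ...produce the exact same shaded value, so inverting the pixel
# back to a unique light position is ill-posed.
print(round(a, 6), round(b, 6))  # 0.4 0.4
```

With several lights summing into one pixel, the inversion only gets worse, which is the point about multiple interacting light sources.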
You seem to have no idea how a machine learning model works.
you have no idea how basic lighting works...
Light has a transformative power in a scene. Complain to Capcom if they made the model like that. And besides, I don't see what the problem is, she looks even more beautiful now.
Capcom did nothing of the sort. The DLSS 5 version has nothing to do with the original model, and whether you like it better is completely irrelevant. Fact is, the filter doesn't enhance the scene, it reimagines it.
 
Vex only looks at things superficially and goes with the flow of the internet.
Just so that we're clear here, you're disregarding him and that specific video because he researched the information and, not only came to the same conclusion as Digital Foundry in their recent correction video, but also showed that exact same research that DF just did in their correction video (that they should have shown to begin with)?

hmmm-thinking.gif


Just wanted to make sure.
 
lol at the thumbnails, ridiculous. As if a murder was committed or something.


There is a great link there to the reddit post, everyone should check it out

The faces have improved a lot, but the rest of it is not that "realistic" anymore.
 
The more I watch of this video, the more I keep asking 'why didn't you mention these potential issues in your original coverage video?'

Like they're suddenly listing out its problems in this new video. Now they're suddenly bringing up the other half of this tech.

This is what I mean, BlownUpRich: it just comes across optically bad and almost disingenuous, and DF definitely doesn't help Nvidia's case here.

Transparency about this tech should have been there from the start, from all sides.
There is some disingenuous marketing from Nvidia around what DLSS 5 does to graphics: first it was only lighting, now it's more than that. There are some things that Nvidia MUST IMMEDIATELY address. I knew that Neural Faces would be extremely controversial and problematic the day they revealed and showed them in that Zorah tech demo at CES 2025 with the RTX 50-series reveal.

Since that day I've wished Nvidia would just ditch or redo the faces thing. Grace's and others' faces look better than in that demo, but they are still far away from looking the same; they look over-contrasted, faulty and, honestly, very problematic from an ethical standpoint.

I hope Nvidia fixes this NOW! Just remain focused on materials, lighting and environments for now until you find a way around faces.
 
Do they not know that Neural Rendering tools can also be malleable & customized to each of the developers' needs?
We don't know this. From the few snippets given to us, the devs do not have control over the AI model, just some sliders that adjust the photorealism aspect of each material or character model.

It's possible that there will be more fine-grained control over this in the future, but the results so far from six different developers show the same AI slop in every single game. People are going by the results here, not the potential of this tech or Jensen's word that it is completely customizable, which it is not.
Gamers want graphics to improve dramatically? That's the only way,
No, it's not. How many times do we have to post gifs of Hellblade 2, Marvel Avengers, Matrix, Death Stranding 2 (in cutscenes), Kojima's next game OT, and other photorealistic games before the point sinks in? It is already possible on 10 TFLOPS GPUs. It will only get better when we get 30-40 TFLOPS GPUs in a year and a half.

Hell, we haven't even seen next-gen-only games from ND, GG, Quantic Dream and SSM at this point. Go look at what ND was doing with 1.8 TFLOPS at the end of last gen and tell me if having AI generate visuals based on porn stars is the only way.
if they don't want that & want companies to become "lazy"
This is literally the definition of lazy. They didn't do any work themselves; they just put their game through a photorealism filter that literally made up the graphics for them. The Oblivion upgrade, for example, is by far the best-looking one. The game was built on UE5, we have near-photorealistic games shipping on UE5 this gen, and it should've looked like that at launch.
 
The more I watch of this video, the more I keep asking 'why didn't you mention these potential issues in your original coverage video?'

Like they're suddenly listing out its problems in this new video. Now they're suddenly bringing up the other half of this tech.

This is what I mean, BlownUpRich: it just comes across optically bad and almost disingenuous, and DF definitely doesn't help Nvidia's case here.

Transparency about this tech should have been there from the start, from all sides.
I mean I am not going to chastise them for being hyped about an insane new tech like this. I too was blown away at first. You can see me literally have an orgasm in my first few posts.

The problem with their first video was that they were too blinded by the hype to think logically. It happens to the best of us; DF are only human. Their mistake was going back to the hotel and recording a video within minutes of seeing the demos to capture the hype they were feeling. And that's unprofessional. They are not hype men. They are not cheerleaders paid for by Nvidia. They are tech JOURNALISTS paid for by their viewers. They should've sat on it and done a proper post mortem. I know YouTubers who literally write a script for their videos, and these guys don't even pretend to be journalists. Richard has been a journalist and an editor for three decades; he should've known the limitations of these podcast-style reaction videos.

Now that they've had some time to think, you are seeing the more measured approach that should've been there in their first article/coverage of the game. But making reaction videos is cheap and quick, and gets way more hits on YouTube than some article on their website.
 
Right, let's pretend the other threads don't exist. I'm not gonna go on a gif collect-a-thon. Click any of the multitude of YT vids that have already done that, or simply scroll up for other examples...

Nope. That's just how it's presented. Reversing the calculations that led to that to get an accurate position of the source is quite literally impossible. The best you get is a very broad assumption, and the moment you have several light sources in a picture that can all interact with the same pixels, you're down to blind guessing, which is exactly what we're seeing in the blown-out example scenes...

you have no idea how basic lighting works...

Capcom did nothing of the sort. The DLSS 5 version has nothing to do with the original model, and whether you like it better is completely irrelevant. Fact is, the filter doesn't enhance the scene, it reimagines it.

You base your opinion on other people's opinions and YouTube videos.
Don't you think it would be better to open a book on linear algebra and computer graphics?

Just so that we're clear here, you're disregarding him and that specific video because he researched the information and, not only came to the same conclusion as Digital Foundry in their recent correction video, but also showed that exact same research that DF just did in their correction video (that they should have shown to begin with)?

hmmm-thinking.gif


Just wanted to make sure.

I'm saying this because I've been following the channel for a while, but I recently stopped watching precisely for that reason. He basically does what our friend above does: watches videos from other YouTubers and repeats what they say, but with different words so it doesn't seem like plagiarism.
This is not uncommon, many other smaller YouTubers do the same.
 
Translation: they saw the collective internet shitting on them and changed their tune immediately.

Like we don't know how volatile these dudes are after the whole Hogwarts Legacy debacle...
 

Wow. Amazing.

The combat reminds me of Mass Effect 2. FINALLY!

I love the materials in the platforming sequence. Really gorgeous stuff. I am starting to get so fucking hyped for this gen. After a rough first 4-5 years (with some exceptions), it seems like we are about to get a lot of amazing games (both gameplay and visuals) back to back to back over the next 12 months or so.
 
I'm saying this because I've been following the channel for a while, but I recently stopped watching precisely for that reason. He basically does what our friend above does: watches videos from other YouTubers and repeats what they say, but with different words so it doesn't seem like plagiarism.
This is not uncommon, many other smaller YouTubers do the same.
That's irrelevant to the fact that their information literally lines up.

You are discussing character, I am discussing evidence.

Also I just wanted to state for the record here, that his video came out before DF's video, so there was no way he repeated that research information from them.
 
We don't know this. From the few snippets given to us, the devs do not have control over the AI model, just some sliders that adjust the photorealism aspect of each material or character model.

It's possible that there will be more fine-grained control over this in the future, but the results so far from six different developers show the same AI slop in every single game. People are going by the results here, not the potential of this tech or Jensen's word that it is completely customizable, which it is not.

If the developers created a model from scratch, they would get the same result. It's like ChatGPT, Gemini and Claude: they are all essentially the same; what changes is the fine-tuning.

And this is where the developer will come in and vary the results. They can make numerous modifications to the color scheme, as well as make the model ignore objects. I explained this to the friend above, but he refused to understand. Light in games is pixels, and the way they appear is simply brightness and color.

Anyone who has ever edited a photo will notice that with just a few adjustments, you can completely change a picture.

hq720.jpg

Random photoshop pic
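The photo-editing point is easy to demonstrate with the classic per-channel brightness/contrast formula (a toy sketch; real editors add gamma, curves and per-channel control, but the principle is the same):

```python
def adjust(pixel, brightness=0.0, contrast=1.0):
    """Classic brightness/contrast tweak on a 0-255 channel value:
    scale around mid-grey (128), shift, then clamp."""
    v = (pixel - 128) * contrast + 128 + brightness
    return max(0, min(255, round(v)))

# The same mid-tone pixel, three different "looks":
p = 100
print(adjust(p))                 # 100 (untouched)
print(adjust(p, brightness=40))  # 140 (brighter)
print(adjust(p, contrast=1.8))   # 78  (deeper shadows)
```

A couple of scalar knobs are enough to change how an image reads, which is all the "few adjustments" claim amounts to.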
 
That's irrelevant to the fact that their information literally lines up.

You are discussing character, I am discussing evidence.

Also I just wanted to state for the record here, that his video came out before DF's video, so there was no way he repeated that research information from them.

His information is superficial, I've already explained that. There's no point in doing any research if he doesn't have a grasp of the subject.
 
This trailer shows all the next-gen features in Saros: 3D Audio, Adaptive Triggers, Haptic Feedback and near-instant loading.

Only problem? Every single one of these features was in the first game. They didn't add anything new or next-gen five years later, despite having access to a brand-new mid-gen console. So the marketing team ran the same trailer again, lol.



If this isn't emblematic of Sony studios' effort this gen, I don't know what is. We used to think that once these indie studios got absorbed by big publishers like Sony, they would get an influx of cash and graduate to another tier. Instead they made the same game again with a different coat of paint, while still taking five years.

But we can't call them lazy. No, that's too harsh. Give me a fucking break.

They moved from UE4 to UE5 at least.
 
After this whole debate about DLSS 5, I came to the conclusion that most of the people talking about it are completely unaware of what they don't know... they're at the peak of ignorance and don't even grasp how little they understand.

They just heard "generative AI" and, like Pavlov's dog, they start drooling, thinking it's the same shit as unethical slop image generators... for the love of Christ, go and educate yourself before raging on the internet for no reason.

DLSS 5 is not a prompt-based generator... it's not creating stuff based on someone else's images and hallucinating results. It uses the information from the raster to build up a final rendered frame with the same information but better lighting and shading...

I'll even give you an example of how much of an impact better shading and lighting have. This is a character I worked on not long ago. On the left you have a raster render with some bad shaders. On the right you have a render with raytracing on and a much better shader for both hair and skin. They don't even look like the same person... do they? This is what DLSS 5 is doing: getting a result like the one on the right (tbh a lot better) at a smaller cost than actually rendering it.
Still the same geo, same textures, same light sources.

Some of you will go and say the one on the left is better and it's the artist's vision. It's not... it's just the artist's limitation due to shading and lighting constraints. Every single artist out there would love to get the right-hand result in real time.



HDq-dbKXYAAzGx6


Georgian Avasilcutei worked on Remember Me and Life is Strange at DONTNOD, Dishonored 2 and Dishonored: Death of the Outsider at Arkane, and Hogwarts Legacy at Avalanche.
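The left/right comparison above boils down to: same inputs, different shading function. A deliberately minimal sketch of that idea (toy numbers and formulas, not the artist's actual shaders or DLSS's pipeline):

```python
# One "G-buffer" sample: the raster data the renderer already has.
sample = {"albedo": 0.6, "cos_light": 0.8, "cos_half": 0.95}

def cheap_shade(s):
    """Flat diffuse only: the 'bad shader' on the left."""
    return s["albedo"] * s["cos_light"]

def better_shade(s):
    """Diffuse plus a narrow specular lobe: the 'better shader' on the right."""
    diffuse = s["albedo"] * s["cos_light"]
    specular = 0.3 * s["cos_half"] ** 32
    return diffuse + specular

# Same geometry, textures and lights; only the shading changed.
print(round(cheap_shade(sample), 3), round(better_shade(sample), 3))  # 0.48 0.538
```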
 
You base your opinion on other people's opinions and YouTube videos.
Don't you think it would be better to open a book on linear algebra and computer graphics?
That's rich coming from someone who obviously doesn't even know the absolute basics of what he's talking about. And since when is example material not fit for forming an opinion? We're all looking at the same press material, after all...
 
Wow. Amazing.

The combat reminds me of Mass Effect 2. FINALLY!

I love the materials in the platforming sequence. Really gorgeous stuff. I am starting to get so fucking hyped for this gen. After a rough first 4-5 years (with some exceptions), it seems like we are about to get a lot of amazing games (both gameplay and visuals) back to back to back over the next 12 months or so.
Yes finally
 


HDq-dbKXYAAzGx6


Georgian Avasilcutei worked on Remember Me and Life is Strange at DONTNOD, Dishonored 2 and Dishonored: Death of the Outsider at Arkane, and Hogwarts Legacy at Avalanche.

"it's not creating stuff based on someone else's images and hallucinates results"

Porn-face Grace begs to differ, as do all the scenes that suddenly have photo-studio lighting...

That guy doesn't know an iota more than everyone else who watched that presentation. He is still on the "it's just the lighting" track that's been disproven...
 
Is that a Jensen Ackles model?

It certainly looks like it. If you look on ArtStation, he's made quite a few 3D models of famous artists.


"it's not creating stuff based on someone else's images and hallucinates results"

Porn-face Grace begs to differ, as do all the scenes that suddenly have photo-studio lighting...

That guy doesn't know an iota more than everyone else who watched that presentation. He is still on the "it's just the lighting" track that's been disproven...

Charlie Murphy Laugh GIF by hero0fwar
 
Boy, I don't know where on the stupidity scale I'd rate "someone did well in a completely different field, so he must know about that other one too".
With your track record so far, I'll give it a 10/10.
Some of the "all in" takes basically amount to - everyone that doesn't like this thing I like simply doesn't have the brain capacity to truly understand it. As if they're some kind of technical director at a bleeding edge company and the rest of the world has never owned a computer.
 
Some of the "all in" takes basically amount to - everyone that doesn't like this thing I like simply doesn't have the brain capacity to truly understand it. As if they're some kind of technical director at a bleeding edge company and the rest of the world has never owned a computer.

There are different types of arguments:

Few people discuss technical aspects. Those who do clearly don't know what they're talking about. It's not a matter of arrogance, but are you going to talk about relativity if the person says the Earth is flat?

What I observe are purely emotional arguments: "Porn face Grace begs to differ, as do all the scenes that suddenly have Photostudio lighting..."

The mere fact that he mentions "porn face" makes me completely dismiss his argument.
 
The mere fact that he mentions "porn face" makes me completely dismiss his argument.
Says the guy who thinks pixels come with a "lit by" sticker, quotes people who still pretend there are no model changes, and likens Photoshop brightness adjustments to directional lighting in motion...
Mr Rogers Clown GIF


Just wow....
 
No, it's not. How many times do we have to post gifs of Hellblade 2, Marvel Avengers, Matrix, Death Stranding 2 (in cutscenes), Kojima's next game OT, and other photorealistic games before the point sinks in? It is already possible on 10 TFLOPS GPUs. It will only get better when we get 30-40 TFLOPS GPUs in a year and a half.
We had Hellblade (a 5-hour walking sim done in 5 years), Matrix (a trailer), Marvel Avengers (a trailer), DS2 (as you said, cutscenes), OT (a trailer).
How do you scale that efficiently in a game like Crimson Desert? GoY? Starfield... any open-world or even linear 30-hour game, unless the dev time is three times GTA 5's?

On a very small scale, or in snippets, it's feasible; in an end-to-end game of decent length, you need 10 years.
 


HDq-dbKXYAAzGx6


Georgian Avasilcutei worked on Remember Me and Life is Strange at DONTNOD, Dishonored 2 and Dishonored: Death of the Outsider at Arkane, and Hogwarts Legacy at Avalanche.

Appreciating your takes on this so far. Just remember you're never going to reason people out of positions they didn't reason themselves into. The ongoing collective gamer freakout is unfortunate and inevitable for numerous reasons, but I keep finding myself coming back to the way gamers love to harp on "artistic vision." Quite indicative of the bipolar relationship a lot of folks have with developers: one moment berating and disparaging them, the next venerating them and never daring to question "the vision." As if what makes it to the end product is what the developers envisioned in their heads. As if there isn't massive compromise involved at every step of shipping a video game thanks to time and technology constraints. And as if any new tool in the dev toolbox could ever be a bad thing in the grand scheme.
 
Somebody keeps leaving laughing emojis on comments. They must be a very happy person!
Imagine someone takes the time out of their day to engage with you while you spout nonsense. They give you the courtesy of responding to you like a human being and you turn around and laugh at everything they say.

It's so dumb and insulting.
 
We had Hellblade (a 5-hour walking sim done in 5 years), Matrix (a trailer), Marvel Avengers (a trailer), DS2 (as you said, cutscenes), OT (a trailer).
How do you scale that efficiently in a game like Crimson Desert? GoY? Starfield... any open-world or even linear 30-hour game, unless the dev time is three times GTA 5's?

On a very small scale, or in snippets, it's feasible; in an end-to-end game of decent length, you need 10 years.
Exactly. I don't even need to rehearse and repeat myself. If people want to ride the "graphics are stagnating" train and cry that "developers are lazy," then there is something else they need to blame: Moore's Law. It is on the way out.

Neural Rendering is the future. Neural Faces look like trash now (I've been against them ever since the Zorah demo reveal in 2025), but I will say the tech is very elaborate: you'd need something like 100x more compute power to match Grace's DLSS 5 face fidelity-wise using traditional methods. Good luck with that!

Slimy's reply is so narrow-minded, and frankly his expectations are just unrealistic. All he mentioned were a few "games" and a demo, made with unattainably high effort under extremely limited circumstances and design choices. No wonder he and some others call devs "lazy" with that mentality: he does not consider the fact that making games has gotten exponentially more difficult, much more expensive and more time-consuming, especially in this economy. It's a multifaceted issue. Hellblade 2? Are you serious? Marvel 1943? All we got was a trailer and then two years of silence. The Matrix demo? The 2021 demo that even GTA 6 may not reach, you know, the game with an estimated $1-2 billion budget and 8 years of development with over 3,000 people working on it? Yeah, seems realistic for the other 99.9% of development studios out there.
 
Imagine someone takes the time out of their day to engage with you while you spout nonsense. They give you the courtesy of responding to you like a human being and you turn around and laugh at everything they say.

It's so dumb and insulting.
first time GIF


Welcome to GAF!! At least this thread is usually free of that behavior. Usually.
 
We had Hellblade (a 5-hour walking sim done in 5 years), Matrix (a trailer), Marvel Avengers (a trailer), DS2 (as you said, cutscenes), OT (a trailer).
How do you scale that efficiently in a game like Crimson Desert? GoY? Starfield... any open-world or even linear 30-hour game, unless the dev time is three times GTA 5's?

On a very small scale, or in snippets, it's feasible; in an end-to-end game of decent length, you need 10 years.
Typically every gen we are able to get last gen's linear-game graphics in an open-world setting, and then some. The same thing will happen next gen. It's a matter of graphics horsepower, and we should have enough next gen, especially if the 10-12x ray tracing improvements are real. Nanite is already letting devs push movie-quality assets in games; path tracing will make them look like real life. Nanite is already solving the issues that come up as worlds and games get bigger: its VRAM requirements are relatively low, it handles distant geometry without destroying the VRAM and GPU budget, and devs don't have to sit there making different LODs for everything. The UE5 editor literally lets them bring raw movie-quality assets into the game seamlessly.
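The LOD chore that Nanite-style systems automate can be pictured as the usual screen-space-error selection loop, run per cluster (a toy sketch with made-up numbers; Nanite's real cluster hierarchy is far more involved):

```python
def pick_lod(distance, base_error=0.5, screen_height_px=2160,
             fov_tan=1.0, max_lod=8, pixel_budget=1.0):
    """Pick the coarsest LOD whose geometric error, projected to the
    screen at this distance, stays under a one-pixel budget.
    Each LOD level roughly doubles the error of the previous one."""
    for lod in range(max_lod, -1, -1):
        world_error = base_error * (2 ** lod)
        # Perspective projection of the error into pixels.
        screen_error = world_error / (distance * fov_tan) * (screen_height_px / 2)
        if screen_error <= pixel_budget:
            return lod
    return 0  # nothing coarse enough fits: use full detail

# Far-away objects get cheap meshes, close-up ones get full detail.
print(pick_lod(10000.0), pick_lod(50.0))  # 4 0
```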

And Matrix isn't just a trailer. I played it for like 30 hours. It's as real as it gets. Perhaps too much for PS5, but the PS6 should be able to do it easily in a game.

These AI guys are all trying to sell you a bill of goods. It's in their interest to do so, since they have put literally over a trillion dollars into AI investments. Up until a few years ago, the same people were pushing ray tracing, path tracing, AI upscaling and virtualized geometry, and now all of a sudden they're like, nah, let's throw all of that away and let my one-size-fits-all AI model make up the graphics for you? It is a scam. Do not fall for it.
 
Slimy's reply is so narrow-minded, and frankly his expectations are just unrealistic. All he mentioned were a few "games" and a demo, made with unattainably high effort under extremely limited circumstances and design choices. No wonder he and some others call devs "lazy" with that mentality: he does not consider the fact that making games has gotten exponentially more difficult, much more expensive and more time-consuming, especially in this economy. It's a multifaceted issue. Hellblade 2? Are you serious? Marvel 1943? All we got was a trailer and then two years of silence. The Matrix demo? The 2021 demo that even GTA 6 may not reach, you know, the game with an estimated $1-2 billion budget and 8 years of development with over 3,000 people working on it? Yeah, seems realistic for the other 99.9% of development studios out there.
We are talking about next gen, not this gen. What's not possible this gen is typically possible the gen after, with more graphics horsepower. I can't believe I even have to state this.

You literally said this is the only way forward. No. It's NOT. We are already halfway there. Even if we don't get there next gen in an open-world game, it will be because we didn't have enough raw horsepower; maybe instead of a 30 TFLOPS PS6 we needed a 40 TFLOPS PS6, like Tim Sweeney predicted all those years ago for photorealism. But eventually we will get there. Nanite, path tracing, and even AI techniques like neural rendering to speed up path tracing will all help get us there.

There are a million other ways to do this. Devs are already capturing and loading real-life photos into their games. Nvidia could've trained their model on those assets on a game-by-game basis. They could've offered Capcom the chance to train the model on the actual reference shots of the face model. They could've accelerated path tracing using AI, similar to how they are doing it for virtualized geometry. They didn't. Instead they took the easy and cheap way out and are using the same AI, trained on porn, for every single game. It's insulting.

And while my using the term "lazy devs" might be insulting, you insinuating that the devs can't get there on their own without cheating is even more insulting. At least I have faith that some of the non-lazy devs can get there; you have given up on their talents. I even praised Bethesda for their massive upgrades in graphics rendering, and defended even its NPCs. I've routinely praised devs who tried to push the bar while the rest of the gaming industry wrote them off. The Callisto devs, Respawn, Bethesda and Turn 10 were all heavily criticized for one reason or another, but I never once called them lazy, and I tried to explain that the reason for the poor performance or high CPU and GPU requirements was that these devs pushed the bar. I have faith the ambitious devs will continue to deliver without needing to resort to porno filters.
 
This might be the craziest shot showing the difference to me. Wtf.


oTTTJAyjvQC6N4bZ.jpg

It looks amazing, but a denoiser by itself doesn't do shit like this. Something else has to be going on. This is way too night and day.

Jinzo Prime was on to something!

Nvidia already sneaked DLSS 5 into our games, the filthy bastards!
They just did it smart and kept out the face swaps, and everybody was cheering them.
We'll all bear witness tomorrow, when Crimson Desert releases! :messenger_grinning_smiling:
 
I love how people get mad over what is going to be an OPTIONAL feature in some games for the foreseeable future.
Just turn DLSS5 off if you don't like it, if you even have a compatible graphics card that is...

I for one can't wait to see what developers do with DLSS5 when they adjust it to their liking.

I bet most of the people hating on DLSS5 are the same people who hate on upscaling tech vs. "MUH NATIVE 4K".
 
We are talking about next gen, not this gen. What's not possible this gen is typically possible the gen after, with more graphics horsepower. I can't believe I even have to state this.
Yeah, but with a correspondingly huge increase in dev timelines, budgets, and team sizes every single time. We're gonna see insane stuff next gen no doubt but it'll probably come from an increasingly small number of studios and games that leverage either insane passion or raw fuck-you money, barring some unforeseen paradigm shift in game development (not DLSS 5). Hungry and ambitious teams are doing insane stuff this gen, but we're often hitting the limits of current dev pipelines and manpower more than we're hitting real hardware limits. We can and do bitch about lazy devs all day, but this industry chews people up and spits them out and the majority of talented engineers and artists are eventually gonna find better, more stable jobs elsewhere no matter how passionate they might be about making games. Leaves us in a real shit spot all around.
 
Yeah, but with a correspondingly huge increase in dev timelines, budgets, and team sizes every single time. We're gonna see insane stuff next gen no doubt but it'll probably come from an increasingly small number of studios and games that leverage either insane passion or raw fuck-you money, barring some unforeseen paradigm shift in game development (not DLSS 5). Hungry and ambitious teams are doing insane stuff this gen, but we're often hitting the limits of current dev pipelines and manpower more than we're hitting real hardware limits. We can and do bitch about lazy devs all day, but this industry chews people up and spits them out and the majority of talented engineers and artists are eventually gonna find better, more stable jobs elsewhere no matter how passionate they might be about making games. Leaves us in a real shit spot all around.
I don't know about that. Thanks to UE5, we have seen previously B- and C-tier studios, Chinese and Korean developers, and small 20-30-person dev teams routinely outdo big Sony and MS studios in graphics rendering. The trend will only continue next gen, when path tracing becomes viable on consoles.

4I9xpIC.gif



iXWALVu.gif


uBCnJkn.gif
 
I don't know about that. Thanks to UE5, we have seen previously B- and C-tier studios, Chinese and Korean developers, and small 20-30-person dev teams routinely outdo big Sony and MS studios in graphics rendering. The trend will only continue next gen, when path tracing becomes viable on consoles.

4I9xpIC.gif



iXWALVu.gif


uBCnJkn.gif
I'll believe it when I see it. They were telling me at the start of this gen that ray tracing would make development cheaper, faster, and easier. What actually happened is that most games now need double the work on lighting for RT and non-RT versions. I'm going to need some serious convincing that path tracing [possibly] going mainstream will change the trajectory we've been on for decades. Also I love you but your first and third gifs are marketing shots from games that aren't even out yet :messenger_tears_of_joy: is the middle one Delta Force? Didn't play that one.

What I am expecting is more Expedition 33-tier games. Small teams using off the shelf tech to make focused games that don't blow my mind but punch well above their weight.

I do like seeing the legacy AAA get humiliated by up and coming smaller studios but only to a point. I really want legacy AAA to get their shit together.
 