> so DLSS 5 is just a screen space effect/filter?

It's a glorified snapchat filter.
> Feel free to talk about the issues. Technical details, not emotional opinions.

Right, let's pretend the other threads don't exist. I'm not gonna go on a gif-collect-a-thon. Click any of the multitude of YT vids that have already done that, or simply scroll up for other examples....
> They are literally pixels! The direction of the light is literally pixels directed to a region.

Nope. That's just how it's presented. Reversing the calculations that led to that to get an accurate position of the source is quite literally impossible. The best you get is a very broad assumption, and the moment you have several light sources in a picture that can all interact with the same pixels, you're down to blind guessing, which is exactly what we're seeing in the blown-out example scenes....
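To put that in concrete terms, here's a toy Lambertian shading sketch (Python, purely illustrative, nothing to do with Nvidia's actual pipeline): two completely different light setups shade the same pixel to the same value, which is why inverting pixels back to light positions becomes ill-posed once several sources overlap.

```python
def lambert_pixel(albedo, normal, lights):
    """Shade one Lambertian pixel: albedo * sum of max(0, n.l) * intensity."""
    nx, ny, nz = normal
    total = 0.0
    for (lx, ly, lz), intensity in lights:
        total += max(0.0, nx * lx + ny * ly + nz * lz) * intensity
    return albedo * total

normal = (0.0, 0.0, 1.0)  # surface facing the camera

# One head-on light at intensity 0.8...
setup_a = [((0.0, 0.0, 1.0), 0.8)]
# ...versus two angled lights at intensity 0.5 each (unit directions).
setup_b = [((0.6, 0.0, 0.8), 0.5), ((-0.6, 0.0, 0.8), 0.5)]

# Both setups shade this pixel to the same value (~0.72), so the pixel
# alone cannot tell you where the lights were.
pixel_a = lambert_pixel(0.9, normal, setup_a)
pixel_b = lambert_pixel(0.9, normal, setup_b)
```

With more lights and more pixels you get more constraints, but also more unknowns interacting per pixel, so the ambiguity only gets worse, not better.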
> You seem to have no idea how a machine learning model works.

You have no idea how basic lighting works...
> Light has a transformative power in a scene. Complain to Capcom if they made the model like that. And besides, I don't see what the problem is, she looks even more beautiful now.

Capcom did nothing of the sort. The DLSS 5 version has nothing to do with the original model, and whether you like it better is completely irrelevant. Fact is, the filter doesn't enhance the scene, it reimagines it.
> Vex only looks at things superficially and goes with the flow of the internet.

Just so that we're clear here: you're disregarding him and that specific video because he researched the information and not only came to the same conclusion as Digital Foundry in their recent correction video, but also showed the exact same research that DF just did in their correction video (which they should have shown to begin with)?
lol at the thumbnails, ridiculous. As if a murder was committed or something
There is some disingenuous marketing from Nvidia around what DLSS 5 does to graphics. First it was only lighting, now it's more things than that. There are some things that Nvidia MUST IMMEDIATELY address. I knew that Neural Faces would be extremely controversial & problematic the day they revealed & showed it in that Zorah tech demo at CES 2025 with the RTX 50-series reveal.

The more I watch of this video, the more I keep asking "why didn't you mention these potential issues in your original coverage video?" Like they're suddenly, only now, listing out its problems in this new video. Now they're suddenly bringing up the other half of this tech.

This is what I mean @BlownUpRich, it just comes across optically bad and almost disingenuous, and DF definitely doesn't help Nvidia's case here.

Transparency about this tech should have been there from the start, from all sides.
> Do they not know that Neural Rendering tools can also be malleable & customized to each of the developers' needs?

We don't know this. From the few snippets given to us, the devs do not have control over the AI model, just some sliders that adjust the photorealism aspect of each material or character model.
> Gamers want graphics to improve dramatically? That's the only way,

No it's not. How many times do we have to post gifs of Hellblade 2, Marvel Avengers, Matrix, Death Stranding 2 (in cutscenes), Kojima's next game OT, and other photorealistic games before the point sinks in? It is already possible on 10 tflops GPUs. It will only get better when we get 30-40 tflops GPUs in a year and a half.
> if they don't want that & want companies to become "lazy"

This is literally the definition of lazy. They didn't do any work themselves; they just put their game through a photorealism filter that literally made up the graphics for them. The Oblivion upgrade, for example, is by far the best-looking one. The game was built on UE5. We have near-photorealistic games shipping on UE5 this gen. It should've looked like that at launch.
> The more I watch of this video, the more I keep asking "why didn't you mention these potential issues in your original coverage video?"
> Like they're suddenly, only now, listing out its problems in this new video. Now they're suddenly bringing up the other half of this tech.
> This is what I mean @BlownUpRich, it just comes across optically bad and almost disingenuous, and DF definitely doesn't help Nvidia's case here.
> Transparency about this tech should have been there from the start, from all sides.

I mean, I am not going to chastise them for being hyped about an insane new tech like this. I too was blown away at first. You can see me literally have an orgasm in my first few posts.
Just wanted to make sure.
> I'm saying this because I've been following the channel for a while, but I recently stopped watching precisely for that reason. He basically does what our friend above does: watches videos from other YouTubers and repeats what they say, but with different words so it doesn't seem like plagiarism.
> This is not uncommon, many other smaller YouTubers do the same.

That's irrelevant to the fact that their information literally lines up.
It's possible that there is more fine-grained control over this in the future, but the results so far from six different developers show the same AI slop in every single game. People are going by the results here, not the potential of this tech or Jensen's word that this is completely customizable, which it is not.
You are discussing character, I am discussing evidence.
Also, I just wanted to state for the record that his video came out before DF's video, so there was no way he repeated that research information from them.
This trailer shows all the next gen features in Saros: 3D Audio, Adaptive Triggers, Haptic Feedback and near-instant loading.

Only problem? Every single one of these features was in the first game. They didn't add anything new or next gen 5 years later, despite having access to a brand new mid-gen console. So the marketing team ran the same trailer again lol

If this isn't emblematic of Sony studios' effort this gen, I don't know what is. We used to think that once these indie studios got absorbed by big publishers like Sony, they would get an influx of cash and graduate to another tier. Instead they made the same game again with a different coat of paint while still taking 5 years.

But can't call them lazy. No, that's too harsh. Give me a fucking break.
After this whole debate about DLSS 5, I came to the conclusion that most of the people talking about it are completely unaware of what they don't know... they're at the peak of ignorance and don't even grasp how little they understand.

They just heard "generative AI" and, like Pavlov's dog, started drooling, thinking it's the same shit as unethical slop image generators... for the love of Christ, go educate yourself before raging on the internet for no reason.
DLSS 5 is not a prompt-based generator... it's not creating stuff based on someone else's images and hallucinating results. It uses the information from the raster to build up a final render frame with the same information but with better lighting and shading...
I'll even give you an example of how much of an impact better shading and lighting has. This is a character I worked on not long ago. On the left you have a raster render with some bad shaders. On the right you have a render with ray tracing on and a much better shader for both hair and skin. They don't even look like the same person... do they? This is what DLSS 5 is doing: getting a result like the one on the right (tbh a lot better) at a smaller cost than actually rendering it.
Still the same geo, same textures, same light sources.
Some of you will go and say the one on the left is better and it's the artist's vision. It's not... it's just the artist's limitation due to shading and lighting constraints. Every single artist out there would love to get the right-hand result in real time.
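To make the "raster data in, shaded frame out" point concrete, here's a toy sketch (pure Python; the G-buffer channel layout, the tiny linear "network", and all the numbers are invented for illustration, since the real model is proprietary and far deeper) of a shading pass that consumes the scene's own rasterized data rather than a text prompt:

```python
def neural_shade(pixel_gbuffer, weights, bias):
    """Stand-in for a learned shading pass: one linear 'neuron' per output
    channel over a pixel's G-buffer features. A real network is far deeper,
    but the input contract is the point: it's this scene's own data."""
    feats = (
        list(pixel_gbuffer["albedo"])    # material colour (3 values)
        + list(pixel_gbuffer["normal"])  # surface orientation (3 values)
        + [pixel_gbuffer["depth"]]       # distance from camera (1 value)
    )
    out = []
    for w_row, b in zip(weights, bias):
        v = sum(f * w for f, w in zip(feats, w_row)) + b
        out.append(min(1.0, max(0.0, v)))  # clamp to displayable range
    return out

# One pixel of rasterized scene data (channel layout is illustrative).
pixel = {"albedo": (0.5, 0.4, 0.3), "normal": (0.0, 0.0, 1.0), "depth": 0.8}

# Hand-picked toy weights: 3 output channels x 7 input features.
weights = [
    [0.9, 0.0, 0.0, 0.0, 0.0, 0.2, 0.1],
    [0.0, 0.9, 0.0, 0.0, 0.0, 0.2, 0.1],
    [0.0, 0.0, 0.9, 0.0, 0.0, 0.2, 0.1],
]
bias = [0.05, 0.05, 0.05]

rgb = neural_shade(pixel, weights, bias)  # shaded colour for this pixel
```

The only point of the sketch is the input contract: every value the "model" touches comes from this frame's geometry, materials and depth, which is why it isn't comparable to prompt-based image generators.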
> You base your opinion on other people's opinions and YouTube videos.
> Don't you think it would be better to open a book on linear algebra and computer graphics?

That's rich coming from someone who obviously doesn't even know the absolute basics of what he's talking about. And since when is example material not fit for forming an opinion, since we're all looking at the same press material after all....
Georgian Avasilcutei worked on Remember Me and Life is Strange at DONTNOD, Dishonored 2 and Dishonored: Death of the Outsider at Arkane, and Hogwarts Legacy at Avalanche.
> Wow. Amazing.

Yes finally
The combat reminds me of Mass Effect 2. FINALLY!
I love the materials in the platforming sequence. Really gorgeous stuff. I am starting to get so fucking hyped for this gen. After a rough first 4-5 years (with some exceptions), it seems like we are about to get a lot of amazing games (both gameplay and visuals) back to back to back over the next 12 months or so.
> The materials in the locations look pretty good.

Yes, looks really good
Is that a Jensen Ackles model?
"it's not creating stuff based on someone else's images and hallucinates results"
Porn face Grace begs to differ as do all the scenes that suddenly have Photostudio lighting....
That guy doesn't know an IOTA more than everyone else who watched that presentation. He is still on the "it's just the lighting" track that's been disproven.....
> It certainly looks like it. If you look on ArtStation, he's made quite a few 3D models of famous artists.

Boy, I don't even know where on the stupidity scale I'd rate "someone did good in a completely different field so he must know about that other one, too".
> Boy, I don't even know where on the stupidity scale I'd rate "someone did good in a completely different field so he must know about that other one, too".

Some of the "all in" takes basically amount to: everyone that doesn't like this thing I like simply doesn't have the brain capacity to truly understand it. As if they're some kind of technical director at a bleeding-edge company and the rest of the world has never owned a computer.
With your track record so far I'll give it a 10/10
> The mere fact that he mentions "porn face" makes me completely dismiss his argument.

Says the guy who thinks pixels come with a "lit by" sticker, quotes people that still pretend that there are no model changes, and likens a Photoshop brightness adjustment to directional lighting in motion....
Says the guy who thinks pixels come with a "lit by" sticker and quotes people that still pretend that there are no model changes.
"Dismiss", lol. More like simply having no answer ...
> No it's not. How many times do we have to post gifs of Hellblade 2, Marvel Avengers, Matrix, Death Stranding 2 (in cutscenes), Kojima's next game OT, and other photorealistic games before the point sinks in? It is already possible on 10 tflops GPUs. It will only get better when we get 30-40 tflops GPUs in a year and a half.

We had Hellblade (5-hour walking sim done in 5 years), Matrix (trailer), Marvel Avengers (trailer), DS2 (as you said, cutscenes), OT (trailer).
> I already told you to read a book on computer graphics. Don't flood this thread with nonsense.

You're the last one to tell anyone to "read up".
> Somebody keeps leaving laughing emojis on comments. They must be a very happy person!

Imagine someone takes the time out of their day to engage with you while you spout nonsense. They give you the courtesy of responding to you like a human being and you turn around and laugh at everything they say.
> We had Hellblade (5-hour walking sim done in 5 years), Matrix (trailer), Marvel Avengers (trailer), DS2 (as you said, cutscenes), OT (trailer).
> How do you scale that efficiently in a game like Crimson Desert? GoY? Starfield... any open-world or even linear 30-hour game, unless the dev time is GTA5 times 3?
> On a very small scale or in snippets it's feasible; on an end-to-end game with a decent length, you need 10 years.

Exactly. I don't even need to rehearse and repeat myself. If people want to ride the train of "graphics are stagnating" and cry out "developers are lazy", then there's something else they need to blame: Moore's Law, and it's on the way out.
It's so dumb and insulting.
> We had Hellblade (5-hour walking sim done in 5 years), Matrix (trailer), Marvel Avengers (trailer), DS2 (as you said, cutscenes), OT (trailer).
> How do you scale that efficiently in a game like Crimson Desert? GoY? Starfield... any open-world or even linear 30-hour game, unless the dev time is GTA5 times 3?
> On a very small scale or in snippets it's feasible; on an end-to-end game with a decent length, you need 10 years.

Typically, every gen we are able to get last gen's linear-game graphics in an open-world setting, and then some. The same thing will happen next gen. It's a matter of graphics horsepower, and we should have enough next gen, especially if the 10-12x ray tracing improvements are real. Nanite is already letting devs push movie-quality assets in games; path tracing will make them look real-life. Nanite is already solving the issues that come up as worlds and games get bigger: its VRAM requirements are relatively low, and it handles distant geometry without destroying the VRAM and GPU budget. Devs don't have to sit there making different LODs for everything; the UE5 editor literally lets them bring raw movie-quality assets into the game seamlessly.
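For context on the LOD point: the hand-authored workflow that virtualized geometry like Nanite largely replaces looks roughly like this classic distance-based swap (illustrative sketch, thresholds invented), with a separate artist-made mesh per level:

```python
def pick_lod(distance, thresholds):
    """Classic hand-authored LOD selection: artists build one mesh per level
    (LOD0 = full detail) and the engine swaps them by camera distance.
    This per-asset authoring is the manual work virtualized geometry automates."""
    for level, max_dist in enumerate(thresholds):
        if distance <= max_dist:
            return level
    return len(thresholds)  # past the last threshold: lowest-detail mesh

# Illustrative distance cutoffs in metres for LOD0..LOD2.
thresholds = [10.0, 50.0, 200.0]

lods = [pick_lod(d, thresholds) for d in (5.0, 30.0, 120.0, 500.0)]
# -> [0, 1, 2, 3]: four detail tiers, each needing its own authored mesh.
```

Multiply that per-asset authoring across thousands of props in an open world and you can see why automating it matters for scaling those showcase visuals to a full-length game.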
> Slimy's reply is so narrow-minded, and frankly, his expectations are just unrealistic. All he mentioned were a few "games" & a demo with unattainably high effort under extremely limited circumstances & design choices. No wonder he & some others call devs "lazy" with that mentality. He does not consider the fact that making games has gotten exponentially more difficult & much more expensive & time-consuming, especially in this damn economy. It's a multifaceted issue. Hellblade 2? Are you serious? Marvel 1943? All we got is a trailer & silence for two years now. The Matrix demo, oh, the 2021 demo that even GTA 6 may not reach, you know, the game with an estimated budget of over $1-2 billion & 8 years of development with over 3,000 heads working on it? Yeah, seems realistic for the other 99.9% of development studios out there.

We are talking about next gen, not this gen. What's not possible this gen is typically possible the gen after with more graphics horsepower. I can't believe I even have to state this.
This might be the craziest shot showing the difference to me. Wtf.
It looks amazing, but a denoiser by itself doesn't do shit like this. Something else has to be going on. This is way too night and day.
> We are talking about next gen, not this gen. What's not possible this gen is typically possible the gen after with more graphics horsepower. I can't believe I even have to state this.

Yeah, but with a correspondingly huge increase in dev timelines, budgets, and team sizes every single time. We're gonna see insane stuff next gen, no doubt, but it'll probably come from an increasingly small number of studios and games that leverage either insane passion or raw fuck-you money, barring some unforeseen paradigm shift in game development (not DLSS 5). Hungry and ambitious teams are doing insane stuff this gen, but we're often hitting the limits of current dev pipelines and manpower more than we're hitting real hardware limits. We can and do bitch about lazy devs all day, but this industry chews people up and spits them out, and the majority of talented engineers and artists are eventually gonna find better, more stable jobs elsewhere no matter how passionate they might be about making games. Leaves us in a real shit spot all around.
> I love how people get mad over what is going to be an OPTIONAL feature in some games for the foreseeable future.

Actually, I'm not that sure that Nvidia won't abandon DLSS 4.
> Yeah, but with a correspondingly huge increase in dev timelines, budgets, and team sizes every single time. We're gonna see insane stuff next gen, no doubt, but it'll probably come from an increasingly small number of studios and games that leverage either insane passion or raw fuck-you money, barring some unforeseen paradigm shift in game development (not DLSS 5).

I don't know about that. Thanks to UE5, we have seen previously B- and C-tier studios, Chinese and Korean developers, and small 20-30 person dev teams routinely outdo big Sony and MS studios in graphics rendering. The trend will only continue next gen when path tracing becomes viable on consoles.
> Actually, I'm not that sure that Nvidia won't abandon DLSS 4.

I'm with you on this one. At least I think people will have the option of choosing DLSS with or without neural rendering. We will see, time will tell.
> I don't know about that. Thanks to UE5, we have seen previously B- and C-tier studios, Chinese and Korean developers, and small 20-30 person dev teams routinely outdo big Sony and MS studios in graphics rendering. The trend will only continue next gen when path tracing becomes viable on consoles.

I'll believe it when I see it. They were telling me at the start of this gen that ray tracing would make development cheaper, faster, and easier. What actually happened is that most games now need double the work on lighting for RT and non-RT versions. I'm going to need some serious convincing that path tracing [possibly] going mainstream will change the trajectory we've been on for decades. Also, I love you, but your first and third gifs are marketing shots from games that aren't even out yet.