
Nvidia at Live GTC : DLSS 5

It's certainly nice to have more options, but it won't be useful for all games; sometimes more facial detail makes a character look older. They shouldn't have used Grace as the first showcase pic, because in that case she looks older with it on, and actually kind of worse.
She's supposed to be ~27 in the game, which confused the hell out of me because she looked young as hell.
 
I also can't get behind this trend of no longer hating bullshots (sorry, but this trend of images turning into soup or showing plenty of visual artefacts the moment the camera moves is best summarised by calling them bullshots… most of these examples have a still camera or show artefacts at the edges of the screen).

I do not look forward to UE games with even more temporal-accumulation-induced artefacts, and this on top…
Bullshot love or hate is very (brand or IP) dependent, you see.

Just as the cherry-picking goofy shits do constantly in graphics comparison threads.
 
Who is this bitch?
Lover of "AI slop."

 
OBLIVIONLONG.gif

OBLIVIONGIRL.gif


God Bless Nvidia. This is going to look so insanely good on your actual TV screen. I'm telling you, a year from now this thread will be serving up the most crow ever on this forum.

@Vick You'll be happy to know, I've since discovered the light of Game-mode customization. I use that mode now. However, I still play around with both Vivid and FILMMAKER mode depending on the game.

I'm sorry man but that looks awful. It's extremely uncanny.
 


If you've been scrolling social media, you'd think NVIDIA just shipped an Instagram beauty filter for video games. I get why that's the first reaction, but it misses the true picture by a wide margin.

Faces, though, have been the holdout. Getting a human face to look truly photorealistic in real time has been one of the most expensive problems in computer graphics from a compute standpoint: subsurface scattering on skin, the way light interacts with individual strands of hair, the micro-expressions that make a character feel alive rather than like a wax figure. All of that requires an enormous amount of rendering horsepower.
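For a sense of why skin is hard, here is a minimal sketch of one classic cheap workaround engines have used when true subsurface scattering is too expensive: "wrap" diffuse lighting, which softens the hard shading terminator the way light scattered under the skin does. The function names and the wrap factor are illustrative, not anything from NVIDIA's actual pipeline.

```python
# Toy comparison of standard Lambert diffuse vs. "wrap" diffuse,
# a cheap approximation of the soft falloff subsurface scattering
# gives skin. The wrap value is illustrative, not any engine's real one.

def lambert(n_dot_l: float) -> float:
    """Standard diffuse: light cuts off hard at the terminator."""
    return max(n_dot_l, 0.0)

def wrap_diffuse(n_dot_l: float, wrap: float = 0.5) -> float:
    """Wrap lighting: shifts the falloff so surfaces facing slightly
    away from the light still receive some energy, mimicking light
    that has scattered beneath the surface."""
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

if __name__ == "__main__":
    # At the terminator (surface perpendicular to the light):
    print(lambert(0.0))       # 0.0 -> hard black edge (plastic/wax look)
    print(wrap_diffuse(0.0))  # ~0.33 -> soft, "fleshy" falloff
```

Real skin shading stacks several effects on top of this (texture-space diffusion, translucency, specular layers), which is part of why those floating-head demos ate an entire frame budget.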

I've probably seen ten different "floating head" tech demos over the course of my career. That's not an exaggeration. They're always a single head with no hair, no body, no environment, because rendering a photorealistic face at that level of quality is so expensive that it can only be done in isolation. You never see it inside an actual game, because the performance budget won't allow it.

The NVIDIA team put it well during my demo. It's a psychological effect. You've seen environments rendered really well before. When you suddenly see a character rendered at that same photorealistic level, your brain flags it immediately. It stands out.

What I saw in the demos was a comprehensive improvement across the entire scene. And the moment that really drove this home wasn't a face. It was a coffee maker.

In Starfield, there's a countertop scene with a coffee machine, some paper towels, a cup, napkin holders. Standard environmental clutter. With DLSS 5 off, everything looks flat. The coffee maker fades into the background. Toggle it on, and suddenly the objects have shape. The lighting wraps around them naturally. The spatial relationships between the items and the surfaces they're sitting on become clear. It goes from "assets placed in a scene" to "objects that actually belong in a room."

In the Zorah tech demo, which is a 300 GB courtyard scene built by 20 full-time artists, the subsurface scattering on foliage was just as impressive as anything happening on character faces. Leaves picked up that translucent glow from backlighting that is incredibly difficult and expensive to model and render through traditional means.

The AI model powering DLSS 5 is a single unified model. Same model for every game. It's not trained per-title, per-face, or per-object type.

That's not a filter. That's a fundamentally different approach to how the final image gets assembled. And it's deterministic and consistent from frame to frame, which is a hard requirement for games.
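Frame-to-frame consistency matters because these techniques keep a running history buffer. A toy sketch of the history blend at the heart of TAA/DLSS-style temporal accumulation (the alpha value and the scalar "frames" are illustrative stand-ins for per-pixel data): on a static scene the history converges, and after a sudden change it lags, which is where the ghosting people complain about comes from.

```python
# Toy temporal accumulation: each new frame is blended into a running
# history buffer. Alpha is illustrative. On a static signal the history
# holds the true value; after a sudden change (a camera cut) the history
# trails the new value for several frames -- perceived as ghosting.

def accumulate(history: float, current: float, alpha: float = 0.1) -> float:
    """Blend the current frame into the history buffer."""
    return (1.0 - alpha) * history + alpha * current

def run(frames, alpha=0.1):
    """Feed a sequence of frame values through the accumulator."""
    history = frames[0]
    out = []
    for f in frames:
        history = accumulate(history, f, alpha)
        out.append(history)
    return out

if __name__ == "__main__":
    static = run([1.0] * 10)          # static scene: stays at 1.0
    cut = run([0.0] * 5 + [1.0] * 5)  # sudden change: history lags behind
    print(static[-1], cut[-1])        # the second value is well below 1.0
```

This is also why determinism is a hard requirement: if the model produced even slightly different output for identical inputs, the history blend would turn that noise into visible flicker.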


 


God Bless Nvidia. This is going to look so insanely good on your actual TV screen. I'm telling you, a year from now this thread will be serving up the most crow ever on this forum.

@Vick You'll be happy to know, I've since discovered the light of Game-mode customization. I use that mode now. However, I still play around with both Vivid and FILMMAKER mode depending on the game.
What I see here are photorealistic characters on top of stiff, video-gamey mannerisms. It's uncanny.

The changes are all surface level, and they clash with the underlying engine in a very off-putting way.
 
I'm on the side of being very impressed by what was shown, but I don't think it was an accident that they didn't demo highly active action scenes. Lots of slow movement, slow panning, and scenery. I'd be curious to see what RE:Requiem looks like during Leon's intro, for example. I'd bet there's a lot of artifacting at this point, but I'm sure they're aware and working to improve it.
 
Good news: as this DLSS layer improves, we can spend more of the game's main resources on better animations, and offload a ton of extremely expensive shaders that gradually become pointless.
I can only hope this is one of the long-term benefits. Right now it just feels like a mashup of random tech that doesn't really belong together.
 
People hellbent on videogames being recognized as a serious form of art trying to convince you that this Torment Nexus nightmare is copacetic.
 
I'm surprised more people can't imagine the future of this. The faces look kinda weird "now," but they're just gonna keep improving. It's gonna raise the baseline of visuals, for amateurs especially. I mean, some of these environment improvements are crazy, and this is day 1. A negative is you're gonna lose individuality in your art, especially as AI pushes more and more aspiring artists out before they have a chance to develop. And who really knows what happens with gen AI when the snake inevitably eats its tail.
 
People hellbent on videogames being recognized as a serious form of art trying to convince you that this Torment Nexus nightmare is copacetic.
I think we just grew up on more of the enthusiast side of things, hence why most of us are still here on these forums and miss getting excited at devs hand-crafting better models and worlds with hours of raster dedication, honing their crafts and not only one-upping themselves, but each other, every generation.

It was exciting seeing the next Naughty Dog character models: chest hair moving, pores in the skin, blemishes and tones, muscle contortions, lighting, animation motion branches, etc. And maybe some are concerned everything will become more "samey sterile at the flick of a switch," so to speak.
 
I can only hope this is one of the long-term benefits. Right now it just feels like a mashup of random tech that doesn't really belong together.
This. It doesn't feel like a natural evolution of the same tech; instead it feels like an AI filter attached to it like a symbiote.

Like others have said this should somehow be applied piecemeal instead of 'this is now the new DLSS, fully packaged together.'

I feel like Todd Howard will go full steam ahead with his games looking exactly like this footage (without future changes), regardless of how awkward or bad it looked. And that's worrying.
 


I feel like Todd Howard will go full steam ahead with his games looking exactly like this footage (without future changes), regardless of how awkward or bad it looked. And that's worrying.

To be fair, I don't consider anything from them to be a 'looker'. Sometimes Fallout 4 with a radioactive storm was quite cool to watch, but generally their art and animations were always weak, until Starfield stepped up the game. From his point of view, he just sees it the same as anything external that manages to elevate the way the game looks.
 
Not a worry, this is already confirmed.



Games using the DLSS 5 model are indeed about to all look the same: a Bethesda game, a Rockstar one, a Capcom, a Naughty Dog, a Ubisoft, a CDPR, a From.

Can't actually believe so many people here are okay with this. It's a whole different story outside of these virtual walls, but I'm still very surprised.
It's not that we don't see the potential for good; it's just too hard to appreciate, being such a tiny fraction of the catastrophic potential there is for the entire industry.
You're jumping to the most extreme conclusions and can't be taken seriously.
 
To be fair, I don't consider anything from them to be a 'looker'. Sometimes Fallout 4 with a radioactive storm was quite cool to watch, but generally their art and animations were always weak, until Starfield stepped up the game. From his point of view, he just sees it the same as anything external that manages to elevate the way the game looks.
The answer to this problem shouldn't just be what we saw in this footage. It's an extreme change to solve a problem they could have fixed ages ago.
 
The answer to this problem shouldn't just be what we saw in this footage. It's an extreme change to solve a problem they could have fixed ages ago.

It wouldn't be a Bethesda game if the character modeling and animation system didn't feel 10-15 years behind where the rest of the industry is.
 
If DLSS 5 is 'AI slop,' but current in-market DLSS, FSR, frame gen, PSSR, etc. are fine…

They aren't. #TeamNative

I always try native first with a new game, but it has its uses. When you're going from 1440p to 4K, the difference is unnoticeable these days. I'm not a fan of these 720p-to-4K upscales, though.
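The raw pixel math backs that intuition up (standard resolutions, nothing vendor-specific): 4K has 2.25x the pixels of 1440p but 9x the pixels of 720p, so a 720p-to-4K upscale has far more missing detail to invent.

```python
# Pixel counts behind common upscaling jumps.

def pixels(width: int, height: int) -> int:
    return width * height

uhd   = pixels(3840, 2160)   # "4K" UHD
qhd   = pixels(2560, 1440)   # 1440p
hd720 = pixels(1280, 720)    # 720p

print(uhd / qhd)    # 2.25 -> 1440p already supplies ~44% of 4K's pixels
print(uhd / hd720)  # 9.0  -> 720p supplies only ~11%, the rest is inferred
```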
 