
NVIDIA DLSS 4.5 to feature 2nd Gen Transformer model and Dynamic 6x Frame Generation

DLSS is my favorite graphics tech from the last few years. I always enable it, even when my GPU could push decent fps at high res, because it usually means less power draw.

I've been trying to play Phantom Liberty with MFG but haven't liked the results so far. Maybe with this update we'll get a noticeable improvement using 2x and 4x as well.
 
They will eventually. You can only spend so much money and time on something before giving up if you don't see results. And AGI anytime soon is still extremely unlikely. The scientific consensus is pretty clear on this. LLM research especially appears more and more likely to be a dead end.

If a decade passes and nobody can come close to AGI, well, expect countries and companies to lose interest.

Thinking these companies are stuck on LLMs is old news

By the time you hear mass media talk about a limit or bottleneck in AI, it's like the stock market: by the time your uncle talks about a stock at the Christmas party, it's already old news.

These companies hire the best brains in the field and collaborate heavily with the best computer scientists at universities.

There's already a ton of branching models in development moving away from LLMs towards world/physical models

Meta's V-JEPA (Yann LeCun)
Google's Titans architecture
Amazon's project Prometheus
OpenAI is known to be working on a world model

And of course the others, like Anthropic, plus labs in China and other countries.

Nobody will slow down until AGI, even if it takes 20 years, though I guarantee it'll be much faster than that. Even before AGI, when you have agents that can beat the brightest minds in a field, or can scale up the equivalent of 200,000 expert-level humans on an engineering project and compress development that would take decades into hours or days, that's already a huge national security asset.

It's not humans who will develop AGI; it'll be AI. New models with the training wheels off, programming in their own languages and trying to create the future model that will achieve AGI, will speedrun any attempt that would be made by humans.
 
Think tanks on another level. It's as fascinating as the potential horror scenarios are scary.
 
Is their plan to keep increasing fake frames each generation? Such a weird play
Not really a weird play; it's the inevitable future and a smart long-term strategy. There's really no other way to infinitely scale "performance" this fast and affordably: CPU cores get faster at a much slower rate than GPUs, game engines run into a wall, and nobody needs to buy a new GPU once they're CPU-limited in everything.

Before long, almost all frames are going to be generated as the standard. You'll just select your desired output frame rate, like 500 or 1000, get as many rendered frames as possible, and the rest will be generated (perhaps with limits in place). You won't even care, because it'll be impossible to tell them apart when you're seeing hundreds of frames per second.
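
A rough sketch of how that "pick an output target, fill the rest" logic might work. Purely illustrative, not Nvidia's actual algorithm; all the names and numbers here are made up:

```python
def pick_fg_multiplier(rendered_fps: float, target_fps: float,
                       max_multiplier: int = 6) -> int:
    """Pick the smallest generation factor that reaches the target rate.

    Hypothetical logic for a dynamic frame-gen mode: keep every real
    frame the GPU/CPU can render, then fill the gap to the display's
    target rate with generated frames.
    """
    for multiplier in range(1, max_multiplier + 1):
        if rendered_fps * multiplier >= target_fps:
            return multiplier
    return max_multiplier  # capped, e.g. at 6x for "Dynamic 6x"

# Example: CPU-bound at ~100 fps on a 270 Hz display -> 3x
print(pick_fg_multiplier(rendered_fps=100, target_fps=270))
```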

Well, we're already near that point now. I've been using 3x/4x FG for everything that supports it for a while, and it's kind of hard to tell whether MFG is on already. That's with a screen that only goes up to 270 Hz, and while using ELMB Sync, which should make artifacts much easier to spot. My CPU can barely do like 100 fps in modern games, so FG is necessary already. As refresh rates and base performance rise, and with frame generation tech getting better, it'll become increasingly useful/interchangeable. It'll be a while before it becomes the norm, but no one will have to worry about game performance soon, thanks to fake frames.
 
Curious about dynamic MFG. I know Lossless Scaling has that built into the latest update, but it's a coin toss whether it works on whatever game I'm playing. I sometimes like to lock in 120 fps on some AAA games with the Nvidia MFG 2x multiplier; god only knows if it's the multiplier or the GPU doing the frames, since I game at 1440p DLSS with a 5070 Ti.
 
I'm excited to try out 4.5. People scoff at this stuff, but it's literally extending the life and performance of Nvidia cards like never before. Last time they improved the model, I gained performance by dropping down a tier while keeping the same IQ. I'm hoping to do the same again this time.

And I don't give a damn about MFG given that I'll be using this 4090 for a while, but every time they improve 4x, and now 6x, it makes 2x that much better, to the point that it has gone from basically unusable to me to something I use in multiple games.

It's kinda hilarious how there are still detractors to this stuff when it's clearly the best GPU tech introduced in a long-ass time. And as much as I hate that Nvidia is all about AI these days, at least they're giving their gaming customers some tech that is the result of it. AMD didn't show shit that was interesting at their show and mentioned AI 200 times in an hour-long show. It's no wonder the gap hasn't been closed at all.

 
Still can't believe a year ago I made this post

"The thing that caught my ear was him mentioning Nvidia saying that at 4K DLSS 4.0 Performance is more like current DLSS balanced in terms of quality. Amazing stuff if true!"


And damn, they just keep outdoing themselves here!
 
Dang, it's too late tonight for me to play around with it. I'll have to watch YouTubers do A/B comparisons while at work tomorrow.
I assume everyone will be testing CP77, but what would be a better game to test? Something that might have had lots of ghosting.
 
Which is DLSS 4.5? Preset L or M?

"The second-gen Transformer model is debuting with two new model presets that you can select as you please. Model M is best suited for general compatibility, while Model L is better if you're gaming at 4K with Ultra settings enabled"

Source: Nvidia
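
If you wanted to codify that guidance, it boils down to something like this: a hypothetical helper that just restates Nvidia's one sentence, so the cutoff and actual per-game behavior may well differ:

```python
def pick_dlss_preset(output_height: int, ultra_settings: bool) -> str:
    """Map Nvidia's stated guidance to a preset: Model L for 4K + Ultra,
    Model M otherwise (general compatibility). Illustrative only."""
    if output_height >= 2160 and ultra_settings:
        return "L"
    return "M"

print(pick_dlss_preset(2160, True))   # -> "L"
print(pick_dlss_preset(1440, True))   # -> "M"
```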
 
I installed driver 591.74, but presets M and L don't show up.

PS: Opting into the beta makes the new presets appear.
 
Do you guys use the DLSS Override - Super Resolution mode from the Nvidia App? Or just select the DLSS mode in-game?
 
I was playing RDR2 with DLAA, and I'm testing it now with DLSS Quality preset L; it looks as good as DLAA, wtf. What magic is this lol
 
They've definitely moved past simple chatbots.
Nvidia and Google in particular are now placing AI into simulations (game worlds) to speed up the process.



Video Summary on World Models (Genie)
DeepMind's Genie series (Genie 2/3) are foundation world models trained on massive video datasets to learn real-world causal mechanics/physics (e.g., gravity, collisions, fluids), object permanence, spatial dynamics, and interactions beyond what language encodes.

How it works:
From a text prompt (e.g., "cyberpunk city"), Genie generates interactive 3D environments in seconds. Photorealistic, playable worlds with consistent rules, long-horizon memory, and real-time action controls. It simulates "what-ifs" (counterfactuals) and passes physics benchmarks (e.g., pendulums, liquids) better than prior models.

Pairs with SIMA agents (Gemini-powered):
Agents explore these worlds via curiosity-driven Reinforcement Learning, execute natural language tasks (e.g., "build shelter"), and generate infinite synthetic data for self-improvement loops.
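
For a sense of what "curiosity-driven exploration feeding a self-improvement loop" means mechanically, here's a toy sketch. It is not DeepMind's code; it just uses the classic prediction-error formulation of curiosity with a stand-in environment:

```python
import random

def env_step(state: int, action: int) -> int:
    """Stand-in environment with deterministic toy dynamics."""
    return (state * 31 + action) % 100

def run_curiosity_loop(steps: int = 200) -> int:
    """The agent is rewarded for transitions its own model mispredicts,
    so it seeks novelty; every step is logged as synthetic data."""
    world_model: dict = {}  # learned (state, action) -> next_state
    state, surprises = 0, 0
    for _ in range(steps):
        # Prefer actions whose outcome the model hasn't learned yet.
        unseen = [a for a in range(4) if (state, a) not in world_model]
        action = random.choice(unseen) if unseen else random.randrange(4)
        next_state = env_step(state, action)
        if world_model.get((state, action)) != next_state:
            surprises += 1  # intrinsic "curiosity" reward fired
        world_model[(state, action)] = next_state  # synthetic training data
        state = next_state
    return surprises

print(run_curiosity_loop())  # count of surprising transitions found
```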

Hassabis calls world models his "longest standing passion," essential for AGI as they enable experiential learning that LLMs lack.


Benefits to Gaming
Hassabis explicitly plans to "reapply" these to games for "ultimate game experiences"—transforming static titles into living, infinite simulations.

Procedural Infinity:
On-demand worlds (e.g., endless No Man's Sky-style universes) with hyper-real physics/destruction, no artist bottlenecks.

Intelligent NPCs/Companions:
SIMA agents as adaptive, conversational allies that plan, team up, evolve quests, replacing scripted bots.

Emergent Replayability:
Self-generating content/scenarios via agent loops; personalized stories, difficulty, genres (e.g., AI societies).

This closes the loop: Games trained early DeepMind AI; now world models supercharge gaming.
 
Do you guys use the DLSS Override - Super Resolution mode from the Nvidia App? Or just select the DLSS mode in-game?
Download the latest Nvidia Profile Inspector


Set these settings in the global profile to force the transformer model in all DLSS games:

(screenshot: Nvidia Profile Inspector global settings)
 
I think it's hilarious that people think AI is some kind of bubble that will pop. Funny shit. It's here to stay and it's only getting better.
 
Do you guys use the DLSS Override - Super Resolution mode from the Nvidia App? Or just select the DLSS mode in-game?
I just use Global - Latest in the Nvidia app; that's usually the default anyway, no need to change it for the most part. Convenient enough for me.
 
Actual research in the field, not hype from companies, can't even give a timeframe. The overwhelming consensus is that it's still decades away, and not because of any settled science; the fact is that most don't even know how, or whether, AGI is possible. Currently the entire idea of AGI is built on a ton of hopium.

Key problems like distribution shift, causal understanding, logical consistency, continual learning, world grounding, and alignment are all fundamental issues yet unresolved.

Things like Yann LeCun's V-JEPA, Google's Titans ideas, or OpenAI's internal work are research programs, not general solutions. They currently work in narrow, controlled settings and still struggle with the issues I mentioned: robustness, causal reasoning, long-horizon planning, and distribution shift. That's why they're research, not deployed AGI systems.

They can hire all the top talent they want; it doesn't solve fundamental unknowns. Physics, for example, hired the best minds for decades before quantum gravity made progress. Intelligence is not yet an engineering-only problem; hell, we still lack agreed-upon theories for general reasoning, abstraction, causal learning, and lifelong adaptation. How can we build AGI when we don't even fully understand the building blocks of intelligence itself?

As for scaling up agents: intelligence is not linearly parallelizable. We already see this in multi-agent systems today, where more agents often means more instability, not amazing speedups. Past a point, throwing more agents at a problem just slows everything down.
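
To put a number on "not linearly parallelizable": Amdahl's law is only a loose analogy here, but adding a per-agent coordination cost shows the shape of the problem. The fractions and costs below are made-up toy values:

```python
def effective_speedup(n_agents: int, parallel_fraction: float = 0.9,
                      coord_cost: float = 0.01) -> float:
    """Amdahl's law plus a per-agent coordination penalty (toy model).

    parallel_fraction: share of the work that can actually be split up.
    coord_cost: overhead each extra agent adds (communication, conflicts).
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_agents
                  + coord_cost * (n_agents - 1))

for n in (1, 10, 100, 1000):
    print(n, round(effective_speedup(n), 2))
# 1 -> 1.0, 10 -> ~3.57, 100 -> ~0.91, 1000 -> ~0.1:
# past some point, more agents means less progress, not more.
```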

The hope for AI designing AI? AI cannot reliably design a better successor if we cannot measure whether the successor is truly better at general intelligence rather than just beating benchmarks! How can the AI possibly know what intelligence is if we cannot even define it?

There is no empirical evidence that today's AI approaches solve the core problems required for AGI, or that AGI is close rather than an open-ended research problem.
 
Just watched the video. Goddamn, it seems like frame gen has improved a lot as well, with much better image quality and frame pacing, so that makes MFG viable instead of just 2x. Good times ahead for my 5070 Ti lol
 
Speaking of frame gen.
After a long time, I used LSFG 2x last week on KCD2, running at 4K Ultra with a 30-35 fps base.
I was surprised to see the big improvement, with acceptable input lag.
Before, the lag was huge.
 
Do you guys use the DLSS Override - Super Resolution mode from the Nvidia App? Or just select the DLSS mode in-game?
I'd like to know too. I don't want to install the Nvidia app; the visuals are just too bloated for my liking.
You can use Inspector and DLSS Swapper to get the latest builds of DLSS in your games.

The NV app is so much better than GeForce Experience, and access to ShadowPlay, Reflex, Filters, and Override is worth the "bloat"... the app literally doesn't even load on startup on my machine.
I only open it when I need to change presets; otherwise it's global latest and I just leave it alone.
DLSS Swapper gets me the DLSS builds.
 
Yeah, the app is essential for RTX HDR among other things. My only complaint is that it sometimes meddles with my NVPI profiles without permission.
 
MFG is the future. I know some may not like it, but there's not much you or we can do about it.

The jury's still well out on MFG, and I've never gotten along with frame gen anyway, but I eventually enabled 2x on Avatar so that I could max out the shiny, and frankly, after about twenty seconds it's completely undetectable (on controller). I've also played about with FG in Arc Raiders and it feels almost 1:1 when you already had the processing headroom to begin with. It's impressive stuff, but not needed in that game. Which is the thing with FG: a game needs to have very low base input latency to begin with. Some games do not (imagine the absolute state GTA6 will be in).
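
The latency side is easy to put rough numbers on: interpolation-based frame gen has to hold back one rendered frame before it can interpolate between two, so as a back-of-envelope (ignoring Reflex, queueing, and driver details) the added input lag is about one base frame time, which is exactly why a high base frame rate matters:

```python
def fg_added_latency_ms(base_fps: float) -> float:
    """Rough estimate: interpolation holds one rendered frame, so the
    added input latency is ~one base frame time. Back-of-envelope only;
    ignores Reflex, render queues, and driver specifics."""
    return 1000.0 / base_fps

for fps in (30, 60, 120):
    print(f"{fps} fps base -> ~{fg_added_latency_ms(fps):.1f} ms added")
# 30 fps -> ~33.3 ms (noticeable), 120 fps -> ~8.3 ms (hard to feel)
```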
 
I want to know if/when they will improve their RTX HDR so that it stops adding a skin-mottling effect. Their latest iteration certainly added more detail, but when it came to skin it added too much. I had to drop from quality level 4 back to 3, which I believe is the lighter-weight model, and which doesn't have the issue.
 
While 4x often struggled in DLSS 4, that's not really the point. We're simply anticipating the improvements in the new 4.5 models, like we saw going from DLSS 3 to 4.

TL;DR: Perhaps 4x is the new 2x in image quality now, or something 🤷‍♂️

It's not the quality. It's the added latency.
 
I think it's hilarious that people think AI is some kind of bubble that will pop. Funny shit. It's here to stay and it's only getting better.
The bubble is not about the quality and usefulness of AI, but rather the investments in companies purporting to leverage AI that are nothing burgers. The dot-com bust didn't happen because the Internet was a bad idea, but because companies were overleveraged while promising the world. Looking into the financials of the various layers of AI companies today, something smells. A lot of borrowing from Peter to pay Paul.
 