AMD FSR Redstone uses machine learning to achieve parity with Nvidia DLSS

IbizaPocholo

NeoGAFs Kent Brockman

As part of its Computex 2025 announcements, AMD has given gamers a sneak peek at the company's major update for its FidelityFX Super Resolution (FSR) technology. Dubbed FSR Redstone, the upcoming installment will bring many new features to match rival Nvidia's Deep Learning Super Sampling (DLSS).

AMD is already planning ahead and preparing FSR Redstone as the next substantial upgrade for FSR.

Although AMD did not provide specific details, the chipmaker emphasized three features to be included in FSR Redstone: neural radiance caching, machine learning ray regeneration, and machine learning frame generation. Some of these features might sound familiar, as they are already part of the Nvidia DLSS suite.

AMD states that neural radiance caching effectively learns how light reflects within a scene. The objective is to predict and store indirect lighting data in a cache, which can then be reused instead of tracing heaps of additional rays. Logically, this helps accelerate path tracing.
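For a rough sense of the idea (a minimal sketch, not AMD's actual implementation), a radiance cache can be pictured as a tiny neural network that is queried with a hit point's position and outgoing direction and returns an estimate of the indirect light there, letting the path tracer end a path early instead of tracing more bounces:

import numpy as np

# Minimal sketch of the neural radiance cache idea (not AMD's implementation):
# a tiny MLP maps a ray hit (position + direction) to cached indirect radiance.
# Weights are random here; in practice they would be trained on live rendering.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((6, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.standard_normal((32, 3)) * 0.1, np.zeros(3)

def query_radiance_cache(position, direction):
    """Return an RGB estimate of indirect light for a surface hit."""
    x = np.concatenate([position, direction])   # 6-D input feature
    h = np.maximum(x @ W1 + b1, 0.0)            # hidden layer (ReLU)
    return np.maximum(h @ W2 + b2, 0.0)         # non-negative radiance

def shade_hit(position, direction, direct_light):
    # One traced bounce of direct light plus the cached indirect term,
    # instead of tracing many extra rays for the indirect contribution.
    return direct_light + query_radiance_cache(position, direction)

print(shade_hit(np.array([0.2, 1.0, -0.5]),
                np.array([0.0, 1.0, 0.0]),
                np.array([0.8, 0.7, 0.6])))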

Ray regeneration is equivalent to Nvidia's ray reconstruction. This feature leverages a trained neural network to regenerate pixels that couldn't be accurately traced. Thanks to machine learning, it can predict and filter grainy noise in real time.
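As a very loose illustration of what such a pass replaces, here is a hand-written, fixed filter that averages noisy ray-traced pixels, guided by G-buffer normals. This is only a sketch of the kind of heuristic a trained network supersedes, not the shipping feature:

import numpy as np

# Hand-tuned denoiser: average noisy ray-traced samples, weighting neighbours
# by how similar their surface normals are. A trained network learns weights
# like these (and far better ones) instead of using a fixed formula.
def denoise(noisy, normals, radius=2, sigma=0.2):
    h, w, _ = noisy.shape
    out = np.zeros_like(noisy)
    for y in range(h):
        for x in range(w):
            acc, wsum = np.zeros(3), 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    # Down-weight neighbours whose surface orientation differs.
                    d = normals[y, x] - normals[ny, nx]
                    wgt = np.exp(-np.dot(d, d) / (2 * sigma ** 2))
                    acc += wgt * noisy[ny, nx]
                    wsum += wgt
            out[y, x] = acc / wsum
    return out

rng = np.random.default_rng(1)
noisy = rng.random((8, 8, 3))                          # grainy 1-sample-per-pixel image
normals = np.tile(np.array([0.0, 0.0, 1.0]), (8, 8, 1))
print(denoise(noisy, normals).shape)                   # (8, 8, 3)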
 
It's already pretty hard to tell the difference between their quality settings. It's time to compete on how far those internal resolutions can be pushed down.
 
Competition is good. Hopefully they can achieve parity with DLSS4. By the time of my next GPU purchase they may become a contender.
 
Parity? I'll believe it when I see it. AMD's got a lot of work to do, but I hope they and Intel progress far enough in the GPU space to bring some proper competition into the arena.
 
Umm, did an AI write the article?

Machine learning is literally how all the AI upscaling methods work
I think the point they're trying to get across - perhaps poorly - is that up to FSR3, AMD did not use machine learning. It's why they were able to have FSR run on pretty much anything from Turing and up. With FSR4, they started to use ML, which is why it's not available on all the same cards FSR3 is.
 
Is FSR4 that bad? I can understand FSR2, but with FSR4 the difference from DLSS seems to be getting really subtle.
The issue with FSR4 is it being exclusive to a recent GPU only a fraction of AMD owners have and being exclusive to a select number of games. Meanwhile DLSS is supported across hundreds of games (most high-end games in recent years) and supports Nvidia GPUs from five years ago. And despite that, FSR4 is still inferior to the DLSS that 2xxx series Nvidia cards can run. This is a decisive defeat for AMD and one that may cost them the entire dedicated GPU segment. Too late, too little, too expensive.
 
The issue with FSR4 is it being exclusive to a recent GPU only a fraction of AMD owners have and being exclusive to a select number of games. Meanwhile DLSS is supported across hundreds of games (most high-end games in recent years) and supports Nvidia GPUs from five years ago. And despite that, FSR4 is still inferior to the DLSS that 2xxx series Nvidia cards can run. This is a decisive defeat for AMD and one that may cost them the entire dedicated GPU segment. Too late, too little, too expensive.

That is only true in part.
The list of FSR4 games is constantly increasing with each new driver release. And if we consider Optiscaler, then the list increases into the hundreds of games.
So although Nvidia still has an advantage there, it's not as big as you think it is.

Yes, it's a major problem that FSR4 is only available on the 9000 series. But according to people in AMD's Vanguard program, AMD is porting FSR4 to RDNA3.
But we also have to consider that Nvidia did the same thing with Frame Generation, leaving out the RTX 2000 and 3000 GPUs. And it did it again with Multi Frame Generation, leaving out the RTX 2000, 3000 and even the 4000 series.
Meanwhile AMD allowed Frame Generation to be used on all GPUs, be it AMD, Intel or Nvidia, from many older generations.
So in this situation, AMD and Nvidia have similar problems.

FSR4 is very close to DLSS4; in a few aspects it matches DLSS4, and with disocclusion artifacts it's better than DLSS4. But you are right that overall, DLSS is still the best upscaler on the market.
 
Isn't FSR4 just that? And DLSS2? And PSSR?
These are using machine learning and motion vectors for deeper integration with the game, unlike FSR2
 
Isn't FSR4 just that? And DLSS2? And PSSR?
These are using machine learning and motion vectors for deeper integration with the game, unlike FSR2

All temporal upscalers use motion vectors, color buffers and depth buffers for upscaling.
The AI upscalers just use an AI pass to improve and clean up the image.
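As a minimal sketch of that loop (illustrative only, not any vendor's code), the temporal part reprojects the previous frame's history using motion vectors and blends it with the new samples; the "AI pass" in DLSS/FSR4 effectively learns this blending instead of relying on fixed heuristics:

import numpy as np

# Temporal accumulation sketch: reproject last frame's history with motion
# vectors, then blend with the current frame. FSR2-style upscalers use
# hand-tuned heuristics for this blend; ML upscalers learn it instead.
def temporal_accumulate(current, history, motion, alpha=0.1):
    h, w, _ = current.shape
    out = np.empty_like(current)
    for y in range(h):
        for x in range(w):
            # Motion vector points to where this pixel was last frame.
            px = int(np.clip(x - motion[y, x, 0], 0, w - 1))
            py = int(np.clip(y - motion[y, x, 1], 0, h - 1))
            reprojected = history[py, px]
            # Exponential blend: mostly history, a little new sample.
            out[y, x] = alpha * current[y, x] + (1 - alpha) * reprojected
    return out

rng = np.random.default_rng(2)
cur = rng.random((4, 4, 3))
hist = rng.random((4, 4, 3))
mv = np.zeros((4, 4, 2))                            # zero motion: static camera
print(temporal_accumulate(cur, hist, mv).shape)     # (4, 4, 3)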
 
AMD graphics - the eternal follower

Unlike their CPU business, where they actually innovate

AMD really are STILL two separate companies: AMD and ATI. They just rebranded ATI and are still doing the same shit
 
The issue with FSR4 is it being exclusive to a recent GPU only a fraction of AMD owners have and being exclusive to a select number of games. Meanwhile DLSS is supported across hundreds of games (most high-end games in recent years) and supports Nvidia GPUs from five years ago. And despite that, FSR4 is still inferior to the DLSS that 2xxx series Nvidia cards can run. This is a decisive defeat for AMD and one that may cost them the entire dedicated GPU segment. Too late, too little, too expensive.
Eh, I think that's probably a little too alarmist. Like winjer said, Nvidia does the same thing. RTX 2000 and 3000 only support some DLSS4 features, RTX 4000 supports more, and RTX 5000 supports all.

I think strategically they made a sound decision. AMD was never going to get from zero to FSR4 without going through the iterative process it went through. Obviously they started later than Nvidia, but to get here in this short amount of time is impressive, and if Redstone can get them closer to whatever the latest DLSS is, that's most of the battle won. Everything else is marketing and cost control.

TL;DR: Radeon will be fine.
 
Good stuff. Nvidia is definitely paying attention...

sweating key and peele GIF
 
This is all great news, but AMD needs to do something for the RDNA 3+ crowd. This shit is needed THE MOST on lower power devices, and as great as it is that the 9060 family will be able to use it, the best case scenario would be for all the APUs they have on the market.

I'd be OK with it if any RDNA4 APU were coming out, but even the Z2 Extreme family will be RDNA3.5, and the fact that it will take two years to get anything portable able to use FSR4 is ridiculous.
 
Could they implement this on the base PS5 or no? I have a 3080 PC so no real reason to use AMD's upscaler there until they can beat NVIDIA's, but it'd be great to get gains on a five-year-old console.
 
Could they implement this on the base PS5 or no? I have a 3080 PC so no real reason to use AMD's upscaler there until they can beat NVIDIA's, but it'd be great to get gains on a five-year-old console.
No, it lacks the hardware to do it. I think that's why people are so salty in this thread even though AMD just delivered a massive upgrade with FSR4.
 
This is all great news, but AMD needs to do something for the RDNA 3+ crowd. This shit is needed THE MOST on lower power devices, and as great as it is that the 9060 family will be able to use it, the best case scenario would be for all the APUs they have on the market.

I'd be OK with it if any RDNA4 APU were coming out, but even the Z2 Extreme family will be RDNA3.5, and the fact that it will take two years to get anything portable able to use FSR4 is ridiculous.

I think it's safe to assume that RDNA3 and earlier will be left at the FSR 3.1 quality level. They made FSR4 and all that FG and RR tech with RDNA4 and UDNA in mind.

Just go straight to the "acceptance" stage.
 
I just wanna say that I LOVE that it feels like AMD is back in the graphics card game.

9070 XT is such a beast of a card and I have high hopes for 9060 XT
 
Not the point. They were promising to be extremely close to DLSS, and that has never happened. FSR4 is an improvement, yes, but I don't want an improvement, I want it to be on par with, or better than, DLSS.
They are extremely close or on par in upscaling - which is what everyone meant by DLSS until recently. AMD never had good enough ray tracing performance for anyone to even care about ray reconstruction stuff until just now.
 
Could they implement this on the base PS5 or no? I have a 3080 PC so no real reason to use AMD's upscaler there until they can beat NVIDIA's, but it'd be great to get gains on a five-year-old console.

No. The base PS5 doesn't even support DP4A, much less the AI accelerators that RDNA4 has.
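For anyone wondering what DP4A actually is: it's a GPU instruction that computes a dot product of four packed 8-bit integers and accumulates the result into a 32-bit integer, which is what makes low-precision neural-network inference cheap. Written out in plain Python purely as an illustration:

import numpy as np

# What DP4A computes: a dot product of four packed int8 values accumulated
# into a 32-bit integer, done in a single instruction. Hardware without it
# (the poster above says the base PS5 lacks it) has to emulate this, making
# int8 ML inference much slower.
def dp4a(a4, b4, acc):
    a = np.asarray(a4, dtype=np.int32)   # four int8 values, widened
    b = np.asarray(b4, dtype=np.int32)
    return acc + int(np.dot(a, b))       # 32-bit accumulate

print(dp4a([1, -2, 3, 4], [5, 6, -7, 8], acc=100))   # 100 + (5 - 12 - 21 + 32) = 104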
 
Not the point. They were promising to be extremely close to DLSS, and that has never happened. FSR4 is an improvement, yes, but I don't want an improvement, I want it to be on par with, or better than, DLSS.

The only way to compete with DLSS is to dedicate part of the GPU to machine learning, as that is what DLSS uses.

Perhaps AMD thought they could compete without ML, but some of this was on you, the consumer, to understand how realistic such a claim was.

Nvidia have a head start in that more of their products support machine learning supersampling, though they don't have any real presence in the consumer APU/SoC space other than the Switch (2).
 
So are these algorithms doing something similar to this? Or are they just the mathematical magic described in the video dressed up as AI?


So basically, yes. The kernels are what get derived using machine learning. For various kinds of signal processing, a model will "learn" the kernels that best suit the job you are trying to solve, and funnily enough, most image classifiers will end up generating edge detectors and the like over time. You saw in that video that certain kernels are used for edge detection, blurring etc. Well, when training a model that can categorise images, the model is built out of kernels like these (not exactly as shown, but you would need to understand neural-net activations and deep neural nets to see how it works).

The model is made up of different kernels that can "detect" features either on their own or in conjunction with each other, and when applied in a neural net they can activate a neuron which corresponds to a given category you are looking to find (or not). You will have feature activations based on detecting round shapes within an image, along with fangs and whiskers, to identify a cat, for example.

These can also be run "backwards", so to speak, to apply more detail where the model detects a lower-resolution or barely activating version of a feature (deconvolution). The first publicly known version of this was Google's Deep Dream, which, when given random noise and, say, a "cat" model, would hallucinate features, then "extract" them from the image, over and over again. Kind of like seeing shapes in clouds + LSD :)
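As a concrete example of the kind of fixed kernel the video shows, here is a Sobel-style edge detector applied by hand; a convolutional network learns stacks of kernels like this from data instead of having them written down:

import numpy as np

# A hand-written vertical-edge detector kernel (Sobel-style). Trained CNNs
# end up learning kernels much like this one in their early layers.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    """Slide the kernel over the image and sum the element-wise products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical edge: dark on the left, bright on the right.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
print(convolve2d(img, sobel_x))   # strong response around the edge columns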
 