AMD FidelityFX Super Resolution (FSR) review roundup

Or maybe you are dumb; those pics are from Nvidia's own website.
And the picture shows it being compared to a 16K image.


This is the second time you've tried to claim this BS. It's totally false.

They do use 16K images, but only in order to train a general model.

"The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalised network that works across games. This means faster game integrations, and ultimately more DLSS games."

 
They do use 16K images, but only in order to train a general model.

"The original DLSS required training the AI network for each new game. DLSS 2.0 trains using non-game-specific content, delivering a generalised network that works across games. This means faster game integrations, and ultimately more DLSS games."

I don't know if you're agreeing or disagreeing with me here, but just in case you're disagreeing: what they mean by that is that they use the non-game-specific content in the games it trains on, content that can be reused across games, such as common languages, fence patterns, etc. It still trains with games specifically every time, but it uses what it learned from previous games that is non-game-specific to make integrations faster.

Or it would conflict with this:
During the training process, the output image is compared to an offline rendered, ultra-high quality 16K reference image, and the difference is communicated back into the network so that it can continue to learn and improve its results. This process is repeated tens of thousands of times on the supercomputer until the network reliably outputs high quality, high resolution images.
 
Ultra Quality FSR is the best way to save GPU power after DLSS.
I hope they make it optional so all the haters will activate it (obviously) of their own free will... and then they won't be able to say anything anymore.
 
Very impressive for a non-temporal, non-ML-based approach.

Really good performance gains, and I love how it's platform agnostic.
 
Cyberpunk DLSS 2.2 mod
"And the result is better framerate, crisper visuals, and the elimination of the 'ghosting' artifacts frequently seen around small objects and lights while using DLSS."
I've not tested it, but nice if true.
 
I don't know if you're agreeing or disagreeing with me here, but just in case you're disagreeing: what they mean by that is that they use the non-game-specific content in the games it trains on, content that can be reused across games, such as common languages, fence patterns, etc. It still trains with games specifically every time, but it uses what it learned from previous games that is non-game-specific to make integrations faster.

Or it would conflict with this:
I'm not clear yet either since I am not sure of your position.

The point is that DLSS 2.0 doesn't use machine learning to generate a higher resolution version of a given static image (frame), taken in isolation. The problem with that approach is that it tends to create ("hallucinate") information not present in the native output. Rather it uses machine learning to find the best way of combining samples taken over multiple frames, with temporal upscaling. So you are still using temporal upscaling to generate the higher resolution image, but in the most intelligent way possible.

See here from 17:30 onwards.

 
Another review with similar conclusions to Digital Foundry's.


"In Godfall, FSR Ultra Quality is easy to recommend. There's a slight loss in detail, but in motion, you likely won't notice the difference. Having the UI overlay applied at the target resolution (4K in the screenshots) also helps keep the text looking sharp."

Ultra Quality FSR seems very, very good, and it's a great achievement considering it doesn't need dedicated hardware such as Tensor Cores.
 
"In Godfall, FSR Ultra Quality is easy to recommend. There's a slight loss in detail, but in motion, you likely won't notice the difference. Having the UI overlay applied at the target resolution (4K in the screenshots) also helps keep the text looking sharp."

Ultra Quality FSR seems very, very good, and it's a great achievement considering it doesn't need dedicated hardware such as Tensor Cores.

I wonder if they can choose to do even better "settings" than FSR UQ (1662p), like 1800p or more?
Then I can see this becoming a case of "darn, we didn't quite get a locked 60 fps, but with FSR at our own setting above UQ it locks to 60 fps."
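For reference, FSR 1.0's presets are just fixed per-axis scale factors (1.3x Ultra Quality, 1.5x Quality, 1.7x Balanced, 2.0x Performance), so a custom factor between 1.0x and 1.3x would be exactly that kind of above-UQ setting. A quick sketch of the arithmetic (my own toy snippet, not AMD code):

```cpp
#include <cmath>
#include <cstdio>

// Render resolutions implied by FSR 1.0's per-axis scale factors at a 4K target.
// (Ultra Quality 1.3x, Quality 1.5x, Balanced 1.7x, Performance 2.0x; a custom
// factor between 1.0 and 1.3 would be the hypothetical "above UQ" setting.)
int main()
{
    const double factors[] = { 1.3, 1.5, 1.7, 2.0 };
    const char*  names[]   = { "Ultra Quality", "Quality", "Balanced", "Performance" };
    const int    targetW   = 3840, targetH = 2160;

    for (int i = 0; i < 4; ++i) {
        const int w = static_cast<int>(std::round(targetW / factors[i]));
        const int h = static_cast<int>(std::round(targetH / factors[i]));
        std::printf("%-14s -> %dx%d\n", names[i], w, h); // Ultra Quality -> 2954x1662
    }
    return 0;
}
```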
 
I wonder if they can choose to do even better "settings" than FSR UQ (1662p), like 1800p or more?
Then I can see this becoming a case of "darn, we didn't quite get a locked 60 fps, but with FSR at our own setting above UQ it locks to 60 fps."
I'm sure that devs will have access to that
 
Do you realize that DLSS operates at lower than native resolution? Of course those effects will be lower resolution...
However, maybe in the future DLSS will process half-res/quarter-res textures separately and those "artifacts" will be gone.
I have an Nvidia RTX card, and I use DLSS when it's available to bring games to 4K without traditional upscaling artefacts. But it's a proprietary solution; it should go the way of G-Sync if they can't make it a standard (and make a proper effort so that it works on other vendors' hardware)... Otherwise they are just splitting a platform that's not theirs to begin with.

Now, DLSS is not the magic upscaler DF is trying to sell us, and given how it handles some specific cases, I would say their AI tech is suboptimal in places.
 
Use this tool to compare and judge for yourselves.


Good tool, but the viewer itself kind of messes up the quality of the shots.
 
I'm not clear yet either since I am not sure of your position.

The point is that DLSS 2.0 doesn't use machine learning to generate a higher resolution version of a given static image (frame), taken in isolation. The problem with that approach is that it tends to create ("hallucinate") information not present in the native output. Rather it uses machine learning to find the best way of combining samples taken over multiple frames, with temporal upscaling. So you are still using temporal upscaling to generate the higher resolution image, but in the most intelligent way possible.

See here from 17:30 onwards.


Maybe this tidbit will clear things up. That video is talking about what happens at home with an Nvidia card with Tensor Cores, not the training process it uses. I honestly don't know why they didn't mention it. You can't get those results with just multiple low-res frames as reference.

Pixel-by-pixel: DLSS 2.0 Architecture

Nvidia's DLSS 2.0 architecture captures a low resolution (current frame) and the high resolution previous frame to decide on a pixel-by-pixel basis how to produce a higher quality 'now' frame. But it goes deeper than that!

During the training process (part of the DLSS 2.0 AI process), the output frame (image) is compared to an offline-rendered, super-high-quality 16K reference image - that's right, a whopping 16K! The difference between the two is then computed and fed back to the network in order for it to continue to learn and improve its results. It's quite amazing how fast this all happens, in that the process is rapidly repeated tens of thousands of times on the supercomputer until the network reliably outputs high-quality, high-resolution images. Okay, so what happens next?

Once the network is trained, NGX sends the AI model to your RTX-enabled PC via Game Ready Drivers (GRD) and over-the-air (OTA) updates. With Turing's Tensor Cores having the ability to deliver up to 110 teraflops of dedicated horsepower, the DLSS network can be run simultaneously in real time, even with the most demanding 3D game. We need to thank those Tensor Cores and Nvidia's Turing for that.

Nvidia arduously continues to work on DLSS and additional, newer, DLSS features and integrations into popular games. So the future looks good for gamers:




During the training process, the output image is compared to an offline rendered, ultra-high quality 16K reference image, and the difference is communicated back into the network so that it can continue to learn and improve its results. This process is repeated tens of thousands of times on the supercomputer until the network reliably outputs high quality, high-resolution images.

Once the network is trained, NGX delivers the AI model to your GeForce RTX PC or laptop via Game Ready Drivers and OTA updates. With Turing's Tensor Cores delivering up to 110 teraflops of dedicated AI horsepower, the DLSS network can be run in real-time simultaneously with an intensive 3D game. This simply wasn't possible before Turing and Tensor Cores.





The training process for the DLSS 2.0 network also includes comparing the image output to an "ultra-high-quality" reference image rendered offline in 16K resolution (15360 x 8640). Differences between the images are sent to the AI network for learning and improvements. Nvidia's supercomputer repeatedly runs this process, on potentially tens of thousands or even millions of reference images over time, yielding a trained AI network that can reliably produce images with satisfactory quality and resolution.


With both DLSS and DLSS 2.0, after the AI network's training for the new game is complete, the NGX supercomputer sends the AI models to the Nvidia RTX graphics card through GeForce Game Ready drivers. From there, your GPU can use its Tensor Cores' AI power to run the DLSS 2.0 in real-time alongside the supported game.
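
To make the "difference is communicated back into the network" part concrete, here's a rough sketch of what that training signal boils down to (my own toy code, assuming the network output and the 16K reference have been resampled to a common size; this is not Nvidia's implementation):

```cpp
#include <cstddef>
#include <vector>

// Toy training-loss sketch: the error between the network's upscaled output and
// the offline-rendered 16K reference is what gets fed back so the network can
// adjust its weights. Both images are assumed resampled to the same size.
double reconstruction_loss(const std::vector<float>& networkOutput,
                           const std::vector<float>& reference)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < networkOutput.size(); ++i) {
        const double diff = static_cast<double>(networkOutput[i]) - reference[i];
        sum += diff * diff; // squared per-pixel difference
    }
    // The mean error is what the optimiser tries to drive down, pass after pass,
    // "tens of thousands of times" as the quoted text puts it.
    return sum / static_cast<double>(networkOutput.size());
}
```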

 
Yeah I think it's a good option to have for consoles, especially the UQ preset. Tested it a bit with Anno and it's pretty hard to spot the degradation at TV viewing distances.

Could you share direct-feed screenshots of native vs UQ from some supported games? It'll be much better than those from YouTube.
 
Did anyone just compare it to the native lower res? DF kept comparing it to 4k, but my question is how much better does it look than with no upscaling? Expect no magic with one frame of data to work with. Not impressed.
 
Maybe this tidbit will clear things up. That video is talking about what happens at home with an Nvidia card with Tensor Cores, not the training process it uses. I honestly don't know why they didn't mention it. You can't get those results with just multiple low-res frames as reference.
I am not saying they are not using 16K ground truth images to train their neural network. I am saying what they are training are weights used in the assessment of TAA samples. DLSS 2.0 is not giving the game a high-resolution image. It's giving the game a way of intelligently rejecting samples used in the temporal upscaling process. The effect of that is temporal reconstruction with much higher detail while minimising ghosting.
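A minimal sketch of that idea (my own simplification, not Nvidia's code): the network doesn't invent pixels, it effectively decides per pixel how much of the reprojected history to trust versus how much to reject, and the upscaled frame is just that blend:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Conceptual sketch: the upscaled frame is a per-pixel blend of the motion-
// reprojected history and the current (naively upsampled) low-res frame, with
// the blend weight supplied by the trained network.
void blend_history(const std::vector<float>& currentUpsampled,   // current frame, naively upsampled
                   const std::vector<float>& reprojectedHistory, // previous output, motion-reprojected
                   const std::vector<float>& networkWeights,     // per-pixel confidence in the history, 0..1
                   std::vector<float>&       output)
{
    output.resize(currentUpsampled.size());
    for (std::size_t i = 0; i < output.size(); ++i) {
        const float w = std::clamp(networkWeights[i], 0.0f, 1.0f);
        // w ~ 1: keep accumulated detail; w ~ 0: reject the history (e.g. on
        // disocclusion) to avoid ghosting.
        output[i] = w * reprojectedHistory[i] + (1.0f - w) * currentUpsampled[i];
    }
}
```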
 
Did anyone just compare it to the native lower res? DF kept comparing it to 4k, but my question is how much better does it look than with no upscaling? Expect no magic with one frame of data to work with. Not impressed.
Hardware Unboxed compared 4K FSR Performance to Native 1080p and FSR looks better.
 
Hardware Unboxed compared 4K FSR Performance to Native 1080p and FSR looks better.
Of course it does; it's also more expensive on the hardware…
That is one of those useless comparisons.

It should be compared with native 1440p or 4K.
 
Could you share direct-feed screenshots of native vs UQ from some supported games? It'll be much better than those from YouTube.

Here's a 300% zoom on Anno. Left is native, right is 4K FSR UQ. Nearly impossible to tell apart at regular zoom, sitting 8 ft away from a 65" panel.

anno-fsr-zoom.png
 
Here's a 300% zoom on Anno. Left is native, right is 4K FSR UQ. Nearly impossible to tell apart at regular zoom, sitting 8 ft away from a 65" panel.

anno-fsr-zoom.png

Thank you very much for sharing! It's really great; it doesn't seem to be computationally expensive, and it's also GPU-agnostic, which is the best part. It might improve in the future and perhaps use a bit more GPU power for even better results.
 
I trust DF more than the other outlets, that's for sure. They always tell it like it is; no fanboy is going to impact their analysis.
 
I just tried this for myself and the Performance setting is definitely superior to a straight upscale from 1080p to 4K. I don't see myself using this much, but it's not a poor solution at all for those with older GPUs.

The ultra quality setting also isn't too bad. I'm not interested in DLSS comparisons but it does seem better than the usual 'resolution scale' options that we often see in games. I won't comment on other games but it looks better than I thought it would.
 
FSR comes to PS5





Added AMD FSR 1.0

  • PS5 - Enabled AMD FSR 1.0 + TAAU Hybrid Upscaling by default
  • PC - Added upscaling options including AMD FSR 1.0 and TAAU
 
1626518299-01230397-bf0d-449c-948f-f123782f3a13.jpeg

1626518284-e384b376-16a4-435b-bae0-877df62d2aec.jpeg

1626518293-ff574c58-b212-4981-ab11-4e1c3f52763b.jpeg


In this game: DLSS > Native > FSR


All of those look the same to me.

I made a comparison in some thread a while ago in Anno, and frankly Ultra Quality might as well be native; I couldn't tell the difference, and that's on a monitor at 1080p, where it's hardest for FSR to work. 4K and TVs will only make the difference harder to spot.

And after watching that video in your post + PS5 support



Honestly, DLSS suddenly becomes a lot less interesting.
 
All of those look the same to me.

I made a comparison in some thread a while ago in Anno, and frankly Ultra Quality might as well be native; I couldn't tell the difference, and that's on a monitor at 1080p, where it's hardest for FSR to work. 4K and TVs will only make the difference harder to spot.

And after watching that video in your post + PS5 support



Honestly, DLSS suddenly becomes a lot less interesting.


Are you blind, or watching on a phone?
 
There's no interest (for consoles) in using FSR at 1080p (where native would be sub-1080p).

4K FSR, or at worst 1440p FSR, will be used... but even at 1440p it still hurts a bit (judging by the comparison above).

I wonder if checkerboarding to 3840x2160 is actually more accurate, in fact...
 
The source code is up on GitHub for anyone curious.

Important bits:

The core functions are EASU and RCAS:
[EASU] Edge Adaptive Spatial Upsampling ....... 1x to 4x area range spatial scaling, clamped adaptive elliptical filter.
[RCAS] Robust Contrast Adaptive Sharpening .... A non-scaling variation on CAS.
RCAS needs to be applied after EASU as a separate pass.

Optional utility functions are:
[LFGA] Linear Film Grain Applicator ........... Tool to apply film grain after scaling.
[SRTM] Simple Reversible Tone-Mapper .......... Linear HDR {0 to FP16_MAX} to {0 to 1} and back.
[TEPD] Temporal Energy Preserving Dither ...... Temporally energy preserving dithered {0 to 1} linear to gamma 2.0 conversion.
See each individual sub-section for inline documentation.
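
The practical takeaway from that layout is the ordering: one EASU pass from render resolution to target resolution, then one RCAS pass on the result. A host-side sketch of the flow (the Texture type and Dispatch* helpers below are hypothetical stand-ins for whatever your renderer uses to run a full-screen pass; the real header also exposes FsrEasuCon/FsrRcasCon helpers to fill the shader constants):

```cpp
// Hypothetical host-side sketch of the two-pass FSR 1.0 flow described above.
// Texture and the Dispatch* helpers are placeholders, not part of the FSR API.
struct Texture { /* opaque GPU texture handle in your engine */ };

static void DispatchEASU(const Texture& in, Texture& out) { (void)in; (void)out; /* run the EASU shader here */ }
static void DispatchRCAS(const Texture& in, Texture& out) { (void)in; (void)out; /* run the RCAS shader here */ }

void UpscaleWithFSR(const Texture& lowResColor, // anti-aliased, tone-mapped input at render resolution
                    Texture&       upscaled,    // EASU output at target resolution
                    Texture&       finalOutput) // sharpened result shown on screen
{
    DispatchEASU(lowResColor, upscaled);  // 1. edge-adaptive spatial upscale
    DispatchRCAS(upscaled, finalOutput);  // 2. contrast-adaptive sharpen, separate pass after EASU
    // Optional passes (LFGA film grain, SRTM tone-map round trip, TEPD dither)
    // would wrap around these if the game needs them.
}
```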
 
FSR comes to PS5





Added AMD FSR 1.0

  • PS5 - Enabled AMD FSR 1.0 + TAAU Hybrid Upscaling by default
  • PC - Added upscaling options including AMD FSR 1.0 and TAAU

Good to see PS5 supports FSR. Not really surprising since it's supported by pretty much everything.
 
I don't know how legit this is, but apparently this guy managed to use FSR to get an effective visual upscale from 720p to 1080p in The Witcher 3.



If this is true, AMD could take input from the modding community and use it to improve FSR 2.0.
 
The more I see of FSR, the more it reminds me of CBR. Same results, same problems with sub-pixel details. What exactly changes compared to CBR?
 
Can someone more knowledgeable than me tell me if I need FSR when I only have a 1080p monitor anyway? Or what would be the best option for me? I have an RX 5700 XT if that helps. I only play at 1080p. Thanks in advance.
 
Can someone more knowledgeable than me tell me if I need FSR when I only have a 1080p monitor anyway? Or what would be the best option for me? I have an RX 5700 XT if that helps. I only play at 1080p. Thanks in advance.

You'd probably only need to use FSR in the near future if you turned raytracing on, but that problem is solved for you because that card doesn't support raytracing.
So nope, you don't.


In some games, you could do a little trick to get more FPS than your monitor's refresh rate and better quality: activate Virtual Super Resolution in the driver so the game sees your monitor as a 4K one, then play at 4K with FSR set to Quality. You'd get fantastic anti-aliasing and better texture quality from the edge-detection upscaling, as if it were 4x supersampling AA, but at the cost of running the game at 1440p performance.
 