Hardware Unboxed - FSR4 is even better at 4K

winjer

Member


List of supported games

Mod FSR4 into games with OptiScaler

 
Cerny didn't make FSR4.
It was made by AMD, and it will be adopted by Sony on the PS5 Pro in a future PSSR update.
This is what PSSR should've been. Cerny didn't need to go off and do his own thing, as admirable as it sounds, but the results haven't been there.
 
Dat Cerny magic!
 
Cerny didn't make FSR4.
It was made by AMD, and it will be adopted by Sony on the PS5 Pro in a future PSSR update.
Trolling again, let him be. 😂😂😂

Eurogamer article

"The neural network (and training recipe) in FSR 4's upscaler are the first results of the Amethyst collaboration," Cerny told us. "And results are excellent, it's a more advanced approach that can exceed the crispness of PSSR. I'm very proud of the work of the joint team!"
 
On topic, it sounds like FSR4 quality is somewhere between DLSS3 and DLSS4.

I've not been using DLSS4 because of the performance hit and several issues with artifacts, and I'm wondering if I could use FSR4 instead of DLSS3 going forward.
 
Moving the goalposts? You accused me of trolling, man, and when I proved you wrong you move the goalposts? Ok


So Cerny is wrong and you are right somehow? Ok
Stop licking that butthole, man… stop polluting the PC thread.
You don't see me in every PS5 thread saying that those games don't have path tracing on those low-cost devices
 
So Cerny is wrong and you are right somehow? Ok

Cerny is correct. But Amethyst started at a later date.
Just consider that the Radeon 9070 and FSR4 were released over a month ago and there is no FSR4 on the Pro.
The collaboration probably started when Cerny gave that interview. So it will take some time until we see the results of such a collaboration.
And it will be beneficial for AMD and Sony.
 
Seems FSR4 is about as taxing as DLSS4, maybe even a bit more.
That's not really a statement the data we have supports.
What we see is that, relative to base performance, FSR4 on supported AMD hardware is about as taxing as DLSS4 on the latest Nvidia hardware, at what is perceptually a similar level of quality (no outlet has applied quantifiable quality tests to either of these to date). Basically, we're not comparing FSR and DLSS at all; we're just comparing the GPUs.

For your statement to hold, we'd need the measured performance of the algorithms themselves (Nvidia has shared those numbers for DLSS4; AMD has not), divided by the tensor performance of a specific hardware target, to get a cost that could be compared across algorithms. This wouldn't control for architecture-specific gotchas, but it's the closest we can get given the exclusive nature of these two.
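As a rough sketch of that normalization (the function name and units here are my own illustration; neither vendor has published per-pass costs for both algorithms):

```python
# Sketch of the normalization described above: scale the measured per-frame
# upscaler cost (ms) by the GPU's peak ML throughput (TOPS) to estimate a
# hardware-independent "tensor ops spent per frame" figure.
def normalized_cost(pass_cost_ms: float, peak_tops: float) -> float:
    """Approximate tensor ops consumed by one upscaling pass.

    pass_cost_ms: time the upscaler adds to a frame, in milliseconds.
    peak_tops:    peak ML throughput of the GPU, in tera-ops/second.
    """
    return (pass_cost_ms / 1000.0) * (peak_tops * 1e12)  # seconds * ops/second
```

Whichever algorithm yields the lower number is the cheaper one in absolute terms, regardless of which GPU happens to be faster overall (architecture-specific gotchas aside, as noted).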
 
Cerny is correct. But Amethyst started at a later date.
Just consider that the Radeon 9070 and FSR4 were released over a month ago and there is no FSR4 on the Pro.
The collaboration probably started when Cerny gave that interview. So it will take some time until we see the results of such a collaboration.
And it will be beneficial for AMD and Sony.

We know for a fact that their collaboration started a while ago. It did NOT start when Cerny did that interview.
 
On topic, it sounds like FSR4 quality is somewhere between DLSS3 and DLSS4.

I've not been using DLSS4 because of the performance hit and several issues with artifacts, and I'm wondering if I could use FSR4 instead of DLSS3 going forward.

Strange, you are getting artifacts with the Transformer model but not with the CNN model.

And the perf hit is high enough that you would rather use DLSS3?


In my own testing, DLSS4 Balanced looks better than DLSS3 Quality and performs about the same, if not better.

*Using an RTX 4080.



P.S.
FSR4 looks most like DLSS3 tier for tier. If your GPU performs better with FSR4, you might as well switch to FSR4 for however many frames you're gaining.
Otherwise, just go a tier down using DLSS4.
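A minimal sketch of that rule of thumb, assuming you've measured your own frame rates in the game at hand (the numbers below are placeholders, not benchmarks):

```python
def pick_upscaler(fps_fsr4_quality: float, fps_dlss4_quality: float) -> str:
    """Toy version of the rule above: FSR4 ~ DLSS3 tier for tier, so take
    FSR4 when it's faster on your GPU, otherwise drop one DLSS4 tier."""
    if fps_fsr4_quality > fps_dlss4_quality:
        return "FSR4 Quality"    # free frames at a comparable output quality
    return "DLSS4 Balanced"      # one tier down; roughly DLSS3 Quality output

print(pick_upscaler(72.0, 66.0))  # placeholder measurements -> "FSR4 Quality"
```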
 
That's not really a statement the data we have supports.
What we see is that, relative to base performance, FSR4 on supported AMD hardware is about as taxing as DLSS4 on the latest Nvidia hardware, at what is perceptually a similar level of quality (no outlet has applied quantifiable quality tests to either of these to date). Basically, we're not comparing FSR and DLSS at all; we're just comparing the GPUs.

For your statement to hold, we'd need the measured performance of the algorithms themselves (Nvidia has shared those numbers for DLSS4; AMD has not), divided by the tensor performance of a specific hardware target, to get a cost that could be compared across algorithms. This wouldn't control for architecture-specific gotchas, but it's the closest we can get given the exclusive nature of these two.
You're correct, but I was really just taking a more practical approach, i.e., we have two GPUs with similar levels of performance, with upscaling of similar quality. I suppose I should have said instead that DLSS4 on a 5070 Ti is more efficient/less demanding than FSR4 on a 9070 XT, at least based on that four-game sample (toy numbers below).
Or the Tensor Cores in RDNA4 are not as powerful, so they take more time to process each frame.
Are they called Tensor Cores? I thought that name was trademarked by Nvidia. TechPowerUp calls them Tensor Cores, but no documentation from AMD calls them that.
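To put made-up numbers on that practical framing, reusing the normalized_cost sketch above (all four figures are invented for illustration; no outlet or vendor has published them):

```python
# Entirely hypothetical per-pass costs and peak throughputs, for illustration.
dlss4_ms, rtx_tops  = 0.9, 1000.0  # assumed DLSS4 pass cost / 5070 Ti-class TOPS
fsr4_ms,  rdna_tops = 1.2,  780.0  # assumed FSR4 pass cost / 9070 XT-class TOPS

print(normalized_cost(dlss4_ms, rtx_tops))   # tensor ops per frame for DLSS4
print(normalized_cost(fsr4_ms,  rdna_tops))  # tensor ops per frame for FSR4
# The lower figure marks the cheaper algorithm, independent of raw GPU speed.
```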
 
If the collaboration had started much sooner, the Pro would already be using the FSR4 code.
Instead, it will only get FSR4 next year.

This article is from December 2024.


I think it's safe to say the collaboration started sometime in 2024.

"With Amethyst, we've started on another long journey and are combining our expertise with two goals in mind," Mark Cerny said.
"The first goal is a more ideal architecture for machine learning. Something capable of generalized processing of neural networks but particularly good at the lightweight CNNs needed for game graphics and something focused around achieving that Holy Grail of fully-fused networks."
Cerny explained that the second goal is to develop "in parallel, a set of high quality CNNs for game graphics" that will help further graphical capability. "Both SIE and AMD will independently have the ability to draw from this collection of network architectures and training strategies, and these components should be key in increasing the richness of game graphics as well as enabling more extensive use of ray tracing and path tracing," he said.
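For context on the "fully-fused networks" Cerny mentions: the idea is to evaluate every layer of a small network in a single kernel so activations stay in on-chip memory instead of round-tripping through VRAM between layers. NumPy can only gesture at this, but here's a schematic sketch (layer sizes are made up):

```python
import numpy as np

def mlp_layered(x, weights):
    """Unfused evaluation: each layer is a separate dispatch, and every
    intermediate activation is written back to memory (the VRAM analogue)."""
    intermediates = []
    for w in weights:
        x = np.maximum(x @ w, 0.0)   # matmul + ReLU
        intermediates.append(x)      # round-trips between dispatches
    return x

def mlp_fused(x, weights):
    """Fused evaluation (schematically): one 'kernel' walks all layers, and
    activations never leave the loop, i.e. they stay on-chip on real HW."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)
    return x

rng = np.random.default_rng(0)
ws = [rng.standard_normal((32, 32)) * 0.1 for _ in range(3)]  # toy sizes
x0 = rng.standard_normal(32)
assert np.allclose(mlp_layered(x0, ws), mlp_fused(x0, ws))  # same math, less traffic
```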
 
This article is from December 2024.


I think it's safe to say the collaboration started sometime in 2024.

But it's not ready on the PS5 Pro, while it's been available on RDNA4 GPUs for over a month.
So it's clear that AMD started work on FSR4 much sooner than Sony.
Chances are Sony saw that FSR4 was superior to PSSR and decided to ditch PSSR and join AMD in developing FSR4 for the Pro as well.
 
If the collaboration had started much sooner, the Pro would already be using the FSR4 code.
Instead, it will only get FSR4 next year.
There are so many coincidences:

RDNA2 brought ray-tracing acceleration, and the first product announced with it was the PS5.
RDNA4 brought hardware-accelerated FSR4 and hardware acceleration for AI, and the first product announced with it was the PS5 Pro.

AMD's biggest leaps coincide with the launches of the PS5 and PS5 Pro.

But sure, Sony has nothing to do with this, much less collaborated with AMD to make it possible.

It is obvious that FSR4 and PSSR are both byproducts of AMD and Sony's collaboration.

Sony used PSSR because on consoles the frame rate is generally fixed and the resolution is variable.
AMD uses FSR4 because on PC the resolution is generally fixed and the frame rate is variable.
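Schematically, those are two different control loops; a minimal sketch with invented constants (the 0.667 scale and the damping are my assumptions, not any vendor's values):

```python
TARGET_MS = 16.7  # frame budget for 60 fps

def drs_scale(prev_scale: float, last_frame_ms: float) -> float:
    """Console-style dynamic resolution: resolution is the control variable,
    nudged each frame so GPU time converges on the frame budget."""
    error = TARGET_MS / last_frame_ms                      # > 1.0 means headroom
    return max(0.5, min(1.0, prev_scale * error ** 0.5))   # damped step, clamped

# PC-style fixed quality mode: the scale is a constant per preset, and the
# frame rate simply lands wherever it lands.
QUALITY_MODE_SCALE = 0.667  # e.g. a "Quality" preset rendering at ~2/3 per axis
```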

AMD's main GPU customers aren't PC gamers; they're Sony and PlayStation gamers. And be happy about that: it means AMD's focus is on people who play video games, unlike Nvidia, which makes GPUs for AI and turns the leftovers into GPUs for PC games.
 
There are so many coincidences:

RDNA2 brought ray-tracing acceleration, and the first product announced with it was the PS5.
RDNA4 brought hardware-accelerated FSR4 and hardware acceleration for AI, and the first product announced with it was the PS5 Pro.

AMD's biggest leaps coincide with the launches of the PS5 and PS5 Pro.

But sure, Sony has nothing to do with this, much less collaborated with AMD to make it possible.

RDNA2's RT was a response to Nvidia's ray tracing, which came out in 2018 with Turing.
RDNA4 has different tensor units from the ones in the PS5 Pro.
And FSR4 is much better than PSSR.
 
RDNA2's RT was a response to Nvidia's ray tracing, which came out in 2018 with Turing.
RDNA4 has different tensor units from the ones in the PS5 Pro.
And FSR4 is much better than PSSR.

AMD's main GPU customers aren't PC gamers; they're Sony and PlayStation gamers. And be happy about that: it means AMD's focus is on people who play video games, unlike Nvidia, which makes GPUs for AI and turns the leftovers into GPUs for PC games.
 
AMD's main GPU customers aren't PC gamers; they're Sony and PlayStation gamers. And be happy about that: it means AMD's focus is on people who play video games, unlike Nvidia, which makes GPUs for AI and turns the leftovers into GPUs for PC games.

Almost every advancement in the GPU market in the past half decade came from Nvidia:
RT cores, Tensor Cores, AI upscaling, Mesh Shaders, Variable Rate Shading, frame generation, ray reconstruction.
Even with Nvidia focusing on AI and servers, they are still the leaders in GPU tech.
What AMD has been doing is catching up. And Sony uses hardware designed by AMD.
 
Are they called Tensor Cores? I thought that name was trademarked by Nvidia. TechPowerUp calls them Tensor Cores, but no documentation from AMD calls them that.
A tensor is a mathematical object, so anyone can use the term.

Nvidia calls them Tensor Cores.
AMD calls them AI Accelerators.
Intel, I believe, calls them Vector Engines.

They effectively do the same thing, but yeah, they use different naming schemes.
 
Are they called Tensor Cores? I thought that name was trademarked by Nvidia. TechPowerUp calls them Tensor Cores, but no documentation from AMD calls them that.

They already call their Tensor Core equivalents Matrix Cores on their CDNA GPUs. What RDNA4/RDNA3/PS5 Pro have isn't a Tensor/Matrix Core but ML ops running on the shader cores. We'll likely see them bring that over with UDNA.
 
Almost every advancement in the GPU market in the past half decade came from Nvidia:
RT cores, Tensor Cores, AI upscaling, Mesh Shaders, Variable Rate Shading, frame generation, ray reconstruction.
I don't think there's any disputing Nvidia's leadership in the GPU space (market share aside, they dominated the news and mindshare for a reason), but I wouldn't attribute a bunch of these individual things to them at all.
Mesh Shaders were a DirectX construct, and GPU vendors were 'compliant' with them (from a hardware perspective; drivers took a while) years ahead of the spec being there. Frame generation (for games) predates AI, and even programmable GPUs, by quite some time, and VRS is a conceit of MSAA hardware that dates back to the early '00s (we literally had games shipping with Tier-0 VRS throughout the '00s and beyond).
Hell, the PS4 Pro 'invented' a variant of its own specifically for VR acceleration (and yes, this was apparently Pro-exclusive; to my knowledge, no AMD GPUs shipped with it later). And I could be wrong, but timeline-wise I remember Intel demoing Tier-2 VRS before anyone else did (the hope, after all, was that it would change the fates of low-end hardware, which didn't materialize, but anyway).

I do think we have Nvidia to thank for AI-accelerated pixel treatment being so widespread (and going beyond reconstruction), and their investment in RT absolutely helped bring it forward ahead of the market being ready for it. They were slow at getting adoption, but without their push, who knows if the current generation of consoles would have bothered at all, let alone mobile hardware and beyond.
 
What kind of artifacts?
  • Squiggly lines on zebra crossings in Cyberpunk.
  • Shimmering artifacts on god rays through trees in KCD2.
  • Similar issues with distant foliage in virtually every open-world game with foliage I tried, though I didn't try it in AC Shadows since it's already so heavy.
  • Blocky artifacts in Space Marine 2 when alpha effects are in play, which is all the time since the game is so effects-driven, especially in the distance.
This was back at launch a couple of months ago, so maybe things have improved since, but I'm barely able to run DLSS Quality in these games. It makes no sense to take a big performance hit and drop down to DLSS4 Balanced or Performance anyway.
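These are exactly the kinds of artifacts a quantifiable quality test (the kind no outlet has applied yet, per the earlier post) could capture. A minimal sketch, assuming scikit-image and imageio are installed and that you've captured matching frames yourself (the file names are hypothetical):

```python
import numpy as np
from imageio.v3 import imread
from skimage.metrics import structural_similarity as ssim

# Frame captures you would grab yourself; these file names are placeholders.
native   = imread("native_4k.png").astype(np.float32) / 255.0
upscaled = imread("upscaled_4k.png").astype(np.float32) / 255.0

# SSIM over the full frame; crops around foliage, god rays, or alpha effects
# would isolate the specific artifacts described above.
score = ssim(native, upscaled, channel_axis=-1, data_range=1.0)
print(f"SSIM vs native: {score:.4f}")  # 1.0 means identical to native
```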
 
They already call their Tensor Core equivalents Matrix Cores on their CDNA GPUs. What RDNA4/RDNA3/PS5 Pro have isn't a Tensor/Matrix Core but ML ops running on the shader cores. We'll likely see them bring that over with UDNA.
I'm reading a lot of different stuff about UDNA, but Kepler has a solid track record, so I'll take his word for it.



 