Digital Foundry gets called out on apparent double standard between coverage of NVIDIA vs. AMD Frame Generation Tech

I mean… one of the very first replies to the tweet nails it. It's likely that DF just has a more thorough understanding of what frame generation actually is now. When DLSS3 was first introduced it was essentially just witchcraft. Now that there's been some time to fiddle with it and really understand the nitty-gritty of everything it is, they're more nuanced in their verbiage when talking about a similar technology.

I personally think FSR3 looks better. It's less blurry. Sure, there are some more artifacts compared to DLSS, and I've always been pro-DLSS against FSR, but I have to admit AMD is really bringing it to the table. Especially since FSR isn't hardware locked.
 
I mean… one of the very first replies to the tweet nails it. It's likely that DF just has a more thorough understanding of what frame generation actually is now. When DLSS3 was first introduced it was essentially just witchcraft. Now that there's been some time to fiddle with it and really understand the nitty-gritty of everything it is, they're more nuanced in their verbiage when talking about a similar technology.

I personally think FSR3 looks better. It's less blurry. Sure, there are some more artifacts compared to DLSS, and I've always been pro-DLSS against FSR, but I have to admit AMD is really bringing it to the table. Especially since FSR isn't hardware locked.

So far, FSR3 is non-functional unless you can make the game run at the refresh rate of your display.

They need to add VRR support for it to be worth using.

What do you mean about sharpness? Immortals has its sharpening increased with FSR but there's no such setting for DLSS; that's on the devs in this case. Image quality of DLSS is generally much better.
 
I mean… one of the very first replies to the tweet nails it. It's likely that DF just has a more thorough understanding of what frame generation actually is now. When DLSS3 was first introduced it was essentially just witchcraft. Now that there's been some time to fiddle with it and really understand the nitty-gritty of everything it is, they're more nuanced in their verbiage when talking about a similar technology.

I personally think FSR3 looks better. It's less blurry. Sure, there are some more artifacts compared to DLSS, and I've always been pro-DLSS against FSR, but I have to admit AMD is really bringing it to the table. Especially since FSR isn't hardware locked.
It would be more damning if DF didn't understand how frame generation worked when DLSS3 came out; motion smoothing has been on TVs for decades.
 
If you actually watch the FSR video he goes on to explain that it's not extra performance for either technology and that it actually comes with an overhead. It's a framerate increase but not a performance increase.
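To put rough numbers on that overhead point (purely illustrative figures, not measurements from either vendor), a quick sketch of why the frame counter goes up while the real rendering rate goes slightly down:

```python
# Back-of-the-envelope sketch of "framerate increase, not performance increase".
# Frame generation interpolates one extra frame between every two rendered frames,
# but the interpolation pass itself costs GPU time, so the *rendered* frame rate
# drops slightly even while the *presented* frame rate roughly doubles.

base_fps = 60.0        # hypothetical FPS with frame generation off
overhead_ms = 1.5      # hypothetical per-frame cost of the interpolation pass

base_frametime_ms = 1000.0 / base_fps
rendered_fps = 1000.0 / (base_frametime_ms + overhead_ms)  # real frames actually rendered
presented_fps = rendered_fps * 2                           # each rendered frame also yields one generated frame

print(f"rendered (real) FPS: {rendered_fps:.1f}")   # ~55 -> fewer real frames than before
print(f"presented FPS:       {presented_fps:.1f}")  # ~110 -> the number the FPS counter shows
```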
 
I mean… one of the very first replies to the tweet nails it. It's likely that DF just has a more thorough understanding of what frame generation actually is now.
Remember, this is the internet, if you say something that means you can NEVER change your mind or have a different opinion later!
 
If you actually watch the FSR video he goes on to explain that it's not extra performance for either technology and that it actually comes with an overhead. It's a framerate increase but not a performance increase.

[dumb - The Simpsons GIF]

This is such an idiotic take.
More frames = more performance; any processing penalty is vastly outdone by the large framerate increase, since that processing power is on full display when frame gen is off and yet framerates are significantly lower.

The only valid example of when frame gen does not = more performance is when there are latency or image quality issues to get that increase.
 
There's visual performance, and there's gameplay or latency performance.

We have all kinds of ways to measure visual performance (resolution, texture detail, fps, model detail/poly count, lighting, etc.) but latency performance measurement is sadly still in its infancy.
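To make that distinction concrete, here's a quick sketch with made-up numbers: FPS falls straight out of frame times, while latency needs end-to-end (input-to-photon) timing that games and benchmarks rarely expose.

```python
# Hypothetical samples only - the point is that the two metrics come from
# completely different measurements, and only the first one is routinely captured.

frame_times_ms = [8.3, 8.4, 8.2, 8.5, 8.3]           # per-frame render times (easy to log)
click_to_photon_ms = [42.0, 45.5, 41.0, 47.0, 44.0]  # input-to-display samples (needs special tooling)

avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
avg_latency_ms = sum(click_to_photon_ms) / len(click_to_photon_ms)

print(f"average FPS:           {avg_fps:.0f}")           # the "visual performance" number
print(f"average input latency: {avg_latency_ms:.1f} ms") # the part we can barely measure today
```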
 
I would not call frame gen higher performance. It actually comes with a hit to performance, in terms of how many real frames the GPU can put out.

I think of it more like AA. It's a toggle that costs a little performance but it's worth it. Like any graphics setting, actually.
 
Based on the replies, it seems that the clip throws together footage out of chronological order, cherry-picking to make a point completely devoid of context.

OP did you make a thread for your own tweet? Be honest
 
The video is whack. It's Rich pondering the terminology of this new thing, compared with everyone naturally using the word 'performance'.

He said we say it but maybe we shouldn't. We say it because the frame counter number goes up, but it's distinct from what we normally consider performance to be. He didn't set a policy.

Also, I didn't get any sense of the video being biased. I don't even know which way the bias is supposed to be going.
 
It's an odd thing: the game visually looks like it is performing much better, but the latency or feel of the game remains roughly the same. I can see the argument either way. Rich does make a valid point.

As for FSR3 itself, it is mostly useless right now, as it lacks support for VRR.
 
If you actually watch the FSR video he goes on to explain that it's not extra performance for either technology and that it actually comes with an overhead. It's a framerate increase but not a performance increase.
Sure, but they weren't saying this for DLSS3: "up to 5x perf vs native K". But then for FSR3 they suddenly understood that "frame generation is not extra performance". Very convenient to understand that for FSR3.

Personally, I think they are all blurry methods (and add input latency) compared to native or proper/clean TAA, Ti or even some CBR methods. With DLSS and frame generation you get more frames but it's so blurry. I like my textures sharp.
 
I personally think FSR3 looks better. It's less blurry. Sure, there are some more artifacts compared to DLSS, and I've always been pro-DLSS against FSR, but I have to admit AMD is really bringing it to the table. Especially since FSR isn't hardware locked.

Doesn't FSR3 Frame Generation currently only work on 7000 series GPUs?
AMD Fluid Motion Frames only works on AMD GPUs 6000 series and up.

So isn't it hardware locked?
 
Sure, but they weren't saying this for DLSS3: "up to 5x perf vs native K". But then for FSR3 they suddenly understood that "frame generation is not extra performance". Very convenient to understand that for FSR3.

Personally, I think they are all blurry methods (and add input latency) compared to native or proper/clean TAA, Ti or even some CBR methods. With DLSS and frame generation you get more frames but it's so blurry. I like my textures sharp.
DLSS Quality at 4K should not be any blurrier than a native 4K image. It will even have some advantages, such as better stability and less ghosting. Frame Generation doesn't make anything blurrier either, if your base FPS is more than 60. Frame Generation can lead to some minor artifacts though, especially with the UI.
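For anyone unsure what "DLSS Quality at 4K" actually renders internally, here's a quick sketch using the commonly cited per-axis scale factors for the DLSS presets (roughly 0.667 / 0.58 / 0.5 / 0.333):

```python
# Internal render resolutions for a 4K output, per the commonly cited DLSS preset scales.
output_w, output_h = 3840, 2160

presets = {
    "Quality": 2 / 3,            # ~2560x1440 internal
    "Balanced": 0.58,            # ~2227x1253 internal
    "Performance": 0.5,          # 1920x1080 internal
    "Ultra Performance": 1 / 3,  # 1280x720 internal
}

for name, scale in presets.items():
    w, h = round(output_w * scale), round(output_h * scale)
    print(f"{name:17s} -> {w}x{h}, reconstructed to {output_w}x{output_h}")
```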
 
Rich: I'm careful with words used to describe the frame rate increase because, similar to DLSS3, I don't think you can call it extra performance as such.

Where is the double standard here?
He says he doesn't think you can call FrameGen from DLSS3 or FSR3 extra performance.
 
Rich: I'm careful with words used to describe the frame rate increase because, similar to DLSS3, I don't think you can call it extra performance as such.

Where is the double standard here?
He says he doesn't think you can call FrameGen from DLSS3 or FSR3 extra performance.
The fact that he literally called it extra performance when presenting DLSS3 frame generation.
 
I personally think FSR3 looks better. It's less blurry. Sure, there are some more artifacts compared to DLSS, and I've always been pro-DLSS against FSR, but I have to admit AMD is really bringing it to the table. Especially since FSR isn't hardware locked.
It doesn't. FSR has an auto-sharpening pass in Immortals of Aveum and many other games. DLSS isn't as aggressive with its sharpening. You can either toggle it in-game or use the NVIDIA Control Panel for extra sharpness. FSR frame generation requires you to toggle FSR upscaling on, thus inheriting all the flaws of FSR.

Sure, but they weren't saying this for DLSS3: "up to 5x perf vs native K". But then for FSR3 they suddenly understood that "frame generation is not extra performance". Very convenient to understand that for FSR3.

Personally, I think they are all blurry methods (and add input latency) compared to native or proper/clean TAA, Ti or even some CBR methods. With DLSS and frame generation you get more frames but it's so blurry. I like my textures sharp.
Why are you comparing TAA or CBR to frame generation? They aren't the same thing at all. You can compare DLSS to CBR/TAA, and DLSS shits on both. The last sentence is also completely false. Do you even know what these things do?
Rich: I'm careful with words used to describe the frame rate increase because, similar to DLSS3, I don't think you can call it extra performance as such.

Where is the double standard here?
He says he doesn't think you can call FrameGen from DLSS3 or FSR3 extra performance.
There is no double standard. He makes the same point for DLSS frame generation and FSR frame generation over a year later, and fanboys use this to stoke the flames of war. Rich clearly says, "AMD and NVIDIA are likely to say it's extra performance."
 
Sure, but they weren't saying this for DLSS3: "up to 5x perf vs native K". But then for FSR3 they suddenly understood that "frame generation is not extra performance". Very convenient to understand that for FSR3.
They already came to this way of viewing things in the 4090 review video. It didn't have anything to do with FSR 3's arrival.

 
The fact that he literally called it extra performance when presenting DLSS3 frame generation.
Have you actually watched the FSR3 video at all?
When does he say it's extra performance for DLSS3 but NOT extra performance for FSR3?
 
DLSS Quality at 4K should not be any blurrier than a native 4K image. It will even have some advantages, such as better stability and less ghosting. Frame Generation doesn't make anything blurrier either, if your base FPS is more than 60. Frame Generation can lead to some minor artifacts though, especially with the UI.
It doesn't even sound like they understand what they're talking about. They say DLSS is blurry and adds latency and then compare it to CBR/TAA. DLSS doesn't add latency and is less blurry than TAA/CBR. Frame generation does add latency but is not comparable to TAA/CBR.
 
[dumb - The Simpsons GIF]

This is such an idiotic take.
More frames = more performance; any processing penalty is vastly outdone by the large framerate increase, since that processing power is on full display when frame gen is off and yet framerates are significantly lower.

The only valid example of when frame gen does not = more performance is when there are latency or image quality issues to get that increase.
Gotta admire people who can be so confidently wrong.

More frames does not mean more performance. It means a smoother presentation, but that's not what actual performance is in this situation.

You measure performance by measuring latency. So if at 1440p@60fps native refresh you have a latency of 40ms, then you turn on frame gen and get 1440p@120fps but your latency has increased to 50ms, that is a quantifiable loss in performance, and something anyone playing competitively will not use. The funny thing is, you said it yourself: `when there is a latency penalty to get those extra frames`, and that is what FG is; there will always be a latency penalty to get those extra frames vs whatever latency you had when running the game without it.

TVs have been doing frame interpolation for over two decades, and we haven't been calling 60Hz native refresh rate TVs 120Hz/240Hz TVs, have we? Be happy that with games we finally have proper interpolation, but please don't go calling it what it isn't.
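A very simplified timing model of why interpolation always costs something (hypothetical numbers, just to show the mechanism being described):

```python
# The generated frame sits *between* real frames N and N+1, so once N+1 is rendered
# it has to wait for the in-between frame to be computed and displayed before it can
# be shown itself - roughly half a real frame interval plus the interpolation pass.

real_frame_ms = 16.7   # ~60 fps base frame time
interp_cost_ms = 2.0   # hypothetical cost of generating the in-between frame

added_delay_ms = real_frame_ms / 2 + interp_cost_ms
print(f"extra latency from holding back the real frame: ~{added_delay_ms:.0f} ms")
# ~10 ms on top of the game's baseline input latency, which is why Reflex /
# Anti-Lag tends to get bundled in to claw some of it back.
```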
 
Have you actually watched the FSR3 video at all?
When does he say it's extra performance for DLSS3 but NOT extra performance for FSR3?
Yes, I have. It's the perception they give at the time that's the double standard. The point is that a year ago they were calling it extra performance when they were kind of shilling for the 4000 series cards. Much like years ago they were all giddy for a particular midgen refresh but now they dislike midgen refreshes "equally". A decade ago using motion vectors and interpolation was not great either because it wasn't "true 1080p" but now motion vectors and interpolated frames are cool for extra frames and resolution. They can market and sell things when they need to.
 
Yes, I have. It's the perception they give at the time that's the double standard. The point is that a year ago they were calling it extra performance when they were kind of shilling for the 4000 series cards. Much like years ago they were all giddy for a particular midgen refresh but now they dislike midgen refreshes "equally". A decade ago using motion vectors and interpolation was not great either because it wasn't "true 1080p" but now motion vectors and interpolated frames are cool for extra frames and resolution. They can market and sell things when they need to.
"There's also the matter of DLSS3. NVIDIA's newest technology that boosts performance further...well we should discuss whether, "performance" is the right word. He then states further "What this means at a fundamental level is, it's perfectly acceptable to say that your system isn't gaining extra performance, but with that said, gaming performance is measured by frame rates or frame times and DLSS3 certainly helps there. So perhaps your games are not running faster but they appear smoother."



The original video in the OP is also the preview predating the public release of DLSS3 and the 4090, and DF weren't even allowed to share frame rates, just the fps multiplier. He was comparing the 4090 to what we used to have with the 3090 Ti, and yes, even without DLSS3, the 4090 offers extra performance, period.

There is nothing to see here. There's enough warring nonsense with NVIDIA vs AMD, no need to stoke the flames to turn this into PS vs Xbox (although, it probably is already too late for that).
 
DLSS Quality at 4K should not be any blurrier than a native 4K image. It will even have some advantages, such as better stability and less ghosting. Frame Generation doesn't make anything blurrier either, if your base FPS is more than 60. Frame Generation can lead to some minor artifacts though, especially with the UI.
Yeah, not in my experience; DLSS always has more ghosting than native.
 
Having watched the video, FSR3 not working with VRR right now is a huge issue; it almost defeats the point, really, since you'll never be at exactly 120-144 etc. for the entirety of the game and will have to deal with vsync judder that hasn't been a thing on PC for years. Double standards, my butt.
 
Doesn't FSR3 Frame Generation currently only work on 7000 series GPUs?
AMD Fluid Motion Frames only works on AMD GPUs 6000 series and up.

So isn't it hardware locked?

No. FSR3 FG can be used even on old GPUs, like an RX 580 or GTX 1060, although AMD does not recommend it.

What is still limited to the 6000 and 7000 series is the preview driver that enables AFMF.
 
Yes, I have. It's the perception they give at the time that's the double standard. The point is that a year ago they were calling it extra performance when they were kind of shilling for the 4000 series cards. Much like years ago they were all giddy for a particular midgen refresh but now they dislike midgen refreshes "equally". A decade ago using motion vectors and interpolation was not great either because it wasn't "true 1080p" but now motion vectors and interpolated frames are cool for extra frames and resolution. They can market and sell things when they need to.
Even before DLSS 3.0 was announced, I remember them being excited about the possibility of AI-based interpolation on DF Direct. And the decision over whether to call it extra performance or not is a terminological one. I don't see any evidence they are less excited about interpolation now than they were before. The tone of the FSR 3.0 video is even positive, despite the limitations.
 
Everything DF has ever said or produced outside of just framerate comparisons of games on different platforms has been either very handwavey or woefully uninformed at best and just straight up bullshit at worst. DF is good if you want to know which platforms a game might run well on. It's absolutely shit for everything else. Especially when they start to talk about technical things.
 
It doesn't. FSR has an auto-sharpening pass in Immortals of Aveum and many other games. DLSS isn't as aggressive with its sharpening. You can either toggle it in-game or use the NVIDIA Control Panel for extra sharpness. FSR frame generation requires you to toggle FSR upscaling on, thus inheriting all the flaws of FSR.


Why are you comparing TAA or CBR to frame generation? They aren't the same thing at all. You can compare DLSS to CBR/TAA, and DLSS shits on both. The last sentence is also completely false. Do you even know what these things do?

There is no double standard. He makes the same point for DLSS frame generation and FSR frame generation over a year later, and fanboys use this to stoke the flames of war. Rich clearly says, "AMD and NVIDIA are likely to say it's extra performance."
OK I am mixing stuff a bit but my point is each time I watch a DLSS vs something I always notice how blurry the DLSS textures often become in motion. They are all like "extraordinary stuff with DLSS" but I don't see it. You get more frames but high frequency details from textures often get lost.
 
Everything DF has ever said or produced outside of just framerate comparisons of games on different platforms has been either very handwavey or woefully uninformed at best and just straight up bullshit at worst. DF is good if you want to know which platforms a game might run well on. It's absolutely shit for everything else. Especially when they start to talk about technical things.

That is true. They are very lacking in technical ability, unless a dev specifically tells them what to say.
They are the kind of people who put a GPU in the second slot, have it run at PCIe x4, then wonder for several tests why performance is lower.
Or who screw up the memory configuration of a system, then get 90ns latency on a CPU.
 
OK I am mixing stuff a bit but my point is each time I watch a DLSS vs something I always notice how blurry the DLSS textures often become in motion. They are all like "extraordinary stuff with DLSS" but I don't see it. You get more frames but high frequency details from textures often get lost.
This is patently false. DLSS isn't blurrier than TAA and especially not CBR. Furthermore, DLSS's major strength over other upscaling methods is its ability to maintain detail and lessen ghosting/fizzling with moving objects, something it does better than TAA, FSR, and CBR (lol). You're describing what DLSS does far better than the other reconstruction methods, which is incredibly ironic.

[Comparison screenshots: kVXrZlQ.png, PC1qHMe.png, E1mgI4Q.png, tSnx4nH.png]

And that's on still images. The difference is greater in motion because that's where CBR struggles the most; it sits on the bottom rung when it comes to upscaling methods.

Only frame generation adds latency because it inserts an interpolated frame between two frames and this needs to be calculated, thus lowering response time. This is mitigated by NVIDIA always enabling Reflex for frame generation which makes the latency LOWER than native (but still higher than with DLSS upscaling). You took everything that DLSS does better than the competition, flipped it on its head and claimed that other far inferior methods are better. And to top it all off, DLSS offers much better performance than CBR or TAA.
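Purely illustrative numbers below, but they show the ordering described above:

```python
# Invented milliseconds - only the relative ordering matters here:
# frame gen by itself adds latency, Reflex pulls the total back below plain native,
# but it still ends up above DLSS upscaling with Reflex and no frame generation.

scenarios_ms = {
    "native, no Reflex":        55,
    "DLSS upscaling + Reflex":  35,
    "DLSS frame gen + Reflex":  45,
}

for name, latency in sorted(scenarios_ms.items(), key=lambda kv: kv[1]):
    print(f"{name:26s} ~{latency} ms end-to-end (hypothetical)")
```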
 
OK I am mixing stuff a bit but my point is each time I watch a DLSS vs something I always notice how blurry the DLSS textures often become in motion. They are all like "extraordinary stuff with DLSS" but I don't see it. You get more frames but high frequency details from textures often get lost.
Can you give us a game where this happens?
 
I'm not an expert on these techs, but from what I know, FSR is hardware-agnostic because it relies on shader cores while DLSS uses its own dedicated cores.

It's clear that FSR eats some performance by using cores intended for graphics, while DLSS does not.
 
I'm not an expert on these techs, but from what I know, FSR is hardware-agnostic because it relies on shader cores while DLSS uses its own dedicated cores.

It's clear that FSR eats some performance by using cores intended for graphics, while DLSS does not.

In the case of FSR2 vs DLSS2, yes.
But in the case of Frame Generation, FSR3 FG has lower overhead.
 