I mean… one of the very first replies to the tweet nails it. It's likely that DF just has a more thorough understanding of what frame generation actually is now. When DLSS3 was first introduced it was essentially just witchcraft. Now that there's been some time to fiddle with it and really understand the nitty gritty and everything it is - when talking about a similar technology they are more nuanced in their verbiage.
I personally think FSR3 looks better. It's less blurry. Sure there are some more artifacts compared to DLSS and I've always been pro-dlss against FSR but I have to admit AMD is really bringing it to the table. Especially since FSR isn't hardware locked.
It would be more damning if DF didn't understand how frame generation worked when DLSS3 came out; motion smoothing has been on TVs for decades.
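For anyone who wants the intuition: TV-style motion smoothing at its crudest is just blending two frames. Here's a minimal toy sketch of that idea (my own example, not actual DLSS3/FSR3 code, which warps pixels along motion vectors and optical flow instead of blending):

```python
import numpy as np

def naive_interpolated_frame(prev_frame: np.ndarray, next_frame: np.ndarray,
                             t: float = 0.5) -> np.ndarray:
    """Crude TV-style 'motion smoothing': blend two frames at time t in [0, 1].
    This ghosts badly on motion, which is exactly what motion-vector-based
    frame generation (DLSS3/FSR3) exists to avoid."""
    blended = (1.0 - t) * prev_frame.astype(np.float32) \
            + t * next_frame.astype(np.float32)
    return blended.astype(prev_frame.dtype)

# Two 1080p RGB frames in, one synthetic in-between frame out.
prev_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
next_frame = np.full((1080, 1920, 3), 200, dtype=np.uint8)
mid = naive_interpolated_frame(prev_frame, next_frame)  # uniform gray ~100
```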
Remember, this is the internet, if you say something that means you can NEVER change your mind or have a different opinion later!
If you actually watch the FSR video he goes on to explain that it's not extra performance for either technology and that it actually comes with an overhead. It's a framerate increase but not a performance increase.
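To put rough numbers on that (mine, not the video's): because the interpolation overhead eats into the render budget, the GPU actually renders fewer real frames, and the game still simulates and responds at that lower real rate. A back-of-envelope sketch:

```python
def frame_gen_model(base_fps: float, overhead_ms: float = 3.0) -> dict:
    """Toy model of frame generation. base_fps is the frame rate with FG off;
    overhead_ms is an assumed per-frame cost of generating the extra frame."""
    base_frame_ms = 1000.0 / base_fps
    # The generation pass steals time from rendering, so real fps drops...
    rendered_fps = 1000.0 / (base_frame_ms + overhead_ms)
    # ...but every real frame gets one generated sibling, doubling displayed fps.
    displayed_fps = 2 * rendered_fps
    # Interpolation holds back one real frame, so input latency tracks real frames.
    latency_ms = 2 * (1000.0 / rendered_fps)
    return {"rendered_fps": round(rendered_fps, 1),
            "displayed_fps": round(displayed_fps, 1),
            "latency_ms": round(latency_ms, 1)}

print(frame_gen_model(60.0))
# {'rendered_fps': 50.8, 'displayed_fps': 101.7, 'latency_ms': 39.3}
# Smoother on screen, but the game runs and responds at ~51 fps, not ~102.
```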
Sure, but they weren't saying this for DLSS3: "up to 5x perf vs native 4K". But then for FSR3 they suddenly understood that "frame generation is not extra performance". Very convenient to understand that for FSR3.
DLSS Quality at 4K should not be any blurrier than a native 4K image. It will even have some advantages, such as better stability and less ghosting. Frame Generation doesn't make anything blurrier either, if your base FPS is more than 60. Frame Generation can lead to some minor artifacts though, especially with the UI.
Personally I think they are all blurry methods (and add input latency) compared to native or proper/clean TAA, Ti or even some CBR methods. With DLSS and frame generation you get more frames but it's so blurry. I like my textures sharp.
When it's changing your mind simply depending on who is doing it, then yes, that should be called out.
Rich: "I'm careful with words used to describe the frame rate increase because, similar to DLSS3, I don't think you can call it extra performance as such."
Where is the double standard here?
He says he doesn't think you can call FrameGen from DLSS3 or FSR3 extra performance.
The fact that he literally called it extra performance when presenting DLSS3 frame generation.
It doesn't. FSR has an auto-sharpening pass in Immortals of Aveum and many other games. DLSS isn't as aggressive with its sharpening. You can either toggle it in-game or use the NVIDIA Control Panel for extra sharpness. FSR frame generation requires you to toggle FSR upscaling on, thus inheriting all the flaws of FSR.
Why are you comparing TAA or CBR to frame generation? They aren't the same thing at all. You can compare DLSS to CBR/TAA, and DLSS shits on both. The last sentence is also completely false. Do you even know what these things do?
There is no double standard. He makes the same point for DLSS frame generation and FSR frame generation over a year later, and fanboys use this to stoke the flames of war. Rich clearly says, "AMD and NVIDIA are likely to say it's extra performance."
They already came to this way of viewing things in the 4090 review video. It didn't have anything to do with FSR 3's arrival.
Have you actually watched the FSR3 video at all?
It doesn't even sound like they understand what they're talking about. They say DLSS is blurry and adds latency and then compare it to CBR/TAA. DLSS doesn't add latency and is less blurry than TAA/CBR. Frame generation does add latency but is not comparable to TAA/CBR.
Gotta admire people that can be so confidently wrong.
This is such an idiotic take.
More frames = more performance; any processing penalty is vastly outdone by the large framerate increase, since that processing performance is on full display when frame gen is off and yet framerates are significantly lower.
The only valid example of when frame gen does not = more performance is when there are latency or image quality issues to get that increase.
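Fair, and you can sketch that caveat too. Under the same sort of toy model as above (my own thresholds, purely illustrative), frame gen stops being a net win when the base frame rate is low enough that the held-back frame pushes latency past what you'd tolerate:

```python
def frame_gen_worth_it(base_fps: float, overhead_ms: float = 3.0,
                       max_latency_ms: float = 50.0) -> bool:
    """True if frame generation's latency cost stays within budget.
    All numbers here are assumptions for illustration, not measured values."""
    rendered_ms = 1000.0 / base_fps + overhead_ms
    effective_latency_ms = 2 * rendered_ms  # one real frame held for interpolation
    return effective_latency_ms <= max_latency_ms

print(frame_gen_worth_it(60.0))  # True:  ~39 ms, fine for most games
print(frame_gen_worth_it(30.0))  # False: ~73 ms, input feels sluggish
```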
Yes, I have. It's the perception they give at the time that's the double standard. The point is that a year ago they were calling it extra performance when they were kind of shilling for the 4000 series cards. Much like years ago they were all giddy for a particular midgen refresh but now they dislike midgen refreshes "equally". A decade ago using motion vectors and interpolation was not great either because it wasn't "true 1080p" but now motion vectors and interpolated frames are cool for extra frames and resolution. They can market and sell things when they need to.
When does he say it's extra performance for DLSS3 but NOT extra performance for FSR3?
"There's also the matter of DLSS3. NVIDIA's newest technology that boosts performance further...well we should discuss whether, "performance" is the right word. He then states further "What this means at a fundamental level is, it's perfectly acceptable to say that your system isn't gaining extra performance, but with that said, gaming performance is measured by frame rates or frame times and DLSS3 certainly helps there. So perhaps your games are not running faster but they appear smoother."Yes, I have. It's the perception they give at the time that's the double standard. The point is that a year ago they were calling it extra performance when they were kind of shilling for the 4000 series cards. Much like years ago they were all giddy for a particular midgen refresh but now they dislike midgen refreshes "equally". A decade ago using motion vectors and interpolation was not great either because it wasn't "true 1080p" but now motion vectors and interpolated frames are cool for extra frames and resolution. They can market and sell thing when they need to.
You see their views evolve at the most convenient times, which allows them to maintain a cohesive narrative.
Yeah not in my experience, dlss always has more ghosting than native.
Doesn't FSR3 Frame Generation currently only work on 7000 series GPUs?
AMD Fluid Motion Frames only works on AMD GPUs 6000 series and up.
So isn't it hardware locked?
You mean they might change their views because the facts may have changed...
How bourgeois of them.
Even before DLSS 3.0 was announced, I remember them being excited about the possibility of AI-based interpolation on DF Direct. And the decision over whether to call it extra performance or not is a terminological one. I don't see any evidence they are less excited about interpolation now than they were before. The tone of the FSR 3.0 video is even positive, despite the limitations.
OK, I am mixing stuff a bit, but my point is each time I watch a DLSS-vs-something comparison I always notice how blurry the DLSS textures often become in motion. They are all like "extraordinary stuff with DLSS" but I don't see it. You get more frames but high-frequency details from textures often get lost.
Everything DF has ever said or produced outside of straight framerate comparisons of games on different platforms has been either very handwavey and woefully uninformed at best, or just straight-up bullshit at worst. DF is good if you want to know which platforms a game might run well on. It's absolutely shit for everything else, especially when they start to talk about technical things.
This is patently false. DLSS isn't blurrier than TAA and especially not CBR. Furthermore, DLSS's major strength over other upscaling methods is its ability to maintain details and lessen ghosting/fizzling with moving objects, something it does better than TAA, FSR, and CBR (lol). You're describing what DLSS does far better than the other reconstruction methods, which is incredibly ironic.
That's not how this works.
Can you give us a game where this happens?
Bad look. They really need to address this.
I'm not an expert on these techs, but from what I know, FSR is hardware agnostic because it relies on shader cores while DLSS uses its own dedicated cores.
It's clear that FSR eats some performance by using cores intended for graphics, while DLSS does not.
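A toy timing model of that argument (all numbers are assumed, and real GPUs are messier): an upscaler running on the shader cores adds its cost straight onto the frame time, while work on dedicated units can partly overlap with rendering:

```python
def frame_time_ms(render_ms: float, upscale_ms: float, overlap: float) -> float:
    """overlap is the fraction of the upscaling pass that hides behind other
    GPU work: ~0.0 for a shader-core upscaler competing with rendering,
    higher for dedicated units. This only illustrates the contention argument."""
    return render_ms + upscale_ms * (1.0 - overlap)

render = 12.0  # assumed ms to render the low-res frame
print(frame_time_ms(render, upscale_ms=1.5, overlap=0.0))  # FSR-like:  13.5 ms
print(frame_time_ms(render, upscale_ms=1.0, overlap=0.7))  # DLSS-like: 12.3 ms
```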