The 60fps minimum input kinda limits things.
Motion interpolation is basically a waste of time. You don't get the latency reduction in the controls that a higher framerate would give, and while it looks smoother, it tends to be prone to artifacting.
It's like the reverse of the '24fps is cinematic' meme.
Can't believe people are actually complaining about FSR. The nerve of you twits. Without it, Nvidia would add another $200 on top of their already uber-expensive but not-so-great selection, just because they can.
We should be thanking AMD for this; we need the damn competition. The more open options we have, the better off we are.
> Wow, 27 minute wait, I wait through the whole queue, then when it starts connecting it gives me a login time-out and spits me back to the main screen with a 32 minute wait.

FSR3 will do that to ya.
> FSR3 will do that to ya.

haha wrong thread lol
> So turn it off?

in console games?
> Can it be enabled only if frame-rate goes below 60fps?

It's open-source, so it's pretty much in the hands of developers how to use it.
> What if the game reaches something like 45-50fps then it gets doubled and locked down to 60fps?

It can work as framerate compensation - but you'd get the input latency of 30fps + interpolation overhead every time the 'real' framerate drops below 60.
> It can work as framerate compensation - but you'd get the input latency of 30fps + interpolation overhead every time the 'real' framerate drops below 60.

I wonder how it feels. I still think that given the same input lag (let's say at 40 fps), people would prefer the higher frame rate.
So probably a bit of an odd experience - visual 60, with fluctuating input latency.
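To put rough numbers on that "visual 60, input latency of the real framerate" point, here's a back-of-the-envelope sketch in Python. The few-milliseconds overhead figure is just an assumed value for illustration, not anything AMD has published:

```python
# Frame generation doubles the displayed framerate, but input latency stays
# tied to the real (rendered) framerate, plus some interpolation overhead.
# The overhead value here is a guess for illustration only.

def with_frame_generation(real_fps, overhead_ms=5.0):
    """Return (displayed_fps, input_latency_ms) for a 2x frame generator."""
    real_frame_time_ms = 1000.0 / real_fps
    displayed_fps = real_fps * 2
    # Interpolation needs the *next* real frame before it can show the
    # in-between one, so latency is roughly one real frame time + overhead.
    input_latency_ms = real_frame_time_ms + overhead_ms
    return displayed_fps, input_latency_ms

for real_fps in (30, 45, 60):
    shown, latency = with_frame_generation(real_fps)
    print(f"real {real_fps:2d}fps -> shown {shown:3d}fps, ~{latency:.1f}ms input latency")
```

So a game dipping from 60 to 30 real fps keeps showing "60fps", but the input latency jumps from about 22ms to about 38ms under these assumptions - which is the fluctuating-latency oddness described above.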
Can consoles get this?
> The truth is we need better competition. Intel I believe in. It’s impressive what they’ve accomplished with their Arc GPUs since release, and their upscaling tech is great. The funny thing is even they’re rocking 16GB of VRAM while Nvidia skimps.
> AMD puts out nice cards, but they immediately fuck it up with the pricing. They had Nvidia by the balls and then botched the launch of their new cards.
> It’s bad enough gamers are just programmed to blind-buy Nvidia without doing their homework. I get it; they’re the first to release new features. But the truth is their drivers are no better, and they’re fucking the shit out of customers more than ever. The 4070 Ti shouldn’t have 12GB of VRAM and cut-down memory bandwidth. The 4070 will be a joke, as will the 8GB 4060 Ti. They’re basically selling people the same cards as the two-year-old previous gen with a new number, at the same price.

Reminder: Nvidia is releasing an 8-gigabyte VRAM card in 2023. Bunch of fucking clowns.
> Motion interpolation is basically a waste of time. You don't get the latency reduction in the controls that a higher framerate would give, and while it looks smoother, it tends to be prone to artifacting.

I wouldn't write it off quite yet.
> Motion interpolation is basically a waste of time. You don't get the latency reduction in the controls that a higher framerate would give, and while it looks smoother, it tends to be prone to artifacting.
> It's like the reverse of the '24fps is cinematic' meme.

Tbh, it's not like games nowadays don't have artifacts...
> What if the game reaches something like 45-50fps then it gets doubled and locked down to 60fps?

I don't think you would lock it to 60Hz, right? It should be like DLSS 3 frame generation, where you don't want to cap the framerate. So your 45-50fps should come out to around 90-100fps, with the latency of 45-50fps plus probably a few milliseconds on top, similar to frame generation. The 60fps mentioned in the slide is the base frame-rate input: they recommend at least 60fps for optimum results.
TAA causes every game to be a blurry mess, with ghosting and artifacting. DLSS helps, but it's still very visible. If we're fine with TAA, I don't think interpolation could be any worse.
> "We copied Nvidia's homework, how does it look?"
> - AMD, the company any time Nvidia does anything

You forgot "and made it open source so it works on anything except JUST the latest overpriced GPUs from Nvidia".
> It’s going to be a 3070-level card for $500+. Literally the same power level, two years later, for the same price.
> Nvidia’s a joke with this lineup.
> Just a leak, so take it with a grain of salt...

I admit that their 4090 is pretty good. Way too expensive for me, however.
> Gonna be interesting how well this works after Nvidia said it was too hard to get working without their updated Ada optical flow accelerator.

What Nvidia is doing is totally different. Interpolated frames are nothing new -- it's why your grandma's TV makes everything look like a soap opera. Oculus has used motion vector based frame interpolation to cover for dropped frames for years.
> I can't imagine just how bad it will be IQ-wise.

It's not just AI: DLSS3 also gets motion vectors from the game engine, something TVs and media players using frame interpolation don't have and can only guess at by comparing pixels. That's why DLSS3 frame generation gives higher quality than simple frame interpolation.
Those super-resolution and motion-interpolation methods have been in TVs for ages. The point of what Nvidia was doing is using AI to improve the final image quality, and it worked because of AI training and other advancements in AI. There have been no other advancements in super resolution or motion interpolation besides AI.
What AMD is doing is introducing the same technologies but built without AI, which completely defeats the goal.
FSR2 looks like crap compared even to the AI performance and ultra-performance presets, despite the same preset names and performance levels.
Expect even worse picture quality than in the latest Jedi game.
> Can consoles get this?

Zelda ToK uses FSR2 so I don't see why not.
> No. It's just FSR1.

Aight, my bad then. It's still FSR, so I don't see why 3 can't come.
> It's not just AI: DLSS3 also gets motion vectors from the game engine, something TVs and media players using frame interpolation don't have and can only guess at by comparing pixels. That's why DLSS3 frame generation gives higher quality than simple frame interpolation.

Yeah, I forgot about motion vectors from the engine. That might improve quality a bit.
That's why it has to be implemented by the devs in the engine; it's not something just put on top of the final rendered image.
If FSR3 is only driver-side, would it have motion vectors from the engine? I guess not, and it wouldn't be able to separate the UI from the 3D scene easily either?
If it's driver-side it could only use final images, so the game could still report x FPS while the driver doubles it. It could work on any game, but with worse quality than if it were baked into the game engine, I guess.
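A toy illustration of why engine motion vectors matter for quality. This is not how FSR3 or DLSS3 actually work internally - just the blending-versus-warping idea in one dimension:

```python
import numpy as np

# Toy 1-D "frames": a bright object moves 4 pixels between frame A and frame B.
frame_a = np.zeros(16)
frame_a[4] = 1.0
frame_b = np.zeros(16)
frame_b[8] = 1.0

# TV-style interpolation with no motion vectors: just blend the two frames.
# The object "ghosts" - two half-bright copies instead of one in the middle.
blend = (frame_a + frame_b) / 2

# With an engine-supplied motion vector (+4 px), warp frame A halfway forward.
motion_px = 4
warped = np.roll(frame_a, motion_px // 2)  # object lands at pixel 6

print(np.flatnonzero(blend))   # [4 8] -> ghosted double image
print(np.flatnonzero(warped))  # [6]   -> single object at the true midpoint
```

A driver-only FSR3 would be stuck in the first case (final images only), while an in-engine integration gets the second.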
Every Sony TV already does this!
Sarcastic?
TVs don’t have access to temporal vectors. Motion interpolation and these solutions aren’t even in the same league.
Lol. There was a thread with some guy saying this. Was trying to be funny.....fail.
> If FSR3 is only driver-side, would it have motion vectors from the engine? I guess not, and it wouldn't be able to separate the UI from the 3D scene easily either?

No to either.
> Oculus has used motion vector based frame interpolation to cover for dropped frames for years.

Well - not quite. What Oculus has access to is motion inputs from the headset - no motion vectors for the pixels 'inside' the frame, or any other buffers (like depth etc).
> If you get more fps but don't improve the input latency, it's not really useful to the gameplay. It looks better, but like DLSS 3, it's more style over substance. DLSS 2.x was the real breakthrough in my opinion.

If we add additional frames to your typical Sony walking simulators or Nintendo games, players won't notice much and it'll only look smoother, because of the control and camera sluggishness such games have. Not all games are CSGO.
> Well - not quite. What Oculus has access to is motion inputs from the headset - no motion vectors for the pixels 'inside' the frame, or any other buffers (like depth etc).

That was true when it launched, but I'm pretty sure they added a more robust interpolation algorithm that does use motion vectors.
> That was true when it launched, but I'm pretty sure they added a more robust interpolation algorithm that does use motion vectors.

They improved the algorithm to do spatial reprojection in addition to rotational - but it's not using anything other than RGB as input from the game; it's a driver-level reprojection.
> Where DLSS3 (and likely FSR 3) are using two frames to create an in-between, Oculus is just using the start point and motion vectors to create the in-between while the next frame is still rendering.

There's not really an in-between. Every frame is reprojected in VR - the only difference is how big the delta-time is.
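Going by the two approaches described in the quote, the timing difference can be sketched like this. The 30fps base rate and the half-frame display point are illustrative assumptions, not Oculus's or Nvidia's actual numbers:

```python
# Toy timeline (milliseconds) for generating the synthetic frame between
# real frames n and n+1, assuming a 30fps real render rate.
REAL_FRAME_MS = 1000 / 30

def interpolation_display_time(n):
    # DLSS3/FSR3-style interpolation: the in-between frame can only be
    # shown once frame n+1 has finished rendering.
    return (n + 1) * REAL_FRAME_MS

def extrapolation_display_time(n):
    # Reprojection-style extrapolation: warp frame n forward using motion
    # data, so the synthetic frame can go out half a frame after frame n,
    # while frame n+1 is still rendering.
    return n * REAL_FRAME_MS + REAL_FRAME_MS / 2

n = 3
print(f"interpolated in-between ready at {interpolation_display_time(n):.1f}ms")
print(f"extrapolated frame ready at     {extrapolation_display_time(n):.1f}ms")
```

That half-frame head start is why extrapolation avoids the added input latency that two-frame interpolation carries, at the cost of guessing the future instead of averaging two known frames.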
The rumors say they will present FSR3 and that Starfield will be the first title for the tech.
But as usual, take it with a pinch of salt.