
NVIDIA Reflex 2 With New Frame Warp Technology Reduces Latency In Games By Up To 75%

LectureMaster

Gold Member
I guess this will be essential for multi frame gen.




NVIDIA Reflex 2 With New Frame Warp Technology Reduces Latency In Games By Up To 75%

By Andrew Burnes and Nyle Usmani on January 06, 2025 | Featured Stories, CES, GeForce RTX 50 Series, GeForce RTX GPUs, NVIDIA DLSS, NVIDIA Reflex

In competitive games, a few milliseconds of input lag can mean the difference between victory and defeat.

In 2020, we released NVIDIA Reflex, an innovative technology that reduces PC latency in top competitive games by an average of 50%. NVIDIA Reflex accomplishes this by synchronizing CPU and GPU work, so player actions are reflected in-game quicker, giving gamers a competitive edge in multiplayer games, and making single-player titles more responsive.
In the last four years, NVIDIA Reflex has been integrated in over 100 games and reduced latency for tens of millions of GeForce gamers. Over 90% of gamers turn Reflex on, allowing them to experience better responsiveness, aim more accurately, and win more games.

At CES 2025, we’re unveiling NVIDIA Reflex 2, which can reduce PC latency by up to 75%. Reflex 2 combines Reflex Low Latency mode with a new Frame Warp technology, further reducing latency by updating the rendered game frame based on the latest mouse input right before it is sent to the display.



NVIDIA Reflex 2 Explained

Every player action taken in a video game goes through a complex pipeline before being rendered on-screen, with each step introducing latency. Inputs from your keyboard and mouse are passed to the game, where their effects are calculated by the CPU. The results are placed in a render queue, which is passed to the GPU for rendering, before finally being output to the display.

This process typically executes in tens of milliseconds for each frame, though stalls and other delays can add latency, making the game feel unresponsive.
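As a rough illustration (every number below is made up for the example, not an NVIDIA measurement), the stages sum to an end-to-end figure in the tens of milliseconds:

```python
# Toy click-to-photon latency budget; all numbers are illustrative only.
stages_ms = {
    "peripheral + OS input": 1.0,
    "game simulation (CPU)": 4.0,
    "render queue wait":     8.0,
    "GPU render":           10.0,
    "display scan-out":     12.0,
}
for name, ms in stages_ms.items():
    print(f"{name:23s} {ms:5.1f} ms")
print(f"{'total':23s} {sum(stages_ms.values()):5.1f} ms")
```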

[Image: latency pipeline diagram]





With the launch of Reflex, we set out to optimize the latency pipeline from mouse to display via an SDK integrated directly into the game engine. Reflex better paces the CPU, preventing it from running ahead, and allowing it to submit tasks to the GPU just in time for the GPU to start work, effectively eliminating the render queue. And by starting the CPU work later, mouse inputs can be sampled closer to when a frame is being submitted to the GPU, further improving responsiveness.
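A toy sketch of the pacing idea, using my own assumed timings rather than anything from the Reflex SDK: when the CPU runs ahead, frames wait in a queue and the input baked into each frame goes stale; when CPU work is delayed so it finishes just as the GPU frees up, the queue disappears and input is sampled later.

```python
# Conceptual sketch of just-in-time CPU pacing; timings are assumptions.
import time

GPU_FRAME = 0.010   # assume the GPU needs 10 ms per frame
CPU_FRAME = 0.004   # assume CPU sim + submit takes 4 ms

def frame_unpaced(queued_frames=2):
    """CPU runs ahead: this frame waits behind `queued_frames` in the render queue."""
    sampled = time.perf_counter()          # input sampled at the start of CPU work
    time.sleep(CPU_FRAME)                  # simulate + submit
    time.sleep(queued_frames * GPU_FRAME)  # wait for queued frames to drain
    time.sleep(GPU_FRAME)                  # render this frame
    return (time.perf_counter() - sampled) * 1000

def frame_paced():
    """CPU work starts just in time, so input is sampled later and no queue forms."""
    time.sleep(GPU_FRAME - CPU_FRAME)      # wait so CPU finishes as the GPU frees up
    sampled = time.perf_counter()          # input sampled much closer to render
    time.sleep(CPU_FRAME)                  # simulate + submit
    time.sleep(GPU_FRAME)                  # GPU starts immediately
    return (time.perf_counter() - sampled) * 1000

print(f"unpaced: ~{frame_unpaced():.0f} ms from input sample to end of render")
print(f"paced:   ~{frame_paced():.0f} ms from input sample to end of render")
```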

[Image: how NVIDIA Reflex works]





With Reflex 2, we’ve introduced a different approach to reducing latency. Four years ago, NVIDIA’s esports research team published a study illustrating how players could complete aiming tasks faster when frames are updated after being rendered, based on even more recent mouse input. In the experiment, game frames were updated to reduce 80 milliseconds (ms) of added latency, which resulted in players completing an aiming target test 30% faster.
When a player aims to the right with the mouse, for example, it would normally take some time for that action to be received, and for the new camera perspective to be rendered and eventually displayed. What if instead, an existing frame could be shifted or warped to the right to show the result much sooner?

Reflex 2 Frame Warp takes this concept from research to reality. As a frame is being rendered by the GPU, the CPU calculates the camera position of the next frame in the pipeline, based on the latest mouse or controller input. Frame Warp samples the new camera position from the CPU, and warps the frame just rendered by the GPU to this newer camera position. The warp is conducted as late as possible, just before the rendered frame is sent to the display, ensuring the most recent mouse input is reflected on screen.
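Conceptually, the important part is where in the frame loop the warp sits: the camera pose is re-sampled after rendering, immediately before present. A hypothetical sketch with made-up names (not NVIDIA's API); a sketch of what the warp itself might do follows the next paragraph:

```python
# Hypothetical shape of a late-warp present step (illustrative names, not NVIDIA's API).
def present_with_frame_warp(rendered_frame, render_pose,
                            sample_latest_pose, warp, inpaint, present):
    latest_pose = sample_latest_pose()                 # freshest mouse/controller input
    warped, holes = warp(rendered_frame, render_pose, latest_pose)
    final = inpaint(warped, holes)                     # fill disocclusion holes
    present(final)                                     # hand off to the display
```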

When Frame Warp shifts the game pixels, small holes in the image are created where the change in camera position reveals new parts of the game scene. Through our research, NVIDIA has developed a latency-optimized predictive rendering algorithm that uses camera, color and depth data from prior frames to in-paint these holes accurately. Players see the rendered frame with an updated camera perspective and without holes, reducing latency for any actions that shift the in-game camera. This helps players aim better, track enemies more precisely, and hit more shots.
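For intuition, here is a generic depth-based reprojection with hole detection, assuming a pinhole camera, a per-pixel depth buffer, and 4x4 world-to-camera matrices. This is not NVIDIA's latency-optimized algorithm; the returned hole mask is what their predictive in-painting would then fill:

```python
# Generic depth-based frame warp with hole detection (not NVIDIA's algorithm).
import numpy as np

def warp_frame(color, depth, K, old_cam_from_world, new_cam_from_world):
    """Reproject `color` (H, W, 3) from the old camera pose to the new one.

    `depth` is per-pixel view-space depth for the old pose, `K` the 3x3
    intrinsics, and the poses are 4x4 world-to-camera matrices. Returns the
    warped image and a boolean mask of disocclusion holes to be in-painted.
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).T.astype(float)

    # Unproject old-frame pixels to world space, then project into the new view.
    cam_old = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    world = np.linalg.inv(old_cam_from_world) @ np.vstack([cam_old, np.ones((1, cam_old.shape[1]))])
    cam_new = new_cam_from_world @ world
    proj = K @ cam_new[:3]
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)

    # Forward-splat pixels into the new view; anything not covered is a hole.
    warped = np.zeros_like(color)
    covered = np.zeros((H, W), dtype=bool)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (cam_new[2] > 0)
    warped[v[ok], u[ok]] = color.reshape(-1, 3)[ok]
    covered[v[ok], u[ok]] = True
    return warped, ~covered
```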

[Image: NVIDIA Reflex 2 Frame Warp explained]


Here’s an example from Embark Studios’ THE FINALS, with Frame Warp, with and without in-painting:



The result is a frame which shows the freshest camera position, seamlessly inserted into the rendering pipeline.

Without Reflex, PC latency in THE FINALS on a GeForce RTX 5070 is 56 ms, using highest settings at 4K. With Reflex Low Latency, latency is more than halved to 27 ms. And by enabling Reflex 2, Frame Warp cuts input lag by nearly an entire frametime, reducing latency by another 50% to 14 ms. The result is an overall latency reduction of 75% by enabling NVIDIA Reflex 2 with Frame Warp.
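For reference, those percentages follow directly from the quoted numbers:

```python
# Sanity-checking the quoted THE FINALS figures (56 ms / 27 ms / 14 ms).
baseline, reflex, reflex2 = 56.0, 27.0, 14.0  # PC latency in ms
print(f"Reflex vs. off:      {1 - reflex / baseline:.0%} lower")   # ~52% ("more than halved")
print(f"Reflex 2 vs. Reflex: {1 - reflex2 / reflex:.0%} lower")    # ~48% ("another 50%")
print(f"Reflex 2 vs. off:    {1 - reflex2 / baseline:.0%} lower")  # 75%
```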

[Image: THE FINALS latency chart with Reflex 2 Frame Warp]



Reflex Low Latency mode is most effective when a PC is GPU bottlenecked. But Reflex 2 with Frame Warp provides significant savings in both CPU and GPU bottlenecked scenarios. In Riot Games’ VALORANT, a CPU-bottlenecked game that runs blazingly fast, at 800+ FPS on the new GeForce RTX 5090, PC latency averages under 3 ms using Reflex 2 Frame Warp - one of the lowest latency figures we’ve measured in a first-person shooter.



NVIDIA Reflex 2 is coming soon to THE FINALS and VALORANT, and will debut first on GeForce RTX 50 Series GPUs, with support added for other GeForce RTX GPUs in a future update.

 
Here’s an example from Embark Studios’ THE FINALS, with Frame Warp, with and without in-painting:




What a goofy demo. The "inpainting off" side seems to have artifacts around the gun while "inpainting on" doesn't, and obviously you don't see the mouse so the syncing of the two clips is just 'trust me bro'. Looks like sometimes the demo is two frames apart (at 60fps), so that seems like it is exaggerated for effect.

And what gets inpainted at the edge of the screen? Well who knows since they cropped the sides out in order to do the side-by-side comparison.

It's an interesting approach. It only covers the camera position and doesn't help any other 'input' but that is indeed the most important part for getting a good feel in an FPS. I like the way they're thinking but I'll be interested to see if it is noticeable once I actually get my hands on it.
 

mèx

Member
Latency aside, it seems it might work without noticeable image-quality penalties only if the FPS is quite high (>240?), as there are fewer gaps to fill in the image that way.

I wonder how it looks at 60 FPS: with low FPS you have high input latency, which is where the latency reducing feature is most needed.

Also curious to see how it works with wide fast sweep movements, which are common in fast paced multiplayer games.

And what gets inpainted at the edge of the screen? Well who knows since they cropped the sides out in order to do the side-by-side comparison.

You can see that in the full video at 2:34.
 
Last edited:

ArtHands

Thinks buying more servers can fix a bad patch
Guessing this is their answer for those worrying about increased latency from multi frame gen features
 
Latency aside, it seems it might work without noticeable image-quality penalties only if the FPS is quite high (>240?), as there are fewer gaps to fill in the image that way.

I wonder how it looks at 60 FPS: with low FPS you have high input latency, which is where the latency reducing feature is most needed.

Also curious to see how it works with wide fast sweep movements, which are common in fast paced multiplayer games.



You can see that in the full video at 2:34.
Ahh yes, I can see it being kind of fucky on the leading edge in that clip but not that bad. I wonder what an enemy is going to look like. Presumably invisible on the first frame he should be in? And, yeah, faster sweeps plus lower frames you'd think would be more of a problem.

I misunderstood the demo I was looking at. I thought they were trying to show off the latency improvement. But that part is entirely fake.
 

analog_future

Resident Crybaby
Nvidia's latency reduction efforts are what make GeForce Now by far the best game streaming option. We're quickly reaching the point where it's going to be as good as what we consider to be native if you have the right connection.
 

twilo99

Member
I need to actually play a game with all this tech to get a real feel for what it can do; otherwise it's impossible to pass judgement.

At this point there is just way too much software involved for my liking…
 

hinch7

Member
Gamechanger for competitive games. Pretty much what I've been waiting on for a couple of years.

While not quite the same implementation, making mouse movement independent is definitely a big step up.
 
Last edited:

SF Kosmo

Al Jazeera Special Reporter
They have been using this kind of tech for years on VR games to reduce the appearance of latency, since VR is so sensitive to lag. But this implementation seems to do a better job of avoiding artifacts.

The effect is probably pretty subtle in practice, and I think this is likely meant as a mitigation for the aggressive 3x and 4x frame gen tech they are touting on the new cards.
 

llien

Member
PC latency in THE FINALS on a GeForce RTX 5070 is 56 ms

One second contains, cough, 1000 milliseconds.
56ms latency, if the very next frame takes your mouse movement into account, means 1000/56 => 18 fps, cough cough

If you run game at, say, 100 FPS, 56ms latency means that 5 (!!!) frames were rendered IGNORING mouse movement.


Is that a "de-dumdum DLSS faux frames" feature? :messenger_beaming:

I mean, are they "defeating" the lag that was created by gloriously ahead of the curve eternally amazing DLSS4 FG? (4 stands for number of frames?)
 

marjo

Member
To me, the main benefit of high frame rate gaming is the increased responsiveness, not the smoother visuals (which hit diminishing returns after 90 fps or so). Even with this reflex technology, playing at '240'fps with multi frame gen will feel like, at best, a regular 60 fps game. Not sure I see the point.
 

SF Kosmo

Al Jazeera Special Reporter
One second contains, cough, 1000 milliseconds.
56ms latency, if the very next frame takes your mouse movement into account, means 1000/56 => 18 fps, cough cough

If you run game at, say, 100 FPS, 56ms latency means that 5 (!!!) frames were rendered IGNORING mouse movement.
No frames are "ignoring mouse movement," they are just not hitting your eyeballs until a few ms later. So you're reacting to images that are slightly behind the current game state, but you always react behind the current game state because human reflexes aren't instantaneous.

I mean, are they "defeating" the lag that was created by gloriously ahead of the curve eternally amazing DLSS4 FG? (4 stands for number of frames?)
No, more like preempting it? Like this corrects frames at frame time even when rendering is behind.

The Frame Gen x4 only adds like 7ms of latency compared to the old x2 method, so the gains here aren't just offsetting the added latency, they are still reducing it considerably.

Of course it's more about reducing the feel or perception of latency. This tech is good for things like aiming in shooters, probably much less useful for, say, a fighting game where you're trying to parry with precise timing or whatever. It's a mitigation, not a magic bullet.
 

proandrad

Member
Is Reflex better on the 4000 series cards? I have a 3000 series card and always disabled it because it causes visual stutters.
 
Last edited:

SF Kosmo

Al Jazeera Special Reporter
Lag measures diff between input and rendered response, human reflexes have nothing to do with it.
Well, human perception has everything to do with it. If we can do things fast enough, we can trick the brain into thinking it's instantaneous.

Generally speaking, anything under 15ms is totally imperceptible for people, even in scenarios like VR where people are at their most sensitive. But for TV gaming, people are a lot less sensitive, and 50ms is a pretty normal latency for most games you play, especially on consoles.

Before we get into "pre-empting it" it would be lovely to figure how one gets 56ms lag to begin with.
A lot happens between a player hitting a button and the images hitting your eyeballs. Wireless controllers add a bit of latency off the bat, then the game itself has to reach its next input scan, then the game logic has to run, the frame has to render, and the display has to show the images (displays often have like 10-20ms or more latency on their own).

What Reflex 2 does is kind of jump in the middle of this process and manipulate the most recent completed frame to look like what it thinks the image should look like based on player input. And this much faster process allows them to create a motion-to-photon latency that is below the threshold of human perception.
 
Last edited:

rofif

Can’t Git Gud
It's the old "buffer queue" setting that was there forever in NVCP settings, years ago.
We always used to change it from 3 to 1 frames so the GPU sends the frame ASAP.
That's not an issue with VRR displays anyway: you make a new frame and can deliver it as soon as it's ready. No waiting for refresh.
 

SF Kosmo

Al Jazeera Special Reporter
To me, the main benefit of high frame rate gaming is the increased responsiveness, not the smoother visuals (which hit diminishing returns after 90 fps or so). Even with this reflex technology, playing at '240'fps with multi frame gen will feel like, at best, a regular 60 fps game. Not sure I see the point.
Well I think that's the idea of Reflex 2, that it runs at frame time and manipulates each frame to respond to player input before the actual game gets a chance to, so actions like aiming and movement (really maybe only aiming and movement) feel more responsive.

But I do take your point about diminishing returns over 90 or 100hz. I am pretty framerate sensitive, and I don't notice much difference beyond 100fps.
 
Last edited:

rofif

Can’t Git Gud
To me, the main benefit of high frame rate gaming is the increased responsiveness, not the smoother visuals (which hit diminishing returns after 90 fps or so). Even with this reflex technology, playing at '240'fps with multi frame gen will feel like, at best, a regular 60 fps game. Not sure I see the point.
It will still make 30fps feel like 30fps. There is nothing to change the input latency. No magic to do that.
In fact, all the additional processing has a cost, so it will probably feel worse than 30fps.

BUT time warp is real and you can have your mouse feel like 120fps with the world updating at 30... but framegenned to 120... It's kinda insane.
Because everything is 30fps but it visualizes at 120 or whatever.
I think the timewarp is the biggest change if it's done well.
 

DirtInUrEye

Member
I'm confused. A different preview I read or watched yesterday said the improvements to Reflex wouldn't have much effect with FG. Maybe it was Tim at HUB, but I could be mistaken. So which is it?
 
Is Reflex better on the 4000 series cards? I have a 3000 series card and always disabled it because it causes visual stutters.

Not just me then. Recently had it with Hogwarts Legacy when using Reflex: the GPU would downclock to around 1200-1400MHz, which was fine for caves etc. but caused frame drops into the low 40's elsewhere. Turning up settings had no effect, and the '+Boost' option only saw the GPU clock up to around 1600MHz instead but still with frame drops. Turning off Reflex altogether saw the 3090 go all the way up to 1875MHz, and it now runs a smooth 60 outside of the odd drop into the mid 50's in Hogwarts. I've had similar issues in other games using Reflex, so I generally avoid it these days.
 

llien

Member
50ms is a pretty normal latency for most games you play, especially on consoles.
Yeah, but that's because consoles run at 30fps a lot.

Mice, at least the gaming ones, are about 1ms with all the bells and whistles, and irrelevant here.
I'd say the same about monitor lag (clearly, that cannot be fixed by GPU tech), but that has also been in the low single-digit ms range for quite a while.

If a game runs at 100fps (just my assumption), it takes roughly 10ms to render a single frame.
When the user moves the mouse, a frame is being rendered. Say it just started.
That's 10ms (5 on average) to finish the old frame.
Then another 10 to render the new one.
That's 10-20ms of minimum lag just in the CPU/GPU world, ignoring the small lag added by the monitor and the mouse itself.

Way below 56ms.

What Reflex 2 does is kind of jump in the middle of this process and manipulate the most recent completed frame to look like what it thinks the image should look like based on player input.
Hm. So "predict" how the next frame would look like, without actually rendering?
Is it really something that can be done without noticeable side effects?
 

DirtInUrEye

Member
Not just me then. Recently had it with Hogwarts Legacy when using Reflex: the GPU would downclock to around 1200-1400MHz, which was fine for caves etc. but caused frame drops into the low 40's elsewhere. Turning up settings had no effect, and the '+Boost' option only saw the GPU clock up to around 1600MHz instead but still with frame drops. Turning off Reflex altogether saw the 3090 go all the way up to 1875MHz, and it now runs a smooth 60 outside of the odd drop into the mid 50's in Hogwarts. I've had similar issues in other games using Reflex, so I generally avoid it these days.

I remember Reflex had the same undesirable effect in Robocop. Disabling it completely fixed my frame pacing. Damn, the trial and error of figuring that out was a pain.
 

SF Kosmo

Al Jazeera Special Reporter
Yeah, but that's because consoles run at 30fps a lot.

Mice, at least the gaming ones, are about 1ms with all the bells and whistles, and irrelevant here.
I'd say the same about monitor lag (clearly, that cannot be fixed by GPU tech), but that has also been in the low single-digit ms range for quite a while.

If a game runs at 100fps (just my assumption), it takes roughly 10ms to render a single frame.
When the user moves the mouse, a frame is being rendered. Say it just started.
That's 10ms (5 on average) to finish the old frame.
Then another 10 to render the new one.
That's 10-20ms of minimum lag just in the CPU/GPU world, ignoring the small lag added by the monitor and the mouse itself.

Way below 56ms.
Sure, an ideal PC setup is going to give you lower motion-to-photon latency, but my point is that, until you cross a certain threshold, it's very, very hard for most people to tell, so you have some overhead. For the average person, the extra 10ms of latency added by frame gen is not going to meaningfully offset the benefit of a smoother appearance. But of course preferences and sensitivities vary; that's why we have options.

Hm. So "predict" how the next frame would look like, without actually rendering?
Is it really something that can be done without noticeable side effects?
Yeah, it seems to basically take the screen image, motion vectors, and depth data, then correct the perspective, filling in occlusions with AI.

My understanding is that this really only works for correcting camera movement, so it's ideal for a first-person or third-person shooter, but maybe not so helpful for a game like Elden Ring or Street Fighter where you're reacting more to animation frames than aiming.
 

SF Kosmo

Al Jazeera Special Reporter
Isn't this already done with Asynchronous reprojection/warping in VR to increase responsiveness?
Yes, same concept, but the implementation is a bit more sophisticated thanks to AI voodoo.

Basic rotational reprojection always worked well in VR, but ASW (which handled rotational and positional reprojection) always had pretty noticeable artifacts that Nvidia's solution seems to handle much better by using AI to fill in occluded areas.
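For what it's worth, the rotation-only case is just a single homography, which is why it never produces disocclusion holes. A tiny sketch in my own notation, not any particular VR runtime's API:

```python
# Rotation-only ("timewarp"-style) reprojection: one 3x3 homography, no depth
# needed, hence no disocclusion holes. Illustrative notation only.
import numpy as np

def rotational_warp(K, R_delta):
    """Homography mapping old-frame pixel coords to the rotated view.

    K is the 3x3 camera intrinsics matrix; R_delta is the 3x3 rotation taking
    old-camera coordinates to new-camera coordinates.
    """
    return K @ R_delta @ np.linalg.inv(K)
```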
 

Three

Gold Member
Yes, same concept, but the implementation is a bit more sophisticated thanks to AI voodoo.

Basic rotational reprojection always worked well in VR, but ASW (which handled rotational and positional reprojection) always had pretty noticeable artifacts that Nvidia's solution seems to handle much better by using AI to fill in occluded areas.
I suppose this needs to be compared on how well it does the inpainting vs. other async reprojection algorithms, but it looks like it's basically just async reprojection/timewarp. Nothing stops identical software solutions on other GPUs with enough TOPS performance, I guess. Let's see how artifact-free it is in real-world tests.
 
No, more like preempting it? Like this corrects frames at frame time even when rendering is behind.

The Frame Gen x4 only adds like 7ms of latency compared to the old x2 method, so the gains here aren't just offsetting the added latency, they are still reducing it considerably.

Of course it's more about reducing the feel or perception of latency. This tech is good for things like aiming in shooters, probably much less useful for, say, a fighting game where you're trying to parry with precise timing or whatever. It's a mitigation, not a magic bullet.

If I'm understanding correctly, this is really input prediction for camera shifts that doesn't result in a true input lag reduction. So the framegen input lag will still be there, but camera movements will feel better and have less of an impact. Less laggy "fishtail" effect, yet all of the input lag increases from button presses will still be there. Do I have this right?
 

SF Kosmo

Al Jazeera Special Reporter
If I'm understanding correctly, this is really input prediction for camera shifts that doesn't result in a true input lag reduction. So the framegen input lag will still be there, but camera movements will feel better and have less of an impact. Less laggy "fishtail" effect, yet all of the input lag increases from button presses will still be there. Do I have this right?
So the thing you have wrong is that it isn't predicting input, it's predicting output based on realtime input. So, for example, in an FPS it's looking at your current mouse position and bringing the game image in line with THAT rather than where it was when the frame started rendering.

So in that situation, what you're getting is actually going to make the game more responsive in terms of gameplay.

But it's just for camera position/aiming. It's not going to mitigate a game like Hi-Fi Rush, for example, where latency will still push visual cues behind the music and input, because those aren't about camera position.
 
Last edited: