
Should AI Frames be considered legitimate in FPS Performance Comparisons

Should AI-generated frames be counted?

  • Yes - A frame is a frame man
    Votes: 55 (17.0%)
  • No - Fake frames should not be considered
    Votes: 206 (63.6%)
  • Other/Depends
    Votes: 24 (7.4%)
  • Both should be considered
    Votes: 39 (12.0%)

  • Total voters: 324

Myths

Member
One thing I don't see talked about in the discussion around generated frames is the additional CPU hit that would be required to hit those same higher frame rates with pure rasterization.

There is bottlenecking caused by older CPUs paired with higher-end new GPUs. But in most games you have a single thread, or maybe two or three, where the main processing takes place, with some additional work divided out to other cores, which fly through it and then sit idle. The frame rate you can hit will still be capped by those main threads.

The single frame generation we've had this last generation of video cards had the option of using motion vector data to help it decide how objects were moving and optimize the generated frame. Now it can take input data as well, which will affect everything in the scene and mimic the smooth motion benefit we get from higher frame rates.
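To illustrate that bottleneck, here's a toy model with made-up numbers (not profiler data): frame time is bounded by the slowest serial stage, so once the main game thread dominates, a faster GPU stops raising the frame rate.

```python
# Toy model of a CPU-bound game: frame time is limited by the slowest stage.
# All timings below are invented purely for illustration.

def fps(cpu_main_thread_ms: float, gpu_ms: float) -> float:
    """Achievable frame rate when main-thread CPU work and GPU work overlap."""
    frame_ms = max(cpu_main_thread_ms, gpu_ms)
    return 1000.0 / frame_ms

print(fps(cpu_main_thread_ms=10.0, gpu_ms=14.0))  # ~71 fps: GPU-bound
print(fps(cpu_main_thread_ms=10.0, gpu_ms=6.0))   # 100 fps: now CPU-bound
print(fps(cpu_main_thread_ms=10.0, gpu_ms=3.0))   # still 100 fps: faster GPU changes nothing
```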

It will be interesting to see if the market moves the way NVIDIA is thinking it will. I bought the original GeForce 256 at launch and it was a game changer. Being able to offload things that the CPU previously had to do suddenly made games play so much better. But will this AI rendering do the same to the market? Remix being able to replace a 3D object in a game in real time without modifying the executable leads me to think we might be seeing something along the lines of the original Talisman project Microsoft was undertaking during the extremely early days of Direct3D vs OpenGL vs GLIDE.

With raster you are pumping out all of the draw calls to form a full scene every frame. Talisman was object-based: it could take a 3D object and create a 2D sprite of it. If the next frame had the same angle, lighting, etc., it just output the saved sprite and spent no time re-rendering it. So this isn't just taking a full image and applying a fancy Photoshop filter. You could have things closer to the camera moving faster than things farther away to get a proper parallax effect. We could be on the verge of extremely fluid motion, especially if the rendering is aware of rigging and animation, and animation was one of the things Jensen mentioned. People forget that the early days of 3D were 15-20 FPS, and the "magic" the Voodoo 1 brought was a solid 30 FPS. It's only recently that a solid 60 FPS on PC became the thing that elevated it over consoles. In simpler e-sports games made to run on everything you can get 240 or 360 FPS. Hitting those rates in extremely complex scenes needs something more than straight brute force.
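As a rough sketch of that object-based idea (my own simplification, not the actual Talisman pipeline): render an object to a 2D sprite once, key it on the parameters that change its appearance, and reuse it until those change.

```python
# Simplified sprite/impostor cache in the spirit of the object-based approach described above.
# render_object_to_sprite() is a stand-in for a real 3D render pass; everything here is illustrative.

render_calls = 0

def render_object_to_sprite(obj_id: str, angle_deg: float, light: int) -> str:
    global render_calls
    render_calls += 1
    return f"sprite({obj_id}, angle={angle_deg}, light={light})"  # pretend this is a 2D image

sprite_cache: dict[tuple, str] = {}

def get_sprite(obj_id: str, angle_deg: float, light: int) -> str:
    key = (obj_id, round(angle_deg, 1), light)
    if key not in sprite_cache:
        # Only pay the 3D rendering cost when the object's appearance actually changes.
        sprite_cache[key] = render_object_to_sprite(obj_id, angle_deg, light)
    return sprite_cache[key]  # composite this 2D sprite into the frame, offset for parallax

# Two consecutive frames with the same view of the same object: one render, one cache hit.
get_sprite("crate", 37.0, light=4)
get_sprite("crate", 37.0, light=4)
print(render_calls)  # 1
```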
In addition to the reduced power consumption, I see them leveraging this technology for throughput as a benefit. The complaints are hardly warranted, especially when the drawbacks apply to fringe cases such as competitive gaming. These models/transformers upscale with incredible accuracy and precision, using much of the same scene data from the buffer, within a fraction of the time between frames polled from user input.
 
Last edited:

readonly

Member
Probably been said, but both should be measured, with RT and without. Then, as best as possible, provide captures so the user can hopefully see the differences between on and off, card to card, manufacturer to manufacturer. Then measure input latencies. If you're the kind of user who won't research and then make a decision about what to buy, that's your problem; just go for the bigger number and move on with your life.
 

coolmast3r

Member
Now, I'm not someone who is into conspiracy theories, but I have to say, I've heard a worrying number of "tech commentators" (some of the biggest names on YouTube among them) jump on this "all frames are fake" bandwagon after Nvidia's keynote. Raster frames are now suddenly just as fake as AI-generated ones, and no one seems to even try to determine why people call raster frames real and AI ones fake. To me personally, this is very simple: if a given frame directly contributes to reducing input latency (which a raster frame does, meaning a game gets progressively more responsive to input the more fps you get), then it's a real frame, and in my opinion it is the only legitimate metric in fps performance comparisons.
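To put rough numbers on that argument (purely illustrative figures, not benchmark data): frame generation raises the display rate, but the game still only samples input once per rendered frame, so responsiveness tracks the base rate.

```python
# Illustrative numbers only; real pipelines add render-queue, frame-gen buffering and display time on top.

def input_sample_interval_ms(rendered_fps: float) -> float:
    """How often the game actually polls input and advances the simulation."""
    return 1000.0 / rendered_fps

print(input_sample_interval_ms(120))  # ~8.3 ms between input samples at 120 rendered fps
print(input_sample_interval_ms(30))   # ~33.3 ms at a 30 fps base, even if 120 frames are displayed
```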
 

SScorpio

Member
Now, I'm not someone who is into conspiracy theories, but I have to say, I've heard a worrying number of "tech commentators" (some of the biggest names on YouTube among them) jump on this "all frames are fake" bandwagon after Nvidia's keynote. Raster frames are now suddenly just as fake as AI-generated ones, and no one seems to even try to determine why people call raster frames real and AI ones fake. To me personally, this is very simple: if a given frame directly contributes to reducing input latency (which a raster frame does, meaning a game gets progressively more responsive to input the more fps you get), then it's a real frame, and in my opinion it is the only legitimate metric in fps performance comparisons.
But what about Reflex 2, which will take your input data and use it in the "fake" generated frames? The game should feel even more responsive than running at the lower frame rate with the new frame generation disabled.
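For anyone wondering what "using input data in generated frames" can mean in practice, here is a bare-bones sketch of late-input image warping in general (my own simplification, not Nvidia's actual Reflex 2 implementation; the pixels-per-degree figure and the frame are made up):

```python
import numpy as np

# Sketch of late-input reprojection: nudge an already-rendered frame by the newest camera input
# just before display. Real implementations also inpaint the edge that gets exposed by the shift.

PIXELS_PER_DEGREE = 12.0  # assumed mapping from camera yaw to on-screen horizontal shift

def warp_frame(frame: np.ndarray, rendered_yaw_deg: float, latest_yaw_deg: float) -> np.ndarray:
    """Shift the finished frame horizontally to reflect input received after it was rendered."""
    shift_px = int(round((latest_yaw_deg - rendered_yaw_deg) * PIXELS_PER_DEGREE))
    return np.roll(frame, -shift_px, axis=1)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)                       # placeholder 1080p image
warped = warp_frame(frame, rendered_yaw_deg=90.0, latest_yaw_deg=90.5)  # ~6 px of extra camera turn
```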
 

Elog

Member
It is like any other graphical setting where you can trade image quality against FPS. In a basic benchmark between GPUs it should not be enabled.

The problem is that it is often discussed as something other than a graphical setting of importance (which is exactly what it is).

It's fine to include as a second benchmark where frame generation/upscaling technologies are tested against each other, as long as image quality commentary/analysis is attached to it (since image quality is reduced).
 

SweetTooth

Gold Member
They should be the last option to consider, for extreme performance cases. Priority should be:

- Native + high stable frames.
- Upscaled + high stable frames.
- Native + high unstable frames.
- Native + AI frames.

Etc
 
Last edited:

llien

Banned
Raw performance is where the real comparisons are.

Upscaling is a moving target that gets things blurred.
As there might be (?) enough people who have it on all the time, it's kinda sorta maybe OK as additional info.

But it's no longer apples to apples.
 

llien

Banned

coolmast3r

Member
But what about Reflex 2, which will take your input data and use it in the "fake" generated frames? The game should feel even more responsive than running at the lower frame rate with the new frame generation disabled.
Thing is, we are talking about a manufactured solution to a problem that realistically should not exist in the first place. Besides, Reflex 2 isn't out yet and we don't have objective third-party data on how it works in the real world. If it truly holds up to the claims made by Nvidia themselves and does not exhibit some ugly artifacts, then sure, I guess it deserves a place in the "bandaid" section of graphics tech.
 

RoboFu

One of the green rats
People act like they are fake or free.. they aren’t. There’s complex physical hardware in the GPU that is producing those frames.

It’s different, but it is legitimate, and you’ll be seeing it only grow in usage and efficiency with time.

Stop being a boomer.

If your game is at 30 fps, no matter how many FAKE frames you make, your game will still control like it's 30 fps.

Now these people are creating ways for the game to half play itself to make up for that, but that affects actual control in an FPS.
 

Elios83

Member
No. Fake frames come at the cost of temporal artifacts and input lag; actually rendered frames do not. There is no magic, AI or not.

Comparisons should be done on the same objective metrics, and then you can say that this stuff is a really good plus to have.
Otherwise you open the door to all kinds of misleading marketing claims.
Also, different competitors could introduce frame rate boost modes with completely different implementations that look like shit and are used purely for marketing purposes to claim performance victories.
 
Last edited:

rm082e

Member
There are different audiences who read reviews, so I choose "Other/Depends".

If you're a person who's into competitive multi-player, you will probably be sensitive to input latency and you want to know the raw frame rate running at a native resolution.

If you're a hobbyist who just enjoys playing and isn't concerned with ranked competitive play or tournaments, then you might not even notice the difference, so those AI generated frames are a bonus, not "fake".

As someone who plays single-player games, I still always want to know raw performance numbers (Average, and 1% Low) at the relevant native resolutions.
 

SmokedMeat

Gamer™
No. Frame rates are decided by rasterization.

Otherwise you’ll be believing Nvidia bullshit like the 5070 being on par with a 4090.

If you want to do separate benches with frame gen, that’s fine. We’ve had the same with DLSS/FSR/XeSS.
 
Last edited:

Myths

Member
If your game is at 30 fps, no matter how many FAKE frames you make, your game will still control like it's 30 fps.

Now these people are creating ways for the game to half play itself to make up for that, but that affects actual control in an FPS.
Yes, and the term to measure that is input latency, not fps. Stop conflating the two terms, which is what you and a number of other people keep doing. They're correlated, but with the advent of AI they're no longer intrinsically tied. Just as the person you're responding to stated, tensor cores exist on board the GPU; they're physical components computing the frames based on scene data, not just "software" fed raster images after 2D projection is already done. This isn't simple Gigapixel AI upscaling or static image prediction.
 

coolmast3r

Member
People act like they are fake or free.. they aren’t. There’s complex physical hardware in the GPU that is producing those frames.

It’s different, but it is legitimate, and you’ll be seeing it only grow in usage and efficiency with time.

Stop being a boomer.
No one is arguing that AI frames are "free" (whatever that means) or that there isn't hardware in these GPUs responsible for their generation...

AI frames will or will not become legitimate in the eyes of the general public without the help of techbros telling everyone that "AI is the future". At the end of the day, it all comes down to whether people like this overwhelming reliance on AI-generated frames to reach an illusion of performance. We've had some rough years with widespread game optimization issues, and Nvidia being this aggressive and arrogant with AI frame generation just looks like an enabler of that trend.

Also, being a boomer ≠ being able to tell the difference between the responsiveness of a game at 120 real raster fps vs 120 fake frames generated from 30 raster frames (yuck) or 60 (best case scenario, still not good input-lag-wise).
 

Soodanim

Member
The fairest and most honest way is to show both, with and without. You're testing the raw power of the GPU first, then the additional functionality.
 

Magic Carpet

Gold Member
Currently, with my RTX 4070, I vote no; I don't like the way it looks.
Maybe in the future, if fake frames become unnoticeable, I'll vote yes.
 
AI is the future. Rasterization is archaic.
Just out of interest, if Nvidia released a card with identical rasterisation to the previous card, but with exclusive DLSS features, would you buy it? Because you're essentially paying for nothing.

As soon as they started using AI upscaling in performance tests, it blurred the lines of what you're actually paying for.
 

Kataploom

Gold Member
So say we remove AI upscaling, frame gen and reflex. You're left with native

How's that performance and latency?

I tested it with the Spyro Trilogy yesterday using LSFG on my 6700 XT. At 4K I'm getting 65-72 FPS with it disabled; once I enable it, I get mid-40s internally with high input latency and an output of approximately 83 FPS. The motion seems similar, BTW, but I can easily tell it's running at a much lower frame rate, not to mention the judder when panning the camera.

As I said previously, it is not improving performance, it is just smoothing the animation on screen. It is great for taking advantage of high refresh rate screens once you reach around 80 fps or so, to make it LOOK LIKE 120+ FPS, but that's it.

It is basically a post-processing effect and it shows. If I could use it on 30 FPS games, I would love to have Xenoblade's or Zelda's motion smoothed by it (hopefully Switch 2 will allow us something like that), but sadly anything around 40 FPS is unplayable to me that way; the judder is comparable to the one from my TV's motion interpolation feature. Not to mention other artifacts.

I repeat, I tested it yesterday in that game and also Yakuza Kiwami 2. I also know LSFG has much more latency, but there are artifacts that make the resulting image not optimal in all cases. The only thing that is globally optimal is raster performance: frames generated natively by the game logic.
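Running the rough math on those figures (using the numbers quoted above; LSFG's own buffering latency comes on top of this):

```python
# Rough frame-time math on the figures reported above; purely arithmetic, no new measurements.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

print(frame_time_ms(68))  # ~14.7 ms per rendered frame without LSFG (65-72 FPS reported)
print(frame_time_ms(45))  # ~22.2 ms per rendered frame with LSFG enabled (mid-40s reported)
print(frame_time_ms(83))  # ~12.0 ms per *displayed* frame with LSFG (the ~83 FPS output)

# The displayed rate goes up, but each real simulation step now takes ~50% longer,
# which lines up with the added input latency and judder described in the post.
```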
 
Last edited:

Soodanim

Member
But what about Reflex 2, which will take your input data and use it in the "fake" generated frames? The game should feel even more responsive than running at the lower frame rate with the new frame generation disabled.
Unless you're Jensen and you've tested functional builds, all you're working with is hope and marketing.
 

Larxia

Member
No, and DLSS shouldn't either unless it's actually mentioned in the comparison.
If you compare something running natively, both in resolution and framerate, to something running upscaled and with generated frames, then of course the comparison wouldn't be fair.

If the comparison is to show the "benefit" of DLSS and such, OK, and if it's to compare two games using these techs, OK.
Just don't compare them while pretending the DLSS / frame-generated one is running natively when it's not; it should always be mentioned whether it's native or not.

I hate these new techs :messenger_loudly_crying:
edit: or more accurately, I hate what these new techs are doing to the industry and how they are being used.
 
Last edited:

peek

Member
For now... NO. However, once the tech gets better, and better, and better...

You won't even be able to tell. But yeah, the added latency is its biggest problem atm.
 

Crayon

Member
You have to see both. I am not interested in frame gen at all, but there's a chance we get to a point where most graphics are AI-rendered. Not anytime soon, but this would mark a trend in that direction.

As of now, though? Raw performance means a lot, especially for anyone buying cards for MP games. If there's anywhere the frames really are fake, it's there.
 

Dacvak

No one shall be brought before our LORD David Bowie without the true and secret knowledge of the Photoshop. For in that time, so shall He appear.
It depends on two factors:

1. Can I tell it’s AI during gameplay? (Meaning does it decrease the quality of the visuals to my naked eye)

2. Does it add any latency? (+10ms or more is my threshold)

If either answer is yes, then I keep frame gen off.

But either way, I’d like analyses to include BOTH ways.
 
Last edited:

Knightime_X

Member
Most of you will have to come to terms with the fact that eventually there will be cards barely capable of running Doom 3 with raw power, but able to produce graphics on par with or better than those found in movies through AI.
It's a different form of technology you're not accustomed to, yet.

Hopefully, AI will make graphics cards cheaper in the long run.
But right now, they're mostly hybrid GPUs.
 
Last edited:

JRW

Member
Not in benchmarks. I'm more interested in raw / native performance, especially when comparing to other GPUs that don't have this feature.
 
Voted no, but where I think frame gen is useful is in cutscenes or whatever is non-interactive.
Input latency is the most important benefit of higher legit frames.
 
Normally I'd say no, but I would've never had a smooth experience playing Cyberpunk 2077 with path tracing in all its glory without it. Optimizing games is what matters most, so it depends.
 
Last edited:

AGRacing

Member
I think we're almost there. Nvidia seemed to boost their AI power by a large margin in the new cards, but didn't really make a leap in raster. Two halves of a GPU working in parallel to produce frames... it makes me think they may have an interesting long-term vision in mind.

I've seen people AI-enhance screenshots of games, making faces look more real, etc. I've seen people AI-filter entire game videos to make them look like real life.

I think we're headed for AI completing the frame instead of "faking" frames in between real ones. Perhaps their plan is to eventually raster an approximation of reality and then use AI to make it look truly real, perhaps starting with things like faces and eventually the entire frame.
 
More than the number of frames generated, I would consider "non-hallucinating" frames when testing the generative power of the AI system and the model it was trained with: well-generated shadows and such.
 
No. Because if you look and pay enough attention, you can see that the image really isn't that great. There are blooming effects and tearing with those fake frames.
 
Right now the state of the art is that fake frames can't be done without a certain degree of image degradation and latency increase, so the obvious answer is:


Fuck no!
 

proandrad

Member
The quality of frame generation should be included in reviews if it's a main feature of a new product. Nvidia's exclusive frame generation is a much better feature than the competition's, and it is a selling point. But benchmarks and performance comparisons should always be at native resolution with no frame generation.
 
Last edited: